| Attachment | Size |
|---|---|
| R script with data for MetaSEM forum.R | 13.91 KB |
| R script with data for MetaSEM forum.pdf | 172.71 KB |

Dear Mike Cheung,

I am not a statistician, and I am a beginner in meta-analysis. I am struggling to perform an OSMASEM with the metaSEM R package, and I have a few questions.

1) I am not sure whether the rate of missing data is too high for some parameters and, consequently, whether the results are reliable (and thus whether this research is publishable). Do you have any suggestions on how to judge whether the rate of missing data is too high?

2) I hope I have not made any mistakes in my script and that I performed the OSMASEM correctly.

3) I use a categorical moderator. Is it appropriate to standardize this moderator? I think you wrote that standardizing moderators improves numerical stability. Should one use standardization for continuous moderating variables only?

4) I have a categorical moderating variable (event valence) with two categories, "failure" vs. "others", coded -1 and 1 respectively.

The results of the OSMASEM with the moderator include several matrices. Is it right that:

The A0 matrix gives the betas of the model at the mean value of the moderator, which may be meaningless (e.g., for gender coded male = -1 and female = 1).

The A1 matrix gives the betas for the moderator effects on the parameters.

The A0 - A1 matrix gives, in our case, the betas for the -1 ("failure") value of the moderator (or -1 SD for continuous moderators).

The A0 + A1 matrix gives, in our case, the betas for the +1 ("others") value of the moderator (or +1 SD for continuous moderators).

5) I do not understand why R2 can be higher than zero for a parameter (a correlation) on which the moderator effect was not significant.

6) Imagine that I have a categorical moderator with three categories A, B, and C, coded 0, 1, and 2, respectively. Regarding the significance of the differences between the three categories, the OSMASEM with the moderator tests category A (coded 0) versus the other two categories (B and C, coded 1 and 2). Would it be pertinent to change the coding in order to test each category against the other two? For example, with the same three categories, one could change the codes from 0, 1, and 2 to 1, 0, and 2 in order to test category B (coded 0) against the other two (A and C, coded 1 and 2). I hope this last question is understandable.

Best regards,

L. B.

Dear L. B.,

1) Studies in TSSEM and OSMASEM are treated as if they were subjects in conventional SEM, so it seems reasonable to apply the traditional views on missing data here. The following R code may give you a better perspective on the missing-data patterns.
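For example, metaSEM provides `pattern.na()` and `pattern.n()` to summarize the missing-data patterns across studies. A sketch, assuming the current metaSEM API; the bundled Digman97 example dataset stands in for your own list of correlation matrices and sample sizes:

```r
library(metaSEM)

## Digman97 is an example dataset shipped with metaSEM; replace its
## $data (a list of correlation matrices with NAs for missing cells)
## and $n (a vector of sample sizes) with your own data.

## Number of studies in which each correlation is missing:
pattern.na(Digman97$data, show.na = TRUE)

## Number of studies in which each correlation is present:
pattern.na(Digman97$data, show.na = FALSE)

## Accumulated sample size per correlation:
pattern.n(Digman97$data, Digman97$n)
```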

2) I have not checked the script in detail. It seems that the first few models work.

3) and 4) Including moderators is similar to testing a cross-level interaction in multilevel models. If your moderator is binary, I do not see why it should be standardized. You may use one of these two options:

a. A0 (intercept) + A1 (a 0/1 indicator for one group): A0 represents the reference group, and A1 represents the difference between the two groups.

b. A0 (all 0s) + A1 (a 0/1 indicator for one group) + A2 (a 0/1 indicator for the other group): A1 and A2 represent the two groups directly.
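A minimal base-R sketch of the two coding options; the `valence` values below are made-up for illustration:

```r
## Hypothetical binary moderator with groups "failure" and "others".
valence <- c("failure", "others", "others", "failure")

## Option a: intercept + one 0/1 indicator.
## A0 then carries the reference group ("failure"),
## and A1 carries the difference between the groups.
x1 <- as.numeric(valence == "others")

## Option b: no intercept, one 0/1 indicator per group.
## A1 and A2 then carry the two group-specific coefficients directly.
g1 <- as.numeric(valence == "failure")
g2 <- as.numeric(valence == "others")
```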

If your moderator is a continuous variable, you may center (or standardize) it first. Then:

A0 gives the estimated coefficients when the moderator is at its average.

A1 gives the change in the estimated coefficients when the moderator increases by 1 unit.

To make the interpretation easier, you may calculate the SD of your moderator, say sd. Then you may calculate A0 - 1sd*A1 and A0 + 1sd*A1.

5) Unlike the concept in regression analysis, the idea of R2 in meta-analysis is similar to that in multilevel models. The R2 can be larger than 1 in meta-analysis and multilevel models. The presence of missing data in the meta-analysis may make the estimated R2 behave even worse.
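The A0 - 1sd*A1 and A0 + 1sd*A1 computation is plain matrix arithmetic. A numeric illustration; the matrices below are made-up stand-ins for the estimated A0 and A1 of an OSMASEM fit:

```r
## Illustrative numbers only: A0 and A1 stand in for the estimated
## intercept and moderator-effect matrices from an OSMASEM fit.
A0 <- matrix(c(0, 0.40,
               0, 0),    nrow = 2, byrow = TRUE)
A1 <- matrix(c(0, 0.10,
               0, 0),    nrow = 2, byrow = TRUE)
sd <- 1.5  # SD of the (centered) continuous moderator

A0 - sd * A1  # implied coefficients at 1 SD below the moderator mean
A0 + sd * A1  # implied coefficients at 1 SD above the moderator mean
```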

6) If you have a categorical variable with three levels, you should not code it as 0, 1, and 2. You may create two dummy codes instead.
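In base R, the two dummy codes can be built like this; the group labels below are hypothetical:

```r
## Hypothetical three-level moderator with categories A, B, and C.
group <- c("A", "B", "C", "B", "A", "C")

## Two dummy codes with A as the reference category:
d1 <- as.numeric(group == "B")  # 1 if B, else 0
d2 <- as.numeric(group == "C")  # 1 if C, else 0

## Equivalently, model.matrix() builds intercept + dummies from a factor;
## changing the reference level with relevel() tests a different category
## against the others.
X <- model.matrix(~ factor(group, levels = c("A", "B", "C")))
```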

Best,

Mike