Attachment | Size
---|---
Dabiriyan.sav | 1.95 KB
Dabiriyan.R | 11.14 KB

Dear Mike & colleagues,

I conducted moderator analyses with OSMASEM using several moderators. When I ran OSMASEM with these moderators simultaneously versus separately, it produced different estimates for the moderators. Which set of results should be preferred?

I attached R-code and data.

Thanks for your time and patience.

Dear Dabiriyan,

This is similar to the case in multiple regression: the results are likely to differ when you test all predictors together versus one predictor at a time. In your case, only Individualism is statistically significant, so you may focus on it.
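The two approaches can be sketched with metaSEM as follows. This is a hedged illustration, not the attached script: `RAM1`, `my.df`, and the moderator names `Individualism` and `PowerDistance` are assumed stand-ins for your own objects, with `RAM1` coming from `lavaan2RAM()` and `my.df` from `Cor2DataFrame()` with the moderator columns appended to `my.df$data`.

```r
library(metaSEM)

## Ax matrices: one per moderator, with "0*data.<name>" marking the
## moderated path (a 2x2 model with y regressed on x, for illustration).
Ax.ind <- matrix(c(0, 0,
                   "0*data.Individualism", 0), nrow=2, ncol=2, byrow=TRUE)
Ax.pd  <- matrix(c(0, 0,
                   "0*data.PowerDistance", 0), nrow=2, ncol=2, byrow=TRUE)

T0 <- create.Tau2(RAM=RAM1, RE.type="Diag")

## (a) One moderator at a time:
M.ind   <- create.vechsR(A0=RAM1$A, S0=RAM1$S, Ax=Ax.ind)
fit.ind <- osmasem(model.name="Individualism only", Mmatrix=M.ind,
                   Tmatrix=T0, data=my.df)

## (b) Both moderators simultaneously (Ax given as a list):
M.all   <- create.vechsR(A0=RAM1$A, S0=RAM1$S, Ax=list(Ax.ind, Ax.pd))
fit.all <- osmasem(model.name="Both moderators", Mmatrix=M.all,
                   Tmatrix=T0, data=my.df)
summary(fit.all)
```

As in multiple regression, the coefficients in (b) are partial effects adjusted for the other moderator, so they generally differ from the separate fits in (a).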

Best,

Mike

Dear Mike,

Thank you very much for your reply. I really appreciate it.

Dear Mike,

I have a fake dataset that I created to try out the OSMASEM models. There are missing correlations and missing values in the moderator. My questions are:

Can OSMASEM handle missing values in the moderator? The code runs with missing moderator values, but how are they handled?

I followed your OSMASEM moderation code in 'MASEM on Nohe et al. (2015) data' (Suzanne Jak and Mike Cheung, June 17, 2020). For the moderation effect, you have four lines of results corresponding to the four paths in Ax, so I expected three lines of results since I have three paths in my Ax. However, I got only one line, named 'moderated':

```
> summary(osmasem2)

Summary of moderating all

free parameters:
       name  matrix row col   Estimate Std.Error A      z value  Pr(>|z|)
1    medONx      A0 med   x -0.2116346 14.247970   -0.014853667 0.9881489
2      yONx      A0   y   x  0.8599042  5.358775    0.160466578 0.8725135
3    yONmed      A0   y med  0.2461819 26.169723    0.009407127 0.9924943
4    xWITHx      S0   x   x  0.2009616 25.820149    0.007783129 0.9937900
5 moderated     Ax1   2   1  0.1000000        NA !           NA        NA
6    Tau1_1 vecTau1   1   1 -2.3411371  6.558308   -0.356972730 0.7211122
7    Tau1_2 vecTau1   2   1 -0.8892047 15.058792   -0.059048869 0.9529132
8    Tau1_3 vecTau1   3   1 -1.0218680 13.109379   -0.077949382 0.9378683
```

Both the name of the term and the number of terms are incorrect.

I guess the fake data caused the NAs, but I am not sure about the 'moderated' term. I am using OpenMx version 2.18.1, whereas you used version 2.17.4; does that matter?

Thank you very much.

Hi Ya,

1) OSMASEM treats the moderators as definition variables. Therefore, NA is not allowed in the moderators.

2) I have made some changes in your script. Please see the attached one.
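Since the moderators enter OSMASEM as definition variables, one pragmatic workaround (a sketch, not taken from the attached script; `my.df` and the moderator name `mod` are hypothetical) is to drop studies with a missing moderator before fitting:

```r
## my.df is assumed to be the list returned by Cor2DataFrame(), with the
## moderator column 'mod' appended to my.df$data by hand.
keep       <- !is.na(my.df$data$mod)   # studies with an observed moderator
my.df$data <- my.df$data[keep, ]       # drop the rest before calling osmasem()
```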

Best,

Mike

Dear Mike,

Is it possible to do a subgroup/moderator analysis at the second stage of MASEM using `wls()`?

More generally, is there an example, tutorial, or other resource showing how to do that in two-stage MASEM?

--Thank you

You can convert the models to mxModel objects and conduct a multiple-group analysis, but you will need to calculate the goodness-of-fit indices manually.

Thanks so much. So, what you demonstrated is essentially the extent to which users can perform moderator analysis in the context of two-stage MASEM, right?

Also, is there a way to compare `b1a` and `b1b`? (Also, is there a reason the CIs are `NA`?) Finally, can `model1` and `model2` be the same (but with different data) to compare how all the loadings change across the two models?

The metaSEM package mainly uses the OpenMx package to conduct meta-analyses. If you want to run analyses unavailable in metaSEM, you may need to use the OpenMx package directly.

It is a multiple-group SEM, or a subgroup analysis in meta-analysis terms.

In the example, the created object `wls.model` is an MxModel object in OpenMx, so you may perform standard OpenMx analyses on it. For example:

1) Requesting CIs: mxRun(..., intervals=TRUE)

2) Comparing b1a and b1b: a nested-model comparison with a chi-square statistic.

3) Using the same model in model1 and model2: yes, you may replace model2 with model1.
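Point 2) might look something like the following sketch, assuming `fit1` is the fitted multiple-group model containing parameters labelled `b1a` and `b1b` (the model name `equal_b1` is made up for illustration):

```r
library(OpenMx)

## Nested model: give b1a and b1b one shared label, i.e., constrain b1a = b1b.
fit0 <- omxSetParameters(fit1, labels=c("b1a", "b1b"),
                         newlabels="b1", name="equal_b1")
fit0 <- mxRun(fit0)

## Chi-square difference test between the free and constrained models.
mxCompare(fit1, fit0)

## Likelihood-based CIs for the two coefficients (cf. point 1):
## fit1 <- mxRun(mxModel(fit1, mxCI(c("b1a", "b1b"))), intervals=TRUE)
```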

Thanks so much.

Mike, is there a reason I can't see the path coefficient estimates for "Group2"?

The parameters are identical in both groups, so only those in the first group are reported.

Thanks, I think you mean something like the below, if full models are to be compared for each group?

But is there any shortcut to avoid repeating exactly the same model with different suffixes (e.g., `b1a` vs. `b1b`)? For example, a shortcut using `lavaan2RAM(..., ngroups=2)`, etc.?

Dear Mike,

I was wondering why I'm getting a warning in `wls()` saying: `The variances of the dependent variables in 'Smatrix' should be free.`? If this is a harmless warning, how can I suppress it?

"Decoding" regresses on "Meta." If this is what you want, you may ignore the warning.

Thanks, Mike. Can you please clarify why "Decoding" being regressed on "Meta" generates that warning?

The warning is triggered by the checkRAM() function, which runs a few basic checks on the RAM model. Specifically, the issue relates to the "Decoding" variable: it is a dependent variable, so its error variance should be free. checkRAM() flags this, but wls() is smart enough to handle it. Therefore, you may safely ignore the warning if this is your intended model.

Thanks so much, Mike. Just curious, how could I let the error variance of "Decoding" (the dependent variable) be free in the model syntax? Should I add `'Decoding ~~ Decoding'`?

The metaSEM package uses the lavaan2RAM() function, which internally calls lavaan::lavaanify(), to convert the lavaan model to the RAM specification. If you need to free the error variance of "Decoding," you can use the following code. If you are curious why this step is necessary, please see the help page of lavaan::lavaanify().
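A guess at the kind of code being referred to (the variable names "Meta" and "Decoding" come from the thread; whether this matches Mike's attachment exactly is an assumption):

```r
library(metaSEM)

## Free the error variance of the dependent variable explicitly, and use
## std.lv=FALSE when converting the lavaan syntax to RAM.
model <- "Decoding ~ Meta
          Decoding ~~ Decoding"
RAM1 <- lavaan2RAM(model, obs.variables=c("Meta", "Decoding"), std.lv=FALSE)
```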

Mike, thanks. It seems that you also set `std.lv=FALSE`, whereas the default is `std.lv=TRUE`. Regardless, the bottom line is that these warnings won't have any impact on the output, as `wls()` will let the error variances of the dependent variable be free.