Attachment | Size
---|---
ACE_models.R | 4.41 KB

Hi OpenMx Community!

I just ran a single-trait ACE model with one observed variable and no definition variables. I used mxRefModels to obtain the Saturated and Independence models, and I wanted to see whether I get similar values when I define the Saturated and Independence models by hand.

My results differ in basically every respect. Is my Saturated and Independence model definition wrong (see attached script), or is this a limitation of mxRefModels?

- The df is 6 vs. 3: I guess mxRefModels includes the A, C, and E parameters, which is why it is 3 more. Should they be included in the Saturated model?

- My independence and saturated fit values are different, so all the fit statistics differ as well.

```
# results of: summary(ACEFit, refModels = Sat_models)
observed statistics:  198
estimated parameters:  4
degrees of freedom:  194
fit value ( units ):  736.6658
saturated fit value ( units ):  729.8184
number of observations:  99
chi-square:  X2 ( df=6 ) = 6.847411,  p = 0.3351895
Information Criteria:
      |  df Penalty  |  Parameters Penalty  |  Sample-Size Adjusted
AIC:      348.6658         744.6658                      NA
BIC:     -154.7874         755.0463                742.4141
CFI: 0.9926078
TLI: 0.9975359   (also known as NNFI)
RMSEA:  0.03777058  [95% CI (0, 0.1546974)]
Prob(RMSEA <= 0.05): 0.4868861
```

```
# results of: summary(ACEFit,
#   SaturatedLikelihood = SatSum$Minus2LogLikelihood,
#   SaturatedDoF = SatSum$degreesOfFreedom,
#   IndependenceLikelihood = IndepSum$Minus2LogLikelihood,
#   IndependenceDoF = IndepSum$degreesOfFreedom)
observed statistics:  198
estimated parameters:  4
degrees of freedom:  194
fit value ( units ):  736.6658
saturated fit value ( units ):  731.998
number of observations:  99
chi-square:  X2 ( df=3 ) = 4.667762,  p = 0.1978056
Information Criteria:
      |  df Penalty  |  Parameters Penalty  |  Sample-Size Adjusted
AIC:      348.6658         744.6658                      NA
BIC:     -154.7874         755.0463                742.4141
CFI: 0.9853227
TLI: 0.9902151   (also known as NNFI)
RMSEA:  0.07493571  [95% CI (0, 0.2185226)]
Prob(RMSEA <= 0.05): 0.2926299
```
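For context, the first of the two summaries above can be produced like this (a minimal sketch; `ACEFit` is assumed to be the fitted ACE model from the attached script):

```r
library(OpenMx)

# Build saturated and independence reference models from the fitted ACE
# model, run them immediately, and pass them to summary() so that the
# chi-square, CFI, TLI, and RMSEA can be computed.
Sat_models <- mxRefModels(ACEFit, run = TRUE)
summary(ACEFit, refModels = Sat_models)
```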

Hi Cindy,

The difference between the saturated/independence models produced by `mxRefModels` and those you defined by hand seems to be the means. The `mxRefModels` function estimates a separate mean for each twin position: one each for MZ1, MZ2, DZ1, and DZ2. You are estimating a single mean for the MZs and a single mean for the DZs. The function gives no special treatment to the A, C, and E components.

I would be cautious about interpreting your code, because it refers to several models that are not defined in it.
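To illustrate the means difference, a saturated MZ group with a separate free mean per twin might look like the following. This is a hedged sketch, not the code from the attached script: `mzData` and the variable names `t1`/`t2` are assumptions.

```r
library(OpenMx)

selVars <- c("t1", "t2")  # hypothetical twin-1 / twin-2 variable names

satMZ <- mxModel("satMZ",
  # Fully free symmetric covariance matrix (3 free parameters)
  mxMatrix(type = "Symm", nrow = 2, ncol = 2, free = TRUE,
           values = c(1, 0.5, 1), name = "expCovMZ"),
  # A separate free mean for each twin via distinct labels; giving
  # both cells the same label would instead equate them to one mean
  mxMatrix(type = "Full", nrow = 1, ncol = 2, free = TRUE,
           values = 0, labels = c("mMZ1", "mMZ2"), name = "expMeanMZ"),
  mxExpectationNormal(covariance = "expCovMZ", means = "expMeanMZ",
                      dimnames = selVars),
  mxFitFunctionML(),
  mxData(mzData, type = "raw")
)
```

An analogous DZ group with labels `"mDZ1"` and `"mDZ2"` supplies the other two means, giving four free means across the two groups rather than the two in the hand-built version.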

The reference models are made with respect to "twinACEFit", not "ACEFit". The summary is then examined with respect to "tACEFit", not "twinACEFit" or "ACEFit". You only showed code for "ACEFit", so I have no idea whether the comparison is appropriate. At minimum, change the above code to

I defined the Saturated models as you suggested, and I got exactly the same results as `mxRefModels`! Thank you so much for your help!

So, given that my original model has only one mean (all the online scripts I have seen calculate with one), should the Saturated model estimate four means, one each for MZ1, MZ2, DZ1, and DZ2?

I'm so sorry for the sloppy code I attached! I corrected the mistakes and attached it again.

For some data, I get "status code 6" when defining the Saturated or Independence model by hand (with one mean). Rerunning the already-fitted model gives status code 0, indicating no problem, but my estimates don't really change, and neither does the fit value. Is this a problem?

Many, perhaps most, Status Code 6s are false alarms. Checking the result by refitting from the solution (as you have done) or from different starting values (as can be done with mxTryHard()) helps support the hypothesis that the solution found is a global minimum.
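The second check can be sketched as follows, assuming `SatFit_by_hand` is the hand-defined saturated model from the attached script:

```r
library(OpenMx)

# Refit repeatedly from randomly perturbed starting values and keep the
# best solution; a stable fit value across tries supports the conclusion
# that the original solution was a global minimum despite Status Code 6.
satRetry <- mxTryHard(SatFit_by_hand, extraTries = 10)
summary(satRetry)
```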

I reckon that there is nothing to worry about in this case.

And here's how ...

Whether your saturated model should have 1 or 4 means depends on the hypothesis you're trying to test. If you're testing whether the covariance structure of your model is close enough to the saturated covariance structure, then match the means structure: hence, if your model has one mean, then the saturated model against which you're testing should have one mean. If, on the other hand, you're testing whether your model matches the covariance and means structure of the saturated model, then the saturated model should have four means and your model should have as many means as you hypothesize are different.
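The first kind of test (means structure matched to the fitted model) can be run as a likelihood-ratio comparison; a minimal sketch, assuming `ACEFit` and a hand-defined one-mean-per-zygosity saturated fit `SatFit_by_hand` from the attached script:

```r
library(OpenMx)

# Likelihood-ratio test of the ACE covariance structure against a
# saturated model whose means structure matches it, so that only the
# covariance restrictions are being tested.
mxCompare(SatFit_by_hand, ACEFit)
```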