I'm trying to get chi-square fit index of my univariate ACE model. I tried to use mxRefModels(ACEFit) and then use mxCompare between the mxRefModels output and my ACE model (ACEFit), but it returned NA for the p value

```
                 base comparison ep minus2LL  df      AIC diffLL diffdf  p
    Saturated twinACE       <NA>  0       NA   0       NA     NA     NA NA
    Saturated twinACE    twinACE  6 879.7042 178 523.7042     NA    178 NA
 Independence twinACE       <NA>  0       NA   0       NA     NA     NA NA
 Independence twinACE    twinACE  6 879.7042 178 523.7042     NA    178 NA
```

-Can the mxRefModels function be used? The help states: "One potentially important limitation of the mxRefModels function is for behavioral genetics models."

-How should I modify the previously posted ACE model definition (https://openmx.ssri.psu.edu/sites/default/files/UnivariateTwinAnalysis_MatrixRaw-3.R) to obtain the Saturated and Independence models, if mxRefModels cannot be used?

-How do I get df values similar to those I would get in other software (e.g., Mplus)? 178 seems a bit too much; does this alter the calculation of the chi-square fit index?

I would really appreciate anyone's help!!

Use mxRefModels like the example from the help page for it. Include it as an argument to summary, and everything should come out like you want.
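A minimal sketch of that usage, assuming `ACEFit` is the already-run ACE MxModel from the script linked above:

```r
library(OpenMx)

# Pass the fitted reference models to summary() so it can report the
# chi-square, RMSEA, CFI, etc. 'ACEFit' is assumed to be an MxModel
# that has already been run with mxRun().
summary(ACEFit, refModels = mxRefModels(ACEFit, run = TRUE))
```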

The mxRefModels() function can be used for BG models. It's just a matter of using the function intelligently and knowing what it does, rather than expecting it to magically fix things. The saturated and independence models generated by mxRefModels() do not fully respect the nature of BG models: in particular, they do not impose any equality constraints across twins. That said, from my recollection of discussions with Mike Neale about the saturated and independence models appropriate for many BG models, mxRefModels() should be fine; just be aware of that missing cross-twin constraint.

If you wanted, you could use mxRefModels to generate saturated and independence models and then add your own equality constraints across twins. If that's the baseline model you'd like to use, you can do that.
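A hedged sketch of that route (the parameter labels 'meanT1'/'meanT2' below are hypothetical; inspect the labels actually generated for the saturated model first, e.g. with `omxGetParameters()`):

```r
library(OpenMx)

refs <- mxRefModels(ACEFit, run = FALSE)  # 'ACEFit' = your run ACE model
omxGetParameters(refs[["Saturated"]])     # inspect the generated labels

# Relabel so both twins' means share one label, equating them
# (the old labels here are illustrative, not the real generated ones):
sat <- omxSetParameters(refs[["Saturated"]],
                        labels    = c("meanT1", "meanT2"),
                        newlabels = c("meanTwin", "meanTwin"))
satFit <- mxRun(sat)
```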

OpenMx distinguishes between the df of the model and the df of the chi-square. The df of the model for raw data are generally quite large: they're on the order of (number of rows) x (number of non-missing variables in each row). The df of the chi-square are the difference between the hypothesized model's df and the saturated model's df. This number is much smaller and is what Mplus reports as "the" df. Notice from the example in the help page for mxRefModels that the model df are 2485, but the chi-square df are 5.
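To make the distinction concrete, a sketch assuming `ACEFit` is an already-run MxModel:

```r
library(OpenMx)

# Model df = observed statistics minus free parameters; the chi-square df
# is the difference between the fitted model's df and the saturated
# model's df.
refs    <- mxRefModels(ACEFit, run = TRUE)
modDF   <- summary(ACEFit)$degreesOfFreedom
satDF   <- summary(refs[["Saturated"]])$degreesOfFreedom
chisqDF <- modDF - satDF   # the small df that Mplus reports as "the" df
```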

Thank you for your thorough response! I really appreciate it, it is now a lot clearer!

I'm currently getting negative chi-square values for some univariate ACE models, and thus the chi-square fit p-value is 1.00. I suppose something is wrong, since the fit can't be that good. My AIC, BIC, and -2LL values are positive, but my RMSEA values (and CI) are 0. I would really appreciate it if anyone has an idea what the problem could be!

Mike Hunter is probably the best person to answer this, but my suspicion is that numerical optimization of the saturated model is not successful. If you put the already-run saturated model through `summary()`, you might be able to see whether the output includes a message about possible convergence problems (e.g., status RED). You could try using `mxRefModels()` with `run=FALSE`, and then run the saturated model through `mxTryHard()`. The syntax would look something like this:
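A sketch of that workflow (the original code was not preserved; `ACEFit` is assumed to be the already-run ACE model):

```r
library(OpenMx)

refs   <- mxRefModels(ACEFit, run = FALSE)   # build but don't run
satFit <- mxTryHard(refs[["Saturated"]])     # optimize with extra attempts
indFit <- mxTryHard(refs[["Independence"]])
summary(ACEFit, refModels = list(Saturated = satFit, Independence = indFit))
```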

BTW, what does your output from `mxVersion()` look like?

Thanks for your prompt response!

mxVersion() returns:

```
OpenMx version: 2.2.4 [GIT v2.2.4]
R version: R version 3.1.2 (2014-10-31)
Platform: x86_64-w64-mingw32
Default optimiser: SLSQP
```

For both the Saturated and the Independence models (...$Saturated$output$status and ...$Independence$output$status), the convergence code and status are 0, and the summary didn't report any convergence problems.

Running the above-mentioned script, both the Saturated and the Independence models returned "Solution found" and reported the best starting values. Running the summary command with refModels included, the same problem occurs:

chi-square: X2 ( df=4 ) = -33.39795, p = 1

Does anyone have an idea what might be the problem? I checked the estimated a, c, e parameters; could it be that e returned a negative estimate, and that is what is causing the problem? Should I bound the variables a, c, e to be positive, and how exactly can this be done?

Oops, I forgot that `mxRefModels()` doesn't automatically create reasonable reference models for twin analyses--it says as much in the function's documentation. So I shouldn't have expected my suggestion of using `mxRefModels()` and `mxTryHard()` to work. How were you making your saturated model before my bad advice led you astray? Like Hunter said, if you were using `mxRefModels()`, you have to know what you're doing.

So far I have used mxRefModels on a couple of data sets and didn't experience any problems with the chi-square values. I'm using Neale's previously posted UnivariateTwinAnalysis_MatrixRaw-3.R script. How can this problem be solved? What do I have to include in the saturated models for them to give reasonable answers?

Given this is twin modeling, I have lots of questions.

How are you modeling the different relatedness between people?

If it uses a definition variable (e.g. a label of 'data.VariableName'), the out-of-the-box `mxRefModels` will probably not be reasonable.

Otherwise, are you doing multigroup modeling? In particular, are you using `mxFitFunctionMultigroup`? If so, then a separate means vector and covariance matrix is estimated for each group. Is that reasonable? If you're using `mxFitFunctionAlgebra`, the saturated model built by the ref-models helper will likely not be reasonable.

Have you tried switching optimizers? (The versions of OpenMx distributed from our website are needed for this.)

Have you looked at the model-implied means and covariances for the reference models? Do they look reasonable for your data?

What's the likelihood value for the model of interest? For the saturated model? Independence model?

Same question for degrees of freedom.

Thanks Hunter and RobK for looking into this problem!

I’m running a classical twin study with same-sex MZ and DZ twins. I’m working on a simple univariate ACE model, where Age, Sex and Height are included as covariates, see the model definition script attached.

I’m using the following OpenMx version:

My covariates are included in the MZ and DZ submodels as follows:

Is this 'data.VariableName' definition a problem? If so, how can you add covariates to get correct values for the reference models?

Can this cause negative estimates for the unstandardized a, c, e components in the model? Is this a problem? Running a model with no covariates, the unstandardized values are positive.

Yes, in the script I’m using mxFitFunctionMultigroup, so separate mean and covariance matrices are estimated. Is this reasonable for univariate ACE models?

The saturated model output returns the following for the reference models and the expected values using the SLSQP optimizer,

and the following for the NPSOL optimizer:

The results are very close to each other, so the optimizer isn’t the problem, right?

The -2LL and df values for the saturated model, the independence model, and the model of interest are:

Is there any intention to add support for definition variables to mxRefModels in the future?

In the meantime, I tried to define the saturated and independence models by hand with the help of scripts found online. If the model definition is correct, then the chi-square value is now positive,

though for some reason the df were not recognized and had to be entered by hand.

What could be the problem?

I had trouble defining the independence model: I was not able to constrain the variances to be equal across twin order and zygosity using the mxConstraint command. It reported:

Is there a way to overcome this problem? I somehow couldn't get past it.

Is my univariate saturated model definition with age, sex and height as covariates even correct?

Concerning the error message you're seeing: You have an MxAlgebra named 'ExpMeanMZ' in your MZ submodel. Try removing the "uniTwinSat." from the front of the MxMatrix names in this MxAlgebra's expression, for instance:

Do likewise in your DZ submodel.

Other than that, at a glance, your script appears to be a reasonable way to proceed. You might have an easier time with numerical optimization if you avoid using MxConstraints and impose the equality constraints you want via parameter labels.
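As a sketch of the labels approach (the matrix and label names here are illustrative, not taken from your script): giving the same label to parameters in different submodels equates them without any MxConstraint.

```r
library(OpenMx)

# Means vector for the MZ submodel. Using the label 'meanTwin' for both
# elements equates the means across twin order; reusing the same label
# in the DZ submodel's means vector equates them across zygosity too.
meanMZ <- mxMatrix(type = "Full", nrow = 1, ncol = 2, free = TRUE,
                   values = 0, labels = c("meanTwin", "meanTwin"),
                   name = "ExpMeanMZ")
```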

Concerning `mxRefModels()` with definition variables: I don't think we'll be attempting to make the function "play nice" with definition variables any time soon. In fact, I'm not sure there's a general way that it could. Imagine a case where the covariance matrix is also conditioned upon definition variables (such as in biometric moderation models). What would the saturated model be in such a case? I guess it would be a model where the covariance matrix was conditioned upon all definition variables under consideration, but how would `mxRefModels()` "read the user's mind" to find out which columns of the dataset were candidates for being used as such?

Concerning your question in your other post about negative point estimates: keep in mind that the parameters labeled 'a', 'c', and 'e' in your script have ambiguous sign in your case. The actual corresponding raw (i.e., unstandardized) variance components are the squares of those three parameters, so it doesn't matter whether they're positive or negative.
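The point about squares can be seen directly in the algebra; assuming 1x1 path-coefficient matrices named a, c, and e, as in the linked script:

```r
library(OpenMx)

# The raw variance components are the squares of the path coefficients,
# so a negative estimate of 'e' yields the same variance as a positive one.
VarA <- mxAlgebra(a %*% t(a), name = "VarA")
VarC <- mxAlgebra(c %*% t(c), name = "VarC")
VarE <- mxAlgebra(e %*% t(e), name = "VarE")
```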

I removed the "uniTwinSat." from the mxAlgebra expressions as advised, but now I get the following error message when trying to run the model (not trying to constrain any variables yet). I guess the MZ model does not recognize the mean matrix in the main uniTwinSat model. Is there a way to overcome this?

I read in the manual that it is better to replace MxConstraints with omxSetParameters, but in the above-attached script the objects that need to be constrained are inside mxAlgebra objects. Is there a way to use omxSetParameters somehow in this case? I tried modifying the following script for definition variables (http://www.vipbg.vcu.edu/HGEN619_2013f/twinSatCon.R), but to implement the definition variables the expected mean structure would have to be in an mxAlgebra object, if I'm not mistaken, and then it is basically the same situation.

Thanks for the explanation of mxRefModels() and the unstandardized variance components; it's a lot clearer now!

Ah. Try this--cut and paste these two lines,

into your MZ and DZ submodels, so that your overall MxModel contains only the two submodels and the multigroup fitfunction.

Alternately (and perhaps more easily), instead of deleting the "uniTwinSat.", you could try simply not assigning a different `name` to your second MxModel (the one with R symbol 'equateVarsTwinModel'). I think the reason you were getting an error on that one is that there is nothing in that model named "uniTwinSat".

Concerning `omxSetParameters()`: I should have been more explicit. You'd need to reparameterize your model to impose equality constraints with parameter labels. For instance, you could parameterize the covariance matrix as a symmetric matrix, instead of as the product of a lower-triangular matrix and its transpose. In that case, you would use the same label for the diagonal elements of the symmetric matrix in both the MZ and DZ submodels. Good start values will be particularly important if you adopt this approach, since the covariance matrix could be non-positive-definite for certain parameter values the optimizer might try. But maybe only try reparameterizing if you actually run into optimization problems, or if you decide you really need standard errors.
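A sketch of that reparameterization (names and start values are illustrative):

```r
library(OpenMx)

# Directly-estimated symmetric covariance matrix for the MZ submodel.
# The diagonal elements share the label 'varT'; reusing 'varT' in the DZ
# submodel equates the variances across twin order and zygosity without
# an MxConstraint. Start values should form a positive-definite matrix
# (e.g., based on the observed covariances).
covMZ <- mxMatrix(type = "Symm", nrow = 2, ncol = 2, free = TRUE,
                  values = c(1, 0.5, 1),          # lower triangle, by column
                  labels = c("varT", "covMZ", "varT"),
                  name = "ExpCovMZ")
```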

Thanks for the great advice, I think it is finally working! I just have one more question left. For the independence model, are such constraints correct?

Do I need to equate the variances across twin order and zygosity, or do I have to set them to 0? I ask because the mxRefModels help states: "all the variances and, when possible, all means are estimated. However, covariances are set to zero".
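For what it's worth, a sketch consistent with that documentation (names illustrative): the variances stay free (and can be equated via a shared label if that's the baseline you want), while the covariances are fixed at zero rather than merely equated.

```r
library(OpenMx)

# Independence-model covariance matrix: free variances (equated across
# twin order via the shared label 'varT'), covariance fixed at 0.
indCov <- mxMatrix(type = "Symm", nrow = 2, ncol = 2,
                   free = c(TRUE, FALSE, TRUE),   # lower triangle, by column
                   values = c(1, 0, 1),
                   labels = c("varT", NA, "varT"),
                   name = "ExpCov")
```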

Or should the constraints look like this?