# Obtain chi-square fit index of univariate ACE model

Offline
Joined: 02/19/2015 - 06:55
Obtain chi-square fit index of univariate ACE model

I'm trying to get the chi-square fit index of my univariate ACE model. I tried to use mxRefModels(ACEFit) and then mxCompare between the mxRefModels output and my ACE model (ACEFit), but it returned NA for the p-value:

                  base comparison ep minus2LL  df      AIC diffLL diffdf  p
1    Saturated twinACE       <NA>  0       NA   0       NA     NA     NA NA
2    Saturated twinACE    twinACE  6 879.7042 178 523.7042     NA    178 NA
3 Independence twinACE       <NA>  0       NA   0       NA     NA     NA NA
4 Independence twinACE    twinACE  6 879.7042 178 523.7042     NA    178 NA

- Can the mxRefModels function be used here at all? The help states: "One potentially important limitation of the mxRefModels function is for behavioral genetics models."
- If mxRefModels cannot be used, how should I modify the previously posted ACE model definition (https://openmx.ssri.psu.edu/sites/default/files/UnivariateTwinAnalysis_MatrixRaw-3.R) to get the Saturated and Independence models?
- How do I get df values similar to the ones I would get in other software (e.g., Mplus)? 178 seems a bit too much; does this alter the calculation of the chi-square fit index?

I would really appreciate anyone's help!!

Offline
Joined: 07/31/2009 - 15:26
Check the example

Use mxRefModels() like the example on its help page. Include its output as an argument to summary(), and everything should come out the way you want.
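For instance, something along these lines should work (a sketch, assuming ACEFit is your fitted MxModel from the thread):

```r
library(OpenMx)

# Build and run the saturated and independence reference models,
# then hand them to summary() so it can compute the chi-square,
# CFI/TLI, and RMSEA for the fitted model
refs <- mxRefModels(ACEFit, run=TRUE)
summary(ACEFit, refModels=refs)
```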

• The mxRefModels() function can be used for BG models. It's just a matter of using the function intelligently and knowing what it does, rather than thinking that it magically fixes things. The saturated and independence models generated by mxRefModels() do not respect the special structure of BG models: in particular, they impose no equality constraints across twins. From my recollection of discussions with Mike Neale about the saturated and independence models appropriate for many BG models, mxRefModels() should nonetheless be fine, as long as you know that is the baseline you are getting.

• If you wanted, you could use mxRefModels to generate saturated and independence models and then add your own equality constraints across twins. If that's the baseline model you'd like to use, you can do that.

• OpenMx distinguishes between the df of the model and the df of the chi-square. The df of the model for raw data are generally quite large: on the order of (number of rows) x (number of non-missing variables per row). The df of the chi-square are the hypothesized model's df minus the saturated model's df. This number is much smaller, and it is what Mplus reports as "the" df. In the example on the mxRefModels help page, for instance, the model df are 2485, but the chi-square df are 5.
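Concretely, the Mplus-style chi-square df can be recovered from the two summaries (a sketch, assuming ACEFit is the fitted model from this thread):

```r
library(OpenMx)

# Both model df are large for raw data; their difference is the chi-square df
satDoF   <- summary(mxRefModels(ACEFit, run=TRUE)$Saturated)$degreesOfFreedom
modDoF   <- summary(ACEFit)$degreesOfFreedom
chisqDoF <- modDoF - satDoF   # the small number Mplus reports as "the" df
```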

Offline
Joined: 02/19/2015 - 06:55

Thank you for your thorough response! I really appreciate it; things are a lot clearer now!

Offline
Joined: 02/19/2015 - 06:55
Negative chi-square value

I'm currently getting negative chi-square values for some univariate ACE models, and thus the chi-square fit p-value is 1.00. I suppose there is some problem, since the fit can't be that good. My AIC, BIC, and -2LL values are positive, but my RMSEA values (and their CIs) are 0. I would really appreciate it if anyone has an idea what the problem could be!

Offline
Joined: 01/24/2014 - 12:15
Mike Hunter is probably the

Mike Hunter is probably the best person to answer this, but my suspicion is that numerical optimization of the saturated model is not successful. If you put the already-run saturated model through summary(), you might be able to see if the summary output includes a message about possible convergence problems (e.g., status RED).
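For example (a sketch; the object names are hypothetical and ACEFit is assumed to be your fitted model):

```r
library(OpenMx)

# Pull out the already-run saturated reference model and inspect its status
satFit <- mxRefModels(ACEFit, run=TRUE)$Saturated
satFit$output$status$code   # 0 or 1 indicate OK; other codes suggest trouble
summary(satFit)             # watch for a warning such as "status RED"
```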

You could try using mxRefModels() with run=FALSE, and then running the saturated model through mxTryHard(). The syntax would look something like this:

myRefMods <- mxRefModels(ACEFit, run=FALSE)   # build, but do not run, the reference models
myRefModsFit <- vector("list", 2)
names(myRefModsFit) <- c("Saturated", "Independence")
myRefModsFit$Saturated <- mxTryHard(myRefMods$Saturated)     # retries from perturbed start values
myRefModsFit$Independence <- mxRun(myRefMods$Independence)

BTW, what does your output from mxVersion() look like?

Offline
Joined: 02/19/2015 - 06:55
Still negative chi-square

mxVersion() returns:
OpenMx version: 2.2.4 [GIT v2.2.4]
R version: R version 3.1.2 (2014-10-31)
Platform: x86_64-w64-mingw32
Default optimiser: SLSQP

Both the Saturated and the Independence model (...$Saturated$output$status and ...$Independence$output$status) have convergence code and status 0, and summary() didn't report any convergence problems.
Running the above-mentioned script, both the Saturated and the Independence models returned "Solution found" and reported the best starting values. Running the summary command with the refModels included, the same problem occurs:
chi-square: X2 ( df=4 ) = -33.39795, p = 1

Does anyone have an idea what might be the problem? I checked the estimated a, c, e parameters; could it be that e returned a negative estimate, and that is what is causing the problem? Should I bound the parameters a, c, and e to be positive, and how exactly can that be done?

Summary of twinACE

free parameters:
        name  matrix row col      Estimate  Std.Error A
1          a       X   1   1  2.752362e+00 0.21681179
2          c       Y   1   1  8.110164e-06 1.45541514 *
3          e       Z   1   1 -1.460576e+00 0.13609212
4       mean expMean   1   1  2.137937e+01 1.72416767
5    betaAge    beta   1   1  1.835785e-01 0.03033884
6 betaFemale    beta   1   2 -1.866418e+00 0.58843813 *

confidence intervals:
                       lbound      estimate     ubound note
twinACE.StdEst[1,1]  0.4548895  7.802728e-01  0.8556714
twinACE.StdEst[2,1]         NA  6.774773e-12  0.3142090  !!!
twinACE.StdEst[3,1]  0.1443268  2.197272e-01  0.3372045
a                    2.0446099  2.752362e+00  3.2082476
c                   -1.8114148  8.110164e-06  1.8114148
e                   -1.7707887 -1.460576e+00 -1.2285690
mean                17.9738203  2.137937e+01 24.7878593 
Offline
Joined: 01/24/2014 - 12:15
Saturated model?

Oops, I forgot that mxRefModels() doesn't automatically create reasonable reference models for twin analyses--it says as much in the function's documentation. So I shouldn't have expected my suggestion of using mxRefModels() and mxTryHard() to work.

How were you making your saturated model before my bad advice led you astray? Like Hunter said, if you were using mxRefModels(), you have to know what you're doing.

Offline
Joined: 02/19/2015 - 06:55
Thought mxRefModels was appropriate

So far I have used mxRefModels on a couple of data sets and didn't experience any problems concerning the chi-square values. I'm using Neale's formerly posted UnivariateTwinAnalysis_MatrixRaw-3.R script. How can this problem be solved? What do I have to include in the saturated model for it to give reasonable answers?

Offline
Joined: 03/01/2013 - 11:03

Given this is twin modeling, I have lots of questions.

How are you modeling the different relatedness between people?

If it uses a definition variable (e.g., a label of the form 'data.VariableName'), the out-of-the-box mxRefModels output will probably not be reasonable.

Otherwise, are you doing multigroup modeling? In particular, are you using mxFitFunctionMultigroup? If so, a separate means vector and covariance matrix is estimated for each group; is that reasonable? If you're instead using mxFitFunctionAlgebra, the saturated model built by the ref-models helper will likely not be reasonable.

Have you tried switching optimizers? (The NPSOL optimizer is only available in the versions of OpenMx distributed from our website.)

# Switch optimizer
mxOption(NULL, 'Default optimizer', 'NPSOL')

# run any model after here

Have you looked at the model-implied means and covariances for the reference models? Do they look reasonable for your data?

require(OpenMx)
data(demoOneFactor)
manifests <- names(demoOneFactor)
latents <- c("G")
factorModel <- mxModel("One Factor",
type="RAM",
manifestVars=manifests,
latentVars=latents,
mxPath(from=latents, to=manifests),
mxPath(from=manifests, arrows=2),
mxPath(from=latents, arrows=2, free=FALSE, values=1.0),
mxData(observed=cov(demoOneFactor), type="cov", numObs=500))
summary(factorRun <- mxRun(factorModel))
factorSat <- mxRefModels(factorRun, run=TRUE)

cov(demoOneFactor)
mxGetExpected(factorSat[[1]], 'covariance') #saturated
mxGetExpected(factorSat[[2]], 'covariance') #independence
# The above may need to be modified in multigroup models

What's the likelihood value for the model of interest? For the saturated model? Independence model?

# Continued from above
summary(factorSat[[1]])$Minus2LogLikelihood # Saturated
summary(factorSat[[2]])$Minus2LogLikelihood # Indep
summary(factorRun)$Minus2LogLikelihood # Interest

Same question for degrees of freedom.

# Continued from above
summary(factorSat[[1]])$degreesOfFreedom # Saturated
summary(factorSat[[2]])$degreesOfFreedom # Indep
summary(factorRun)$degreesOfFreedom # Interest
Offline
Joined: 02/19/2015 - 06:55
Definition variables the problem?

Thanks Hunter and RobK for looking into this problem!
I’m running a classical twin study with same-sex MZ and DZ twins. I’m working on a simple univariate ACE model, where Age, Sex and Height are included as covariates, see the model definition script attached.
I’m using the following OpenMx version:

mxVersion()
OpenMx version: 2.2.4 [GIT v2.2.4]
R version: R version 3.1.2 (2014-10-31)
Platform: x86_64-w64-mingw32
Default optimiser: SLSQP

My covariates are included in the MZ and DZ submodels as

mxMatrix( type="Full", nrow=3, ncol=2, free=F, label=c("data.Age_1","data.Female_1", "data.Height_1", "data.Age_2","data.Female_2", "data.Height_2"), name="MZDefVars")

Is this 'data.VariableName' definition the problem? If so, how can covariates be added so that the reference models give correct values?
Can this cause negative values for the unstandardized a, c, e components in the model? Is that a problem? Running a model with no covariates, the unstandardized values are positive.

#with covariates
                       lbound      estimate     ubound note
twinACE.StdEst[1,1]  0.6724574  7.876033e-01  0.8610175
twinACE.StdEst[2,1]         NA  2.067154e-13  0.3214136  !!!
twinACE.StdEst[3,1]  0.1389823  2.123967e-01  0.3275563
a                   -3.2379188  2.777182e+00  3.2379191
c                   -1.8434193 -1.422778e-06  1.8434194
e                   -1.7516942 -1.442197e+00 -1.2115100
mean                17.8082353  2.126160e+01 24.7179094

#with no covariates
                       lbound   estimate     ubound note
twinACE.StdEst[1,1]  0.3550341  0.7305558  0.8957350
twinACE.StdEst[2,1]         NA  0.1139102  0.4793809  !!!
twinACE.StdEst[3,1]  0.1022103  0.1555341  0.2394608
a                    2.1568489  3.0848847  3.7831449
c                   -2.6603002  1.2181296  2.6602953
e                    1.1982434  1.4233936  1.7291788
mean                29.8565230 30.5197256 31.1806316   

Yes, in the script I’m using mxFitFunctionMultigroup, thus separate mean and covariance matrices are estimated; is this reasonable for univariate ACE models?

The saturated model output returns the following for the reference models' algebras and expected values using the SLSQP optimizer:

# SLSQP optimizer
SatModel$Saturated$output$algebras

$Saturated.MZ.satCov
         [,1]     [,2]
[1,] 13.03328 10.51450
[2,] 10.51450 12.10705

$Saturated.DZ.satCov
          [,1]      [,2]
[1,] 12.539775  6.016162
[2,]  6.016162 13.009494

$Saturated.twinACE.fitfunction
         [,1]
[1,] 984.3139

$Saturated.MZ.fitfunction
         [,1]
[1,] 563.5346
attr(,"expCov")
         [,1]     [,2]
[1,] 13.03328 10.51450
[2,] 10.51450 12.10705
attr(,"expMean")
         [,1]     [,2]
[1,] 30.24921 30.07161

$Saturated.DZ.fitfunction
         [,1]
[1,] 420.7793
attr(,"expCov")
          [,1]      [,2]
[1,] 12.539775  6.016162
[2,]  6.016162 13.009494
attr(,"expMean")
       [,1]    [,2]
[1,] 30.735 31.1525

and the following for NPSOL optimizer

# NPSOL optimizer
SatModel$Saturated$output$algebras

$Saturated.MZ.satCov
         [,1]     [,2]
[1,] 13.03314 10.51440
[2,] 10.51440 12.10695

$Saturated.DZ.satCov
          [,1]      [,2]
[1,] 12.539946  6.016196
[2,]  6.016196 13.009622

$Saturated.twinACE.fitfunction
         [,1]
[1,] 984.3139

$Saturated.MZ.fitfunction
         [,1]
[1,] 563.5346
attr(,"expCov")
         [,1]     [,2]
[1,] 13.03314 10.51440
[2,] 10.51440 12.10695
attr(,"expMean")
         [,1]     [,2]
[1,] 30.24923 30.07162

$Saturated.DZ.fitfunction
         [,1]
[1,] 420.7793
attr(,"expCov")
          [,1]      [,2]
[1,] 12.539946  6.016196
[2,]  6.016196 13.009622
attr(,"expMean")
       [,1]    [,2]
[1,] 30.735 31.1525

The results are very close to each other, so the optimizer isn’t the problem, right?
The -2LL and df values for the saturated model, the independence model, and the model of interest are:

#-2LL, NPSOL optimizer (SLSQP has the same results, is that all right?)
SatModel$Saturated$output$Minus2LogLikelihood
[1] 984.3139
SatModel$Independence$output$Minus2LogLikelihood
[1] 1065.65
twinACEFit$output$Minus2LogLikelihood
[1] 947.8753

#df
summary(SatModel$Saturated)
observed statistics:  198
estimated parameters:  10
degrees of freedom:  188
summary(SatModel$Independence)
observed statistics:  198
estimated parameters:  8
degrees of freedom:  190
summary(twinACEFit)
observed statistics:  198
estimated parameters:  7
degrees of freedom:  191 
Offline
Joined: 02/19/2015 - 06:55
Intention to include definition variables in the future?

Is there an intention to add support for definition variables to mxRefModels in the future?

In the meantime, I tried to define the saturated and independence models by hand with the help of scripts found online. If the model definition is correct, then the chi-square value is now positive:

summary(twinACEFit, SaturatedLikelihood=uniTwinSatFit)
#result: chi-square:  X2 ( df=NA ) = 3.597958,  p = NA

though for some reason the df were not recognized and had to be entered by hand:

summary(twinACEFit, SaturatedLikelihood=944.2774 , SaturatedDoF=188 )
#result: chi-square:  X2 ( df=3 ) = 3.597945,  p = 0.3082793

What could be the problem?

I had trouble defining the independence model: I was not able to constrain the variances to be equal across twin order and zygosity using the mxConstraint command. It reported:

Error: Unknown reference 'uniTwinSat.expMean' detected in the entity 'ExpMeanMZ' in model 'MZ'

Is there a way to overcome this problem? I somehow couldn't get past it.
Also, is my univariate saturated model definition with age, sex, and height as covariates even correct?

Offline
Joined: 01/24/2014 - 12:15

Concerning the error message you're seeing: You have an MxAlgebra named 'ExpMeanMZ' in your MZ submodel. Try removing the "uniTwinSat." from the front of the MxMatrix names in this MxAlgebra's expression, for instance:

mxAlgebra( expression=expMean + beta %*% MZDefVars, name="ExpMeanMZ"),

Do likewise in your DZ submodel.

Other than that, at a glance, your script appears to be a reasonable way to proceed. You might have an easier time with numerical optimization if you avoid using MxConstraints and impose the equality constraints you want via parameter labels.

Concerning mxRefModels() with definition variables: I don't think we'll be attempting to make the function "play nice" with definition variables any time soon. In fact, I'm not sure there's a general way that it could. Imagine a case where the covariance matrix is also conditioned upon definition variables (such as in biometric moderation models). What would the saturated model be in such a case? I guess it would be a model where the covariance matrix was conditioned upon all definition variables under consideration, but how would mxRefModels() "read the user's mind" to find out which columns of the dataset were candidates for being used as such?

Concerning your question in your other post about negative point estimates: Keep in mind that the parameters labeled 'a', 'c', and 'e' in your script have ambiguous sign in your case. The actual corresponding raw (i.e., unstandardized) variance components would be the squares of those three parameters, so it doesn't matter if they're positive or negative.
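In other words (a quick illustration in plain R): only the square of the estimate enters the expected covariance, so the sign of the point estimate is arbitrary:

```r
e_hat <- -1.460576       # the negative 'e' estimate from the summary above
(-e_hat)^2 == e_hat^2    # TRUE: both signs imply the same model
e_hat^2                  # the unstandardized E variance component, ~2.13
```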

Offline
Joined: 02/19/2015 - 06:55
Still some issues

I removed the "uniTwinSat." from the mxAlgebra expressions as advised, but now I get the following error message when trying to run the model (without trying to constrain any variables yet). I guess the MZ model does not recognize the mean matrix in the main uniTwinSat model. Is there a way to overcome this?

Running uniTwinSat
Error: Unknown reference 'expMean' detected in the entity 'ExpMeanMZ' in model 'MZ'

I read in the manual that it is better to replace MxConstraints with omxSetParameters, but in the above-attached script the objects that need to be constrained are mxAlgebra objects. Is there a way to use omxSetParameters in this case? I tried modifying the following script for definition variables (http://www.vipbg.vcu.edu/HGEN619_2013f/twinSatCon.R), but to implement the definition variables the expected mean structure would have to be in an mxAlgebra object, if I'm not mistaken, and then it is basically the same situation.

Thanks for the explanation of mxRefModels() and the unstandardized variance components; it's a lot clearer now!

Offline
Joined: 01/24/2014 - 12:15
Ah. Try this--cut and paste

Ah. Try this--cut and paste these two lines,

mxMatrix( type="Full", nrow=1, ncol=2, free=TRUE, values= 30, label="mean", name="expMean" ),
mxMatrix( type="Full", nrow=1, ncol=3, free=TRUE, values= 0.1, label=c("betaAge","betaFemale", "betaHeight"), name="beta"),

into your MZ and DZ submodels, so that your overall MxModel contains only the two submodels and the multigroup fitfunction.

Alternately (and perhaps more easily), instead of deleting the "uniTwinSat.", you could try simply not assigning a different name to your second MxModel (the one with R symbol 'equateVarsTwinModel'). I think the reason why you were getting an error on that one is that there is nothing in that model named "uniTwinSat".

Concerning omxSetParameters(): I should have been more explicit. You'd need to reparameterize your model to impose equality constraints with parameter labels. For instance, you could parameterize the covariance matrix as a symmetric matrix, instead of as the product of a lower-triangular matrix and its transpose. In that case, you would use the same label for the diagonal elements of the symmetric matrix in both the MZ and DZ submodels. Good start values will be particularly important if you adopt this approach, since the covariance matrix could be non-positive-definite for certain parameter values the optimizer might try.
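A minimal sketch of that reparameterization (the matrix names, labels, and start values here are hypothetical): giving the diagonal elements the same labels in both submodels equates the variances across twin order and zygosity with no MxConstraint at all:

```r
library(OpenMx)

# In the MZ submodel: a symmetric expected covariance matrix.
# For type="Symm", values/labels fill the lower triangle column-wise:
# (1,1), (2,1), (2,2).
mxMatrix(type="Symm", nrow=2, ncol=2, free=TRUE,
         values=c(12, 10, 12), labels=c("varT1", "covMZ", "varT2"),
         name="ExpCovMZ")

# In the DZ submodel: same variance labels (so the variances are equated
# across zygosity), but a distinct covariance label.
mxMatrix(type="Symm", nrow=2, ncol=2, free=TRUE,
         values=c(12, 6, 12), labels=c("varT1", "covDZ", "varT2"),
         name="ExpCovDZ")
```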

But, maybe only try reparameterizing if you actually run into optimization problems, or if you decide you really need standard errors.

Offline
Joined: 02/19/2015 - 06:55
I think it's working

Thanks for the great advice, I think it is finally working! I just have one more question left. For the independence model, are constraints like these correct?

IndepModel<- mxModel(SatModel,
mxConstraint( MZ.ExpCovMZ[1,1] == DZ.ExpCovDZ[1,1], name="VarMZ_DZ1"),
mxConstraint( MZ.ExpCovMZ[2,2] == DZ.ExpCovDZ[2,2], name="VarMZ_DZ2"),
mxConstraint( MZ.ExpCovMZ[1,1] == MZ.ExpCovMZ[1,2], name="VarMZ"),
mxConstraint( DZ.ExpCovDZ[1,1] == DZ.ExpCovDZ[1,2], name="VarDZ")
)

Do I need to equate the variances across twin order and zygosity, or do I instead have to set the covariances to 0? The mxRefModels help states: "all the variances and, when possible, all means are estimated. However, covariances are set to zero".
Or should the constraints look like this?

IndepModel<- mxModel(SatModel,
mxConstraint( MZ.ExpCovMZ[1,2] == 0, name="CoVar_MZ1"),
mxConstraint( MZ.ExpCovMZ[2,1] == 0, name="CoVar_MZ2"),
mxConstraint( DZ.ExpCovDZ[1,2] == 0, name="CoVar_DZ1"),
mxConstraint( DZ.ExpCovDZ[2,1] == 0, name="CoVar_DZ2")
)