Error using definition variables

pgseye

Hi,

I want to add three covariates to a saturated model. One of them (age) seems to work fine, but when I try to use either of the other two individually (birthweight or gestation time), I get an error message:

> univTwinSatFit <- mxRun(univTwinSatModel)
Running univTwinSat
Error in mxRun(univTwinSatModel) :
Error NYI: Missing Definition Vars Not Yet Implemented.

I have full data for age, but a lot of missing data for the other two variables. I just want to confirm what the error message says: does OpenMx choke if a definition variable contains NAs? Will this be resolved any time soon?

The definition variable script format is as follows:

univTwinSatModel <- mxModel("univTwinSat",
    mxModel("MZ",
        mxMatrix( type="Lower", nrow=ntv, ncol=ntv, free=TRUE, values=1, name="CholMZ" ),
        mxAlgebra( expression=CholMZ %*% t(CholMZ), name="expCovMZ" ),
        mxData( observed=mzData, type="raw" ),
        # Adjust for age
        mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=0, name="Mean" ),
        mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=0, labels=c("beta1","beta2"), name="b" ),
        mxMatrix( type="Full", nrow=1, ncol=1, free=FALSE, labels=c("data.Age_1"), name="A1" ),  # twin 1
        mxMatrix( type="Full", nrow=1, ncol=1, free=FALSE, labels=c("data.Age_2"), name="A2" ),  # twin 2
        mxAlgebra( expression= A1 %x% MZ.b, name="A1R" ),
        mxAlgebra( expression= A2 %x% MZ.b, name="A2R" ),
        mxAlgebra( expression= cbind((MZ.Mean + A1R), (MZ.Mean + A2R)), name="expMeanMZ" ),
        mxFIMLObjective( covariance="expCovMZ", means="expMeanMZ", dimnames=selVars ),

...

Thanks,

Paul

tbrick

OpenMx does not currently handle missing definition variables.

This is because definition variables can be included in arbitrary algebraic expressions, so it's not clear how this should be handled in the general case.

In developer meetings we've discussed several options for allowing people to handle missing definition variables, but have not decided on the best way to approach the issue. We'd welcome feedback from the community on this point. Handling missing definition variables was not identified as one of the features for the 1.0 release, so there is currently no official schedule for adding it.

In your specific case, however, you should be able to work around it.

If you include your 3 covariates as variables in the covariance algebra, FIML will appropriately handle their missingness. To keep the model the same, you'll want to allow these new variables to freely covary with each other and the other manifest variables in your model so that they don't contribute to misfit.

Please note that because you'll be including more variables in the covariance algebra, your fit statistics will be different, and with many fit statistics it will not be meaningful to compare the fit of the new model (with covariates included) to the fit of the old model (with covariates excluded).
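A minimal sketch of that workaround, in the same matrix style as the script above, might look something like this (MZ group only; selVarsAug, mzDataAug and ntvAug are made-up names for the augmented variable list and data, which would hold the phenotype plus the three covariates for both twins):

# Sketch only: covariates entered as extra manifest variables so that FIML,
# rather than the definition-variable machinery, handles their missingness.
ntvAug <- length(selVarsAug)
mzSatAug <- mxModel("MZsatAug",
    # Saturated covariance over phenotype and covariates together, so the
    # covariates covary freely and cannot contribute to misfit.
    mxMatrix( type="Lower", nrow=ntvAug, ncol=ntvAug, free=TRUE, values=0.5, name="CholMZ" ),
    mxAlgebra( expression=CholMZ %*% t(CholMZ), name="expCovMZ" ),
    # Free means for phenotype and covariates alike.
    mxMatrix( type="Full", nrow=1, ncol=ntvAug, free=TRUE, values=0, name="expMeanMZ" ),
    mxData( observed=mzDataAug[,selVarsAug], type="raw" ),
    mxFIMLObjective( covariance="expCovMZ", means="expMeanMZ", dimnames=selVarsAug )
)
mzSatAugFit <- mxRun(mzSatAug)

Because the covariates are then part of the likelihood, rows with missing covariate values are handled by FIML, but (as noted above) the resulting -2lnL is not comparable to that of a model that omits the covariates.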

pgseye

Thanks Tim

neale

Definition variables are essential when, e.g., moderating a path between variables. However, if they are being used as simple covariates, i.e., just regressing something out of the mean, then it may be better to include them in the model as explicit variables. They should have free parameters for their mean and variance, be allowed to covary with each other freely, and have causal paths to the observed variables that you want them to be regressed out of. (Sorry for ending that sentence with two prepositions!)

I believe that this formulation is equivalent; it has the disadvantage of requiring more free parameters (the means, variances and covariances of the covariates) but the advantage of working when the definition variables are missing. Someone should check this out, though.
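A rough path-style (RAM) sketch of this formulation, ignoring the twin-pair structure for brevity (pheno, age, bweight, gest and myData are placeholder names):

# Sketch only (path specification); the twin-pair structure is ignored.
covNames  <- c("age", "bweight", "gest")
manifests <- c("pheno", covNames)

regModel <- mxModel("covariatesAsVariables", type="RAM",
    manifestVars=manifests,
    mxData( observed=myData[,manifests], type="raw" ),
    # Free means for every manifest variable (covariates included).
    mxPath( from="one", to=manifests, arrows=1, free=TRUE, values=0 ),
    # Free variances for everything.
    mxPath( from=manifests, arrows=2, free=TRUE, values=1 ),
    # Let the covariates covary freely with each other.
    mxPath( from="age", to=c("bweight","gest"), arrows=2, free=TRUE, values=0.2 ),
    mxPath( from="bweight", to="gest", arrows=2, free=TRUE, values=0.2 ),
    # Causal paths from the covariates to the phenotype (the regression).
    mxPath( from=covNames, to="pheno", arrows=1, free=TRUE, values=0 )
)
regFit <- mxRun(regModel)

Dropping the regression paths (or fixing them to zero) gives the submodel without covariate effects.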

pgseye

Thanks Mike,

You've mentioned the same workaround as Tim, but I'm not sure I actually understand how to do this. So if I've got one variable of interest and three covariates, would I effectively do a (multivariate) Cholesky decomposition of all four (and then apply constraints on the variable of interest as I've already done)? Would this approach theoretically have the same adjustment effect for the covariates on the main variable?

Is this what you mean?

Thanks,

Paul

Edit: I forgot to ask about Tim's point. Even though the fit statistics will change, can you not compare the models based on the resulting p-values?

neale

Personally I would not use a Cholesky. Instead, set up some additional matrices for the covariates (the x variables):

CovX: symmetric, nx x nx
RegX: full, ny x nx
MeanX: full, nx x 1

(but in OpenMx format of course) then make your new covariance structure like this:

cbind(rbind(CovX, RegX %*% CovX),
      rbind(CovX %*% t(RegX), OriginalCovY + RegX %*% CovX %*% t(RegX)))
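Written out in OpenMx matrix form, that block structure might look roughly as follows (a sketch only, with the twin structure ignored; nx, ny, myData and xyVars are assumed to exist, and xyVars must list the covariates before the phenotypes to match the block order):

# Sketch only: these matrices follow the notation above.
xyModel <- mxModel("XY",
    mxMatrix( type="Symm", nrow=nx, ncol=nx, free=TRUE, values=diag(nx), name="CovX" ),          # covariances of the covariates
    mxMatrix( type="Full", nrow=ny, ncol=nx, free=TRUE, values=0, name="RegX" ),                 # regressions of y on x
    mxMatrix( type="Full", nrow=nx, ncol=1, free=TRUE, values=0, name="MeanX" ),                 # covariate means (nx x 1)
    mxMatrix( type="Full", nrow=1, ncol=ny, free=TRUE, values=0, name="MeanY" ),                 # phenotype intercepts
    mxMatrix( type="Symm", nrow=ny, ncol=ny, free=TRUE, values=diag(ny), name="OriginalCovY" ),  # residual covariance of y
    # Joint expected covariance of (x, y) in block form:
    mxAlgebra( expression=cbind(rbind(CovX, RegX %*% CovX),
                                rbind(CovX %*% t(RegX), OriginalCovY + RegX %*% CovX %*% t(RegX))),
               name="expCovXY" ),
    mxAlgebra( expression=cbind(t(MeanX), MeanY), name="expMeanXY" ),
    mxData( observed=myData[,xyVars], type="raw" ),
    mxFIMLObjective( covariance="expCovXY", means="expMeanXY", dimnames=xyVars )
)
xyFit <- mxRun(xyModel)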

I think you are then ok testing submodels with and without free parameters in RegX, to test for the effects of covariates. The same data should still be there so the models should be comparable.

However, if you have the X variables as the first few variables in the Cholesky, that would work too, I think.