Attachment | Size
---|---
Scatter plot of the MZ twin data | 16.3 KB
Scatter plot of the DZ twin data | 11.98 KB
Script to run ACE and saturated models | 6.99 KB

Hello,

I'm trying to fit an ACE model to some imaging-derived measures and hope you can help me solve a few issues.

Initially, I am aiming to run the ACE model and test it against the saturated model, to evaluate whether fitting a twin model is appropriate for the data. In the ACE model I use sex and age as covariates; however, I'm struggling to incorporate those covariates into the saturated model. So the main question is: how do I appropriately incorporate covariates into the saturated model? (Please find my script attached; the data used are as presented in the scatter plots: 60 DZ and 117 MZ pairs.)

In addition, when running the ACE model I quite often get warning status code 5 or 6, e.g.:

"In model 'twinACE' Optimizer returned a non-zero status code 5. The Hessian at the solution does not appear to be convex. See ?mxCheckIdentification for possible diagnosis (Mx status RED)."

I tried re-running the model by feeding the output of one run in as the start values for the next run, but that helped only in a few cases.

With the data shown in the scatter plots, fitting the saturated model also gives the code 5 warning. Is it likely that the data are problematic, or am I running the model incorrectly?

I would very much appreciate your help.

Best regards,

Aurina

Adding covariates to your saturated model should be pretty straightforward. For instance, try:
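The code block from the original reply was not preserved here, so the following is a reconstruction in the same spirit: the means part of a saturated twin model with age and sex as covariates, using definition variables. Column names (age1, sex1, age2, sex2) and object names are placeholders; the parameter labels match those discussed later in the thread.

```r
# Sketch of the means structure of a saturated model with covariates.
# Assumes wide-format raw data with definition variables age1, sex1, age2, sex2.
# Intercepts: one per twin, per zygosity group (4 total); slopes shared (2 total).
library(OpenMx)

interceptsMZ <- mxMatrix("Full", 1, 2, free=TRUE, values=0,
                         labels=c("b0_mz1","b0_mz2"), name="Intercepts")
interceptsDZ <- mxMatrix("Full", 1, 2, free=TRUE, values=0,
                         labels=c("b0_dz1","b0_dz2"), name="Intercepts")
beta <- mxMatrix("Full", 1, 2, free=TRUE, values=0,
                 labels=c("betaAge","betaSex"), name="beta")
# Definition variables pull each pair's covariate values from the data
defs <- mxMatrix("Full", 2, 2, free=FALSE,
                 labels=c("data.age1","data.sex1","data.age2","data.sex2"),
                 name="Defs")
# Expected mean for each twin: intercept + betaAge*age + betaSex*sex
expMean <- mxAlgebra(Intercepts + beta %*% Defs, name="expMean")
```

The `interceptsMZ` matrix goes into the MZ submodel and `interceptsDZ` into the DZ submodel, while `beta` and `defs` are common to both.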

This code will estimate 4 regression intercepts (MZ twin #1, MZ twin #2, DZ twin #1, DZ twin #2) and 2 regression slopes (one each for age and sex). Depending on how saturated you want the model to be, you could estimate as many as 4 intercepts, 4 slopes for age, and 4 slopes for sex.

You could try `mxCheckIdentification()`, but I don't see any reason to think that there are unidentified parameters. I suggest that you try better start values. You can get good start values for the regression coefficients from `lm()` in R, and I suspect that `sqrt((CovMZ[1,1]/3))` or `sqrt((CovMZ[2,2]/3))` will be better start values for "a", "c", and "e" than `sqrt((CovDZ[1,2]/3))`. Also, consider using `mxTryHard()` in place of `mxRun()`.

Edit: Changed "MZ" to "DZ" in copy-pasted syntax.

Thank you very much for your help and such a quick answer!

I have tried to modify the script according to your suggestions: I added the covariates to the saturated model and ran mxCheckIdentification for both the saturated and the ACE model. In the saturated model there were 6 unidentified parameters ("b0_mz1" "b0_mz2" "betaAge" "betaSex" "b0_dz1" "b0_dz2"), while in the ACE model there were 2 ("mean" "betaAge"); the full function output is attached. I can't figure out why these parameters would not be identified in both models... Any thoughts on how to solve this issue?

I also changed the a, c, e start values and tried the mxTryHard function; however, the solution still does not appear to be convex.

Attached is the modified version of the same code (the one that gives unidentified parameters).

I haven't really worked with twin modeling before, so while trying several different things, some general questions arose. It would really help if you could point me in the right direction. These questions are probably quite basic, but some clarity would really help.

The idea of my experiment is to estimate the heritability of multiple imaging-derived measures using ACE and its nested models (AE, CE, AC). In this case, do I first need to run the saturated model on each imaging-derived measure and compare its output to the ACE model? If so, and I'm using ACE with covariates, should the same covariates be used in the saturated model?

From reading the forums, it seems the correct way to test the data is first to run the saturated model, then constrain the means (and variances) across twin 1/twin 2 and across zygosity separately, and only if the constrained models do not differ from the original saturated model, continue with ACE. Is this correct? Would it be wrong to just compare the saturated model to ACE and, if there is no significant difference, start testing the nested ACE models?

What if the saturated model is not locally identified? As in my previous post, there are 6 parameters in the saturated model that are not identified. In addition, it gives warning 6 ("The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)"). Does this mean that something is wrong with the data? (Another example of the data is attached as the scatter plot Rplot_data.png; the output of the model is in saturetedWITHcovariates.png.)

In addition, what if the results of model comparisons differ significantly when using mxTryHard() instead of mxRun()? For example, when comparing a saturated model with no constraints to one where the means are set to be equal? Should the result from mxTryHard() be trusted more?

If the saturated model with covariates needs to be used and checked against constrained saturated models (for mean and variance equality), what modifications should be made to the attached script (saturatedWITHOUTcovariates.R) to allow changes in the mean and variance definitions when covariates are included? At the moment the comparisons can be performed only without the covariates; the script named saturatedWITHcovariates.R runs with covariates but does not test the constrained saturated models.

Any help would be really appreciated. Would be great if you could also point me towards some reading (a step-by-step guide with some explanations would be amazing) that could help with those general questions.

Thank you very much!

Best,

Aurina

First of all, two things before I forget:

1. What is your `mxVersion()` output?

2. What are the measuring units of age in your dataset? If age is measured in years, try dividing it by 100 to convert it to centuries. That's an old trick that can help make the optimization problem more numerically stable. You might also want to switch optimizers, to SLSQP or (if your OpenMx build has it) NPSOL, e.g.

`mxOption(NULL,"Default optimizer","SLSQP")`
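As a concrete sketch (the data-frame and column names below are placeholders; adapt them to however age is stored in your data):

```r
# Rescale age from years to centuries, which keeps the age regression
# coefficient on a scale comparable to the other free parameters
twinData$age1 <- twinData$age1 / 100
twinData$age2 <- twinData$age2 / 100

# Switch the default optimizer for the current R session
mxOption(NULL, "Default optimizer", "SLSQP")
```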

You are interested in the variance decomposition of each trait, so yes, be sure that the mean of the trait is conditioned on the same covariates in the models you compare to one another.

I consider the primary purpose of fitting the saturated and cross-constrained models to be detection of data-handling errors, and of non-error quirks in the dataset (such as outliers). I recommend you fit all of the models that you think are *a priori* plausible, and compare their AICs. Using "stepwise" likelihood-ratio tests for model selection is not advised. If AIC favors the saturated or cross-constrained models over the ACE model for a trait, you'll need to explain why that might be in the ensuing paper or presentation; your subject-matter knowledge about the trait might help you with that. In that case, you could still report a heritability estimate from the biometric model with the smallest AIC.

It could simply be that the model appears locally unidentified at the start values. Try changing start values of zero to something better. Also, my suggestions to re-scale the age variable, use better start values, and use `mxTryHard()` are meant to help with your optimization difficulties (status code 6). Beyond that, I can't really be sure. I'll look again at your script to see if I can spot any possible source of unidentification.

Yes. `mxTryHard()` makes multiple attempts to optimize the model, and randomly perturbs the start values between attempts. `mxRun()` (which is called by `mxTryHard()`) only makes one attempt. Therefore, `mxTryHard()` should give better results.

Thank you so much for your help!

The mxVersion output is 2.8.3 (the full output is attached as mxVersion.png).

This version doesn't have the NPSOL optimizer. Is it worth updating to a version that does?

Age was in years; I've changed it to centuries - thanks!

You were saying: "...I recommend you fit all of the models that you think are a priori plausible, and compare their AICs. Using "stepwise" likelihood-ratio tests for model selection is not advised. If AIC favors the saturated or cross-constrained models over the ACE model for a trait, you'll need to explain why that might be in the ensuing paper or presentation; your subject-matter knowledge about the trait might help you with that. In that case, you could still report a heritability estimate from the biometric model with the smallest AIC."

Please correct me if I'm wrong, but does that mean that in my case I would need to run:

1. the saturated model without constraints (if the assumption is that the data have no outliers and were acquired from the same population - in my study all participants are young adults, and there are no expected population-based differences in the variance or mean of the measure), and

2. the ACE model plus its nested models.

And instead of comparing them using mxCompare and evaluating p-values, I would need to look directly at the AIC values and choose the model with the smallest one? It seems like the saturated model is by definition the one with the lowest AIC, as we don't put any constraints on it, so it can fit the data best. How should the AIC values be interpreted in this case?

Regarding the constrained saturated models: do I still need to test them and consider their AIC values when choosing the best-fitting model, if there is no reason to expect mean/variance differences in the data and I'm interested in the ACE model fit? If they still need to be evaluated, is there a way to do that using the Cholesky parametrisation? (I noticed that the saturated model under the Cholesky parametrisation ended up giving warning 6 while settling on a solution.) And also, how should the AIC values be compared/interpreted if all of those models are considered: saturated (unconstrained), saturated (constrained), ACE, AE, AC, CE?

In addition to all those questions, I've noticed that when running the ACE model (where the standardized additive genetic variance = 0.44, unique environmental variance = 0.53, and shared environmental variance is almost zero), the AE and CE submodels ran fine using mxTryHard. However, the AC model fails to run and gives an error ("All fit attempts resulted in errors - check starting values or model specification"). Does it fail because the unique environmental variance is the largest component (0.53), so the model cannot be fit when its contribution is excluded? Given that the additive genetic variance is also high (0.44), shouldn't the same then happen in the CE model? The current version of the script is attached (ACE_wcovariates_Saturated_nested.R). Is the order of steps in the script correct?

Thank you very much for your help - I really appreciate it!

Best,

Aurina

It's a good idea to update to the newest release, which you can get either from CRAN or from our package repository. You have to install from our package repository if you want NPSOL, though.
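Updating from CRAN is one line (the CRAN build does not include NPSOL; for that, use the installation script from the OpenMx website):

```r
install.packages("OpenMx")  # CRAN build; does not include NPSOL
library(OpenMx)
mxVersion()                 # confirm the new version and available optimizers
```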

Did it make a difference?

My main point here is that likelihood-ratio testing is a poor way to do model-selection, since it only compares two models at a time: a model and a nested submodel. It's possible for, say, a test of an ACE model against a CE model to imply conclusions that differ from a test of an AE model against an E model, and I know from experience that tends to confuse novices. Using the LRT for model-selection is even more objectionable if there is no correction for multiple comparisons, and if the set of models under consideration is dependent upon the results of previous comparisons (in a "stepwise" manner). In contrast, AIC can be used to simultaneously compare any number of models to one another, and they need not be a sequence of nested submodels.

No. AIC = -2logL + 2k, where k is the number of free parameters. Simplistically, AIC is a "penalized fit index," that penalizes model fit to the data (-2logL) proportionally to how many parameters the model needs to estimate from the data. At a deeper level, AIC is an approximation to model-selection via cross-validation. I refer you to a relevant publication of mine, which discusses the theory underlying AIC, and explains my perspective on model-selection. BTW, in the next release of OpenMx, there will be functions to automate model-averaging and multimodel inference.
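As an illustrative sketch of an AIC comparison (the fitted-model objects `fitSat`, `fitACE`, etc. are placeholders for your own, and this assumes an OpenMx version whose `summary()` output includes the `AIC.Mx` element):

```r
# Collect AICs from any set of fitted models; they need not be nested.
fits <- list(saturated=fitSat, ACE=fitACE, AE=fitAE, CE=fitCE, E=fitE)
aics <- sapply(fits, function(m) summary(m)$AIC.Mx)
sort(aics)  # the model with the smallest AIC is preferred
```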

Like I said, the saturated and cross-constrained models are often mainly of interest to detect data-handling errors and legitimate data quirks. They're often not of substantive interest themselves, and it sounds like that's the case for you. Personally, I don't consider them necessary, but I don't see any harm in fitting them, and a lot of people do fit them as a matter of routine. Reviewers of your manuscript may want to know how well the ACE model fits in comparison to the saturated model, though, even if you end up reporting results from one of the biometrical models.

Yes. It will require an MxConstraint, though.
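A sketch of what that could look like, assuming the MZ submodel is named `MZ` and its expected-means algebra is named `expMean` (all names here are placeholders):

```r
# Constrain the model-expected means of twin 1 and twin 2 to be equal.
# An MxConstraint works even when the means are the result of an algebra,
# as they are under a Cholesky parameterization with covariates.
eqMeansMZ  <- mxConstraint(MZ.expMean[1,1] == MZ.expMean[1,2], name="eqMeansMZ")
satEqMeans <- mxModel(satModel, eqMeansMZ, name="SatEqMeansMZ")
satEqMeansFit <- mxTryHard(satEqMeans)
```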

The AC model is a complete non-starter in biometrical variance-components analysis via maximum-likelihood. The AC model implies that the MZ covariance equals the phenotypic variance. That makes the model-expected MZ covariance matrix singular, i.e. not a valid covariance matrix, at every value of the free parameters.
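A minimal numeric illustration of why: with no E component, both the phenotypic variance and the MZ covariance equal a^2 + c^2, so the expected MZ covariance matrix has two identical columns and determinant zero, regardless of the values of a and c.

```r
# Under an AC model, var = a^2 + c^2 and cov(MZ) = a^2 + c^2
a <- 0.66; c <- 0.73      # arbitrary values; any a and c give the same result
V <- a^2 + c^2
expCovMZ <- matrix(c(V, V,
                     V, V), nrow=2)
det(expCovMZ)             # 0: singular, so the multivariate-normal -2logL is undefined
```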

Maybe look around in the materials from the Boulder Workshop from earlier this year? There's also the OpenMx User Guide.

Anyhow, I don't see that any of your saturated model's parameters are unidentified. It's possible that some of them could be *empirically* unidentified due to limitations of your dataset, but I rather doubt that: it's not a complicated model, and it doesn't have that many free parameters. I think `mxCheckIdentification()`'s results are spurious, and probably due to the start values.

Try this (note that it uses a different parameterization of the covariance matrices):
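The code block from the original reply did not survive here, so the following is a reconstruction in that spirit: a two-group saturated model whose covariance matrices are parameterized directly as symmetric matrices rather than via Cholesky factors. Column names (pheno1, pheno2, age1, sex1, ...) and the data frames mzData/dzData are placeholders.

```r
library(OpenMx)
selVars <- c("pheno1","pheno2")   # placeholder phenotype column names

# Build one zygosity group: symmetric covariance matrix, plus means
# regressed on age and sex via definition variables.
mkGroup <- function(name, df, lab) {
  mxModel(name,
    mxData(df, type="raw"),
    mxMatrix("Symm", 2, 2, free=TRUE, values=c(1, .3, 1),
             labels=paste0(c("var1_","cov_","var2_"), lab), name="expCov"),
    mxMatrix("Full", 1, 2, free=TRUE, values=0,
             labels=paste0("b0_", lab, 1:2), name="Intercepts"),
    mxMatrix("Full", 1, 2, free=TRUE, values=0,
             labels=c("betaAge","betaSex"), name="beta"),
    mxMatrix("Full", 2, 2, free=FALSE,
             labels=c("data.age1","data.sex1","data.age2","data.sex2"),
             name="Defs"),
    mxAlgebra(Intercepts + beta %*% Defs, name="expMean"),
    mxExpectationNormal(covariance="expCov", means="expMean",
                        dimnames=selVars),
    mxFitFunctionML())
}

satModel <- mxModel("Saturated",
                    mkGroup("MZ", mzData, "mz"),
                    mkGroup("DZ", dzData, "dz"),
                    mxFitFunctionMultigroup(c("MZ","DZ")))
satFit <- mxTryHard(satModel)
summary(satFit)
```

Because the covariance matrices are estimated directly, their elements get the start values you supply in `values=`, which you can seed from the observed covariance matrices of your data.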