VERY BASIC Question about running ACE model

wilco92:

Hi there,

I am extremely green to OpenMx, R, and SEM in general. I've perused some of the introductory materials, which have been helpful. The model I'm trying to run is REALLY simple -- hopefully a refreshing change for the users of this message board -- so I've actually spent a bit of time trying to find a script online that I can use by making some simple edits. I have a group of MZ and DZ twins, and I want to run an ACE model to estimate causes of variation for one (just one!) categorical variable. For each pair of twins, the variable is named "twin1" and "twin2", and it can be "1" or "2".

I attempted to use the script that's on this page ("ACE Model Twin Analysis"), using my data and my variable names:

https://openmx.ssri.psu.edu/docs/OpenMx/latest/GeneticEpi_Matrix.html

However, I am getting the following error message when I attempt to run the model:
Running twinACE
Error: The data object 'MZ.data' contains an observed matrix that is not of type 'double'

Could someone tell me what I might be doing wrong and overlooking? I realize this is somewhat vague, but unfortunately, it's where I'm at right now in my understanding...

Thanks!

tbates:

If you run this in R, what do you see?

names(MZ.data)
summary(MZ.data)

mspiegel:

Hi. I don't know what variable name you used for the data set in your script, but in the "ACE Model Twin Analysis" script the variable name is 'myTwinData', so I'm going to pretend that's the name you selected. Try the commands is.matrix(myTwinData) and is.data.frame(myTwinData). If is.matrix() returns TRUE, then try typeof(myTwinData). It should return 'double', but I'm guessing that it will return 'integer' or 'logical' or something else. If it does, the matrix can be coerced to a floating-point type, as in the sketch below.
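A minimal sketch of those checks, assuming the data set really is named myTwinData. One caveat: as.double() on a matrix drops the dim attribute, so assigning storage.mode() is the safer coercion if the matrix structure must be preserved:

is.matrix(myTwinData)        # TRUE if the data are stored as a matrix
is.data.frame(myTwinData)    # TRUE if stored as a data frame instead
typeof(myTwinData)           # want "double"; "integer" or "logical" needs coercion
storage.mode(myTwinData) <- "double"   # coerce in place, preserving dimensions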

wilco92:

Thank you so much for your prompt replies! I was able to convert my matrices to the numeric format and the model ran just fine. This forum is an excellent resource!

I am now faced with another question -- before I ran the model, I used tetrachoric twin correlations and Falconer's formula to estimate heritability, as well as c2 and e2.

Using the script from the website, and specifically this code at the end:

A <- mxEval(A, twinACEFit)  # additive genetic variance
C <- mxEval(C, twinACEFit)  # shared environmental variance
E <- mxEval(E, twinACEFit)  # non-shared environmental variance
V <- (A+C+E)                # total phenotypic variance
a2 <- A/V                   # standardized components
c2 <- C/V
e2 <- E/V

I come up with a number for a2 that's very similar to my original estimate. But c2 and e2 are REALLY, really, really different. Before I delve into additional resources attempting to understand why this is the case, could anyone think of any obvious places where I could have made a mistake? I made literally no modifications to the code on the website. Thanks again.

neale:

I'm not sure why the c and e estimates differ, but there are several possibilities. Falconer's formula estimates may deviate from maximum-likelihood estimates when the variances differ between MZs and DZs, for example.
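For reference, a minimal sketch of the Falconer calculation, with made-up correlation values standing in for your tetrachorics:

rMZ <- 0.6               # hypothetical tetrachoric MZ correlation
rDZ <- 0.35              # hypothetical tetrachoric DZ correlation
a2 <- 2 * (rMZ - rDZ)    # Falconer heritability estimate
c2 <- 2 * rDZ - rMZ      # shared environment
e2 <- 1 - rMZ            # non-shared environment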

However, what I would like to note is that I think you have used a script for continuous data analysis with ordinal data. This may lead to biased parameter estimates, and model comparisons and confidence intervals are very likely incorrect. There is a script for ordinal twin data analysis on the Boulder Workshop CDROM, which is online at http://ibg.colorado.edu/cdrom2010/rijsdijk/OrdinalTwinAnalysis/
The directory also includes a presentation by Dr. Rijsdijk and associated scripts. Unfortunately, some changes to the OpenMx syntax for ordinal data since then probably cause the script to fail. Fear not! There is a new version in the SGDP summer school materials here:

http://sgdp.iop.kcl.ac.uk/scmaterials/download.php?id=239

We have Dr. Rijsdijk to thank for these valuable contributions.
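As a rough illustration (not Dr. Rijsdijk's script): current OpenMx expects ordinal variables to be declared as ordered factors with mxFactor() before the model is built. Assuming a data frame myTwinData with the 1/2 coding described above:

require(OpenMx)
myTwinData$twin1 <- mxFactor(myTwinData$twin1, levels = c(1, 2))
myTwinData$twin2 <- mxFactor(myTwinData$twin2, levels = c(1, 2))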

tbates:

Looks like KCL have their material protected, or the URL is wrong?

http://sgdp.iop.kcl.ac.uk/scmaterials/download.php?id=239

Access forbidden!

You don't have permission to access the requested directory. There is either no index document or the directory is read-protected.

Error 403

neale:

OK, sorry -- being at SGDP at the moment, I was not aware. Here's the script.

wilco92:

Thanks to all SO much for your assistance! I believe I'm on track now.

In the future, do you have any suggestions for where I can peruse the most up-to-date scripts? And is this forum the best place to check on changes and updates to OpenMx syntax?

garylewis:

Hi Mike,

Any chance of posting up the dataset (Castdata) that script was using? It would be useful to have a play with the script alongside the accompanying data.

neale:

I think it is probably here: http://ibg.colorado.edu/cdrom2010/rijsdijk/OrdinalTwinAnalysis/Binary/ - I don't have access to the IoP directories, so I can't get a hold of it directly, and it's dinner time on that side of the pond...

garylewis:

So it is. Thanks!

wilco92:

Greetings,

I'm working on applying the script cited here to another project -- I'm again estimating ACE components for a binary variable. When I run it, I get the following warning messages:

> univACEOrdFit <- mxRun(univACEOrdModel, intervals=T)
Running univACEOrd
Warning messages:
1: In model 'univACEOrd' while computing the lower bound for 'ACE.A[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
2: In model 'univACEOrd' while computing the lower bound for 'ACE.C[1,1]' NPSOL returned a non-zero status code 1. The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).
3: In model 'univACEOrd' while computing the lower bound for 'ACE.E[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
4: In model 'univACEOrd' while computing the upper bound for 'ACE.E[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
5: In model 'univACEOrd' NPSOL returned a non-zero status code 1. The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).

And then when I do:
univACEOrdSumm

I get:

The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).

Even though the summary says Mx status GREEN, the fact that I have some interspersed Mx status RED means that I cannot trust the estimates, right? I realize I haven't provided many details and will work on posting my full code later -- I was just curious whether anyone knew, off the top of their head, where I am likely to have made a mistake. Thanks.

neale:

In my experience, almost all code Greens (I would say >99%) result in estimates that are correct (i.e., the solution is at the global minimum). With code Red it is less certain, but even then over 90% are false alarms, and the solution can be verified. In your case, the code REDs are associated with estimating the confidence intervals on certain parameters. One way to check the CIs on a parameter is as follows (a sketch of these steps appears after the list):

  1. Fix the parameter in question at its lower or upper CI bound (whichever you want to check).
  2. Leave all other parameters free and re-estimate the model.
  3. Examine how much worse the fit has become compared to the model with all parameters free. For a 95% CI it should be about 3.84 chi-square (-2lnL) units worse, although it may be less, for example if the parameter is at its lower or upper bound.
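A minimal sketch of these steps, assuming a fitted model twinACEFit with a free parameter labelled "a11" and a lower CI bound of .2 to verify (both the label and the value are illustrative):

# fix the hypothetical parameter "a11" at the bound to be checked
ciModel <- omxSetParameters(twinACEFit, labels = "a11", values = 0.2, free = FALSE)
ciModel <- mxRename(ciModel, "ciCheck")
ciFit <- mxRun(ciModel)
# change in -2lnL; roughly 3.84 is consistent with a correct 95% bound
summary(ciFit)$Minus2LogLikelihood - summary(twinACEFit)$Minus2LogLikelihood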

In the event that you are checking a matrix element that is the result of an algebra (i.e., not a free parameter itself, but a function of one or more free parameters), step 1 above needs to be replaced by adding an mxConstraint() that constrains the element in question to its lower or upper CI bound, as in the sketch below.
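A sketch of the constraint approach, assuming the standardized estimate lives in a 1x1 algebra called ACE.h2 inside a model object twinACEModel, with .2 the hypothetical bound to check:

ciModel <- mxModel(twinACEModel, name = "ciCheckAlgebra",
    mxMatrix("Full", nrow = 1, ncol = 1, values = 0.2, name = "bound"),
    mxConstraint(ACE.h2 == bound, name = "pinH2"))
ciFit <- mxRun(ciModel)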

[Note that if the element is a function of only one parameter (say a^2 instead of a) then it is ok to extrapolate from applying the procedure to the parameter itself by applying the inverse of the function of the parameter to find the value to which it should be fixed. For example, suppose the estimate, lower and upper CIs on a^2 are .4, .2 and .7, and we want to verify the .4. One could fix parameter a at sqrt(.4) and see how much the fit has changed from the model in which it is free. This procedure does not work if more than one parameter is involved in the function! The invariance of likelihood-based confidence intervals is a nice property, and one which Std Errors based on the derivatives does not possess.]