VERY BASIC Question about running ACE model

I am extremely green to OpenMx, R, and SEM in general. I've perused some of the introductory materials, which have been helpful. The model I'm trying to run is REALLY simple -- hopefully a refreshing change for the users of this message board -- so I've spent a bit of time trying to find a script online that I can adapt with some simple edits. I have a group of MZ and DZ twins, and I want to run an ACE model to estimate causes of variation for one (just one!) categorical variable. For each pair of twins, the variable is named "twin1" and "twin2", and it can be "1" or "2".
I attempted to use the script on this page ("ACE Model Twin Analysis"), using my data and my variable names:
http://openmx.psyc.virginia.edu/docs/OpenMx/latest/GeneticEpi_Matrix.html
However, I am getting the following error message when I attempt to run the model:
Running twinACE
Error: The data object 'MZ.data' contains an observed matrix that is not of type 'double'
Could someone tell me what I might be doing wrong and overlooking? I realize this is somewhat vague, but unfortunately, it's where I'm at right now in my understanding...
Thanks!
If you run this in R, what do
names(MZ.data)
and
summary(MZ.data)
return?
Hi. I don't know what
is.matrix(myTwinData)
and
is.data.frame(myTwinData)
return. If is.matrix returns TRUE, then try
typeof(myTwinData)
It should return 'double' (floating point), and I'm guessing that it will instead return 'integer' or 'logical' or something else. If it returns 'integer', then the matrix can be coerced into a floating-point type using
myTwinData <- as.double(myTwinData)
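A minimal sketch of that diagnosis, using made-up data under the name myTwinData (substitute whatever object feeds mxData() in your script). One caveat worth knowing: as.double() on a matrix returns a plain vector with the dim attribute dropped, so storage.mode<- is the safer coercion if the object needs to stay a matrix.

```r
# Made-up twin data stored as an integer matrix, mimicking the error:
# mxData() wants type 'double', but 1/2 codes often arrive as integers.
myTwinData <- matrix(c(1L, 2L, 2L, 1L), nrow = 2,
                     dimnames = list(NULL, c("twin1", "twin2")))

is.matrix(myTwinData)   # TRUE
typeof(myTwinData)      # "integer" -- this is what trips up mxData()

# Coerce the storage type to double; dims and dimnames are preserved,
# unlike as.double(), which would flatten the matrix to a vector.
storage.mode(myTwinData) <- "double"
typeof(myTwinData)      # "double"
```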
Thank you so much for your
I am now faced with another question -- before I ran the model, I used tetrachoric twin correlations to estimate heritability using Falconer's formula, as well as c2 and e2.
Using the script from the website, and specifically this code at the end:
A <- mxEval(A, twinACEFit)
C <- mxEval(C, twinACEFit)
E <- mxEval(E, twinACEFit)
V <- (A+C+E)
a2 <- A/V
c2 <- C/V
e2 <- E/V
I come up with a number for a2 that's very similar to my original estimate. But c2 and e2 are REALLY, really, really different. Before I delve into additional resources attempting to understand why this is the case, could anyone think of any obvious places where I could have made a mistake? I made literally no modifications to the code on the website. Thanks again.
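For reference, the Falconer estimates being compared against here take only a couple of lines of base R. The correlations below are placeholder values, not the poster's data; substitute your own tetrachoric rMZ and rDZ.

```r
# Falconer's formula from twin correlations (placeholder values).
rMZ <- 0.6   # MZ tetrachoric correlation
rDZ <- 0.4   # DZ tetrachoric correlation

a2 <- 2 * (rMZ - rDZ)   # additive genetic variance (A)
c2 <- 2 * rDZ - rMZ     # shared environment (C)
e2 <- 1 - rMZ           # nonshared environment (E)

c(a2 = a2, c2 = c2, e2 = e2)   # the three components sum to 1
```

If a2 agrees with the model but c2 and e2 do not, comparing these two lines of arithmetic against the mxEval() output above is a quick way to localize the discrepancy.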
In reply to Thank you so much for your by wilco92
I'm not sure why the c and e
However, what I would like to note is that I think you have used a script for continuous data analysis but with ordinal data. This may lead to biased parameter estimates, and model comparisons & confidence intervals are very likely incorrect. There is a script for Ordinal twin data analysis in the Boulder Workshop CDROM, which is online at http://ibg.colorado.edu/cdrom2010/rijsdijk/OrdinalTwinAnalysis/
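A minimal sketch of the ordinal-data preparation step, assuming the OpenMx package and a hypothetical data frame twinData using the twin1/twin2 coding from the original post. The key move is converting the 1/2 codes into ordered factors with mxFactor(), so that OpenMx fits a threshold model rather than treating the codes as continuous scores.

```r
library(OpenMx)

# Hypothetical binary twin data coded 1/2, as in the original post.
twinData <- data.frame(twin1 = c(1, 2, 2, 1),
                       twin2 = c(2, 2, 1, 1))

# mxFactor() turns each variable into an ordered factor, which tells
# OpenMx to estimate thresholds instead of means/variances directly.
twinData$twin1 <- mxFactor(twinData$twin1, levels = c(1, 2))
twinData$twin2 <- mxFactor(twinData$twin2, levels = c(1, 2))

str(twinData)   # both columns are now ordered factors: "1" < "2"
```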
The directory also includes a presentation by Dr. Rijsdijk, and associated scripts. Unfortunately, some changes to the OpenMx syntax for ordinal data since then probably cause the script to fail. Fear not! There is a new version in the SGDP summer school materials here:
http://sgdp.iop.kcl.ac.uk/scmaterials/download.php?id=239
We have Dr. Rijsdijk to thank for these valuable contributions.
In reply to I'm not sure why the c and e by neale
looks like kcl have their
http://sgdp.iop.kcl.ac.uk/scmaterials/download.php?id=239
Access forbidden!
You don't have permission to access the requested directory. There is either no index document or the directory is read-protected.
If you think this is a server error, please contact the webmaster.
Error 403
sgdp.iop.kcl.ac.uk
Fri Jul 16 11:57:30 2010
Apache/2.2.3 (Linux/SUSE)
In reply to looks like kcl have their by tbates
Ok, sorry, being at SGDP at
In reply to Ok, sorry, being at SGDP at by neale
Thanks to all SO much for
In the future, do you have any suggestions for where I can peruse the most updated scripts? And is this forum the best place to check on changes and updates to OpenMX syntax?
In reply to Ok, sorry, being at SGDP at by neale
Hi Mike, Any chance of
Any chance of posting up the dataset (Castdata) that script was using? Would be useful to have a play with the script with the accompanying data.
In reply to Hi Mike, Any chance of by garylewis
I think it is probably here:
In reply to I think it is probably here: by neale
So it is: Thanks!
Greetings, I'm working on
I'm working on applying the script cited here for another project -- I'm still estimating ACE estimates for a binary variable. When I run it, I'm getting the following error message:
> univACEOrdFit <- mxRun(univACEOrdModel, intervals=T)
Running univACEOrd
Warning messages:
1: In model 'univACEOrd' while computing the lower bound for 'ACE.A[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
2: In model 'univACEOrd' while computing the lower bound for 'ACE.C[1,1]' NPSOL returned a non-zero status code 1. The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).
3: In model 'univACEOrd' while computing the lower bound for 'ACE.E[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
4: In model 'univACEOrd' while computing the upper bound for 'ACE.E[1,1]' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
5: In model 'univACEOrd' NPSOL returned a non-zero status code 1. The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).
And then when I do:
univACEOrdSumm
I get:
The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. NPSOL was terminated because no further improvement could be made in the merit function (Mx status GREEN).
Even though the summary says Mx status GREEN, the fact that I have some interspersed Mx status RED means that I cannot trust the estimates, right? I realize I haven't provided many details and will work on posting my full code later -- I was just curious whether anyone knew off the top of their head where I am likely to have made a mistake. Thanks.
In reply to Greetings, I'm working on by wilco92
In my experience, it is worth verifying by hand almost any confidence limit that returns a non-zero status code:
1. Fix the parameter in question at its lower or upper CI bound (whichever you want to check).
2. Leave all other parameters free and re-estimate the model.
3. Examine how much worse the fit has become compared to the model with all parameters free. It should be about 3.84 chi-square (-2lnL) units worse, although it may be less, for example if the parameter is at its lower or upper bound.
In the event that you are checking a matrix element that is the result of an algebra (i.e., not a free parameter but a function of one or more free parameters), step 1 above needs to be replaced by adding an mxConstraint() that constrains the element in question to its lower or upper CI.
[Note that if the element is a function of only one parameter (say a^2 instead of a), then it is OK to apply the procedure to the parameter itself, using the inverse of the function to find the value at which it should be fixed. For example, suppose the estimate, lower CI and upper CI on a^2 are .4, .2 and .7, and we want to verify the lower bound of .2. One could fix parameter a at sqrt(.2) and see how much the fit has changed from the model in which it is free. This procedure does not work if more than one parameter is involved in the function! The invariance of likelihood-based confidence intervals is a nice property, and one which standard errors based on derivatives do not possess.]
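The fix-and-refit check above can be sketched in a few lines. This is only a sketch: the model name 'twinACEFit' and the parameter label "a" are assumptions carried over from the earlier ACE script, so adjust both to match your own model.

```r
library(OpenMx)

# Step 1: fix the parameter at the bound being checked. Assuming the
# lower CI on a^2 is .2 and a^2 depends only on the single parameter
# labelled "a", fix a at sqrt(.2).
checkModel <- omxSetParameters(twinACEFit, labels = "a",
                               free = FALSE, values = sqrt(0.2))

# Step 2: re-estimate with all other parameters left free.
checkFit <- mxRun(checkModel)

# Step 3: compare -2lnL between the constrained and free models;
# a genuine 95% limit should cost about qchisq(.95, 1) units of fit.
fitChange <- summary(checkFit)$Minus2LogLikelihood -
             summary(twinACEFit)$Minus2LogLikelihood
qchisq(0.95, df = 1)   # reference value, about 3.84
```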