Hello.
I am running OpenMx under 32-bit Windows (since, as I understand it, it is not possible to run it under 64-bit?). My data file contains approx. 30,000 twin pairs, and I am running a bivariate common sex-specific ACE model with two moderators. It seems that either I have written a very slow script (although the parameter estimates are reasonable) or there are simply too many parameters to estimate (110 in the saturated model). It was taking so much time that, in order to verify that my script actually works, I took only 1000 twin pairs for the analysis. But even then it takes approx. 2.5 h to run the saturated model! Is that normal for this number of parameters, or does it sound like a mistake in the script? If the former, is there a way to speed up the process?
Thank you beforehand.
Julia
It can take this long, especially with lots and lots of raw data. The easiest speed-up is to use covariance matrices, which will be difficult if you're including moderators via definition variables. You can also turn off the calculation of the Hessian matrix. This is used to generate standard errors at the end of optimization. This is a very time-intensive process, and you can disable it in the mxRun command if all you want is the overall model fit.
Thank you so much for this suggestion! Now it takes only 10-15 min for the whole dataset :) Further improvement is still desirable, so I will try it on Linux in order to be able to use more RAM. Thanks again!
The other way to speed things up is parallel processing. I don't know whether this is switched on in your case or not, but this option might speed things up:
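A minimal sketch of checking and setting the thread count via mxOption (this assumes your OpenMx build was compiled with multicore support; the core count of 4 is just an example):

```r
library(OpenMx)

# Query the current global setting
mxOption(NULL, "Number of Threads")

# Set the number of threads globally (example: 4 cores)
mxOption(NULL, "Number of Threads", 4)
```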
To echo Ryne's comment, use mxOption() to disable standard errors and calculated Hessian:
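Something along these lines, assuming your saturated model object is called `satModel` (the model name here is hypothetical):

```r
library(OpenMx)

# Skip standard errors and the Hessian; -2LL and estimates are unaffected
satModel <- mxOption(satModel, "Standard Errors", "No")
satModel <- mxOption(satModel, "Calculate Hessian", "No")

satFit <- mxRun(satModel)
```

You can switch these options back on for a final run once the model specification is settled and you actually need the standard errors.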
In addition, if your model is taking a long time to run, then I would recommend turning on checkpointing: http://openmx.psyc.virginia.edu/docs/OpenMx/latest/File_Checkpointing.html
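For example, a sketch of checkpointing every 10 iterations so a long run can be inspected or resumed (the model name `model` and the count of 10 are illustrative):

```r
library(OpenMx)

# Write a checkpoint file during optimization, every 10 iterations
model <- mxOption(model, "Checkpoint Units", "iterations")
model <- mxOption(model, "Checkpoint Count", 10)

fit <- mxRun(model, checkpoint = TRUE)
```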