OpenMx 2.0 Discussion

AdminRobK Joined: 01/24/2014

Please read before posting: Help us help you!

If you are posting to report an issue with OpenMx 2.0, we ask that your post include, at minimum, the following:

  1. Any relevant error messages, warnings, or other strange output.
  2. The version of OpenMx you're running.
  3. The version of R you're running.
  4. Your operating system and architecture (e.g., "x86_64 Linux").
  5. OpenMx's default optimizer (which will always be NPSOL for OpenMx versions 1.4 or older).

Note that you can see items #2 through #5 above by entering mxVersion() at the R prompt.
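
For example (a minimal sketch; the exact output will vary with your installation), assuming OpenMx is installed:

```r
# Load OpenMx and print version information: the OpenMx version,
# R version, platform/architecture, and the default optimizer.
library(OpenMx)
mxVersion()
```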

AdminRobK Joined: 01/24/2014

Issues with third beta release of OpenMx v2.0

The following issues are mentioned in the third beta's announcement:

  • The Windows binaries of this third beta were compiled without enabling multiple threads (i.e., parallel processing with multiple cores).
  • CSOLNP can still crash or freeze up R, or terminate calculation prematurely, when calculating confidence intervals. This behavior appears to be Windows-specific, and almost entirely confined to 64-bit R.

Veronica_echo Joined: 02/23/2018

A general question about the latent change score model

Hi everyone,

I have been working on a longitudinal project, and our focus is to estimate change. We are interested in fitting a quadratic latent change score model. We are using the attached code as a template, which comes from Grimm et al. (2016), Chapter 18, pages 467-469. I was wondering whether it is possible to estimate the mean and variance of each "dy" and "ly" (which are latent variables) directly in the model specification using OpenMx functions. Thank you very much!
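
In case it helps others reading along: in a RAM-style specification, latent means and variances can be freed with mxPath(). This is only a hedged sketch; the latent variable names ("ly1", "dy1") are placeholders for those in the actual script, and whether freeing all of them leaves the model identified depends on the rest of the latent change score specification:

```r
library(OpenMx)

# Hypothetical latent variables; substitute the names used in your script.
latents <- c("ly1", "dy1")

# Free variances: two-headed paths from each latent to itself.
varPaths  <- mxPath(from = latents, arrows = 2, free = TRUE,
                    values = 1, labels = paste0("var_", latents))

# Free means: one-headed paths from the constant "one" to each latent.
meanPaths <- mxPath(from = "one", to = latents, arrows = 1, free = TRUE,
                    values = 0, labels = paste0("mean_", latents))

# These mxPath objects would then be added to the mxModel() call
# alongside the rest of the latent change score specification.
```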

Veronica_echo Joined: 02/23/2018

Can mxSE() calculate the standard error for a formula containing definition variables?

Hi everyone,

Hope you are enjoying the winter break!

I am trying to estimate a derived parameter using mxEval() and mxSE(). The function mxEval() works well, but mxSE() produces NA for the standard error, along with the warning message below.

"1: In mxSE(paste0("instant_rate_est", j), model = model, ... :
Some diagonal elements of the repeated-sampling covariance matrix of the point estimates are less than zero or NA.
I know, right? Set details=TRUE and check the 'Cov' element of this object. "
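
Following the warning's own suggestion (a sketch; "instant_rate_est1" stands in for the actual algebra name built with paste0, and `model` for the poster's fitted model):

```r
# Re-run mxSE() with details=TRUE to get the full repeated-sampling
# covariance matrix of the point estimates, then inspect its diagonal
# for the negative or NA entries the warning refers to.
res <- mxSE("instant_rate_est1", model = model, details = TRUE)
diag(res$Cov)   # negative or NA entries here explain the NA standard error
```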

mirusem Joined: 11/19/2018

mxSE standard errors for nested models (AE, etc.).

I noticed that the standard errors computed for, say, the AE model are identical for the A and E estimates. Is this expected? I just want to be sure. I know SEs generally aren't recommended for constructing CIs, but I am doing this for the sake of a different analysis, so I just want to confirm this is what would be expected. I noticed this isn't the case for the full model (ACE, etc.).

mirusem Joined: 11/19/2018

General question about assumptions

So it seems to me that there are two key things one should do to obtain unbiased estimates from the ACE model. One is making sure the classical twin design assumptions are met (e.g., variances equal across groups). On top of this, I am wondering whether normality should be explicitly tested for as well. I had read that if the assumptions needed for a classical twin design (those relating to mean and variance equality) are violated, that would indicate the data are potentially nonnormal.

mirusem Joined: 11/19/2018

Question about error codes in general

So it looks like umxConfint() doesn't report error codes by default. I am wondering: if there were no NaNs, does that necessarily mean there were no errors? Should I specifically look at the error codes even if estimates/parameters were returned without an issue? If so, are there specific error codes that are certainly unacceptable (or are all of them aside from 0?), and should they be checked for every type of model, even ones that ran successfully without any explicit warning in umx?
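
Not an authoritative answer, but in plain OpenMx the optimizer's status code is stored in the fitted model and can be checked directly (a sketch; `model` is a hypothetical mxModel):

```r
# After running a model, inspect the optimizer's status code explicitly
# rather than relying on the absence of NaNs in the estimates.
fit <- mxRun(model)
fit$output$status$code
# Codes 0 and 1 are generally regarded as acceptable ("green");
# other codes (e.g., 5 or 6) warrant closer inspection of the solution.
```
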

mirusem Joined: 11/19/2018

tryHard output

Before I used the tryHard option I wasn't having any issues per se, but when I did incorporate it, the output for a phenotype would sometimes be, for example:

Retry limit reached; Best fit=-353.49615 (started at -352.57463) (11 attempt(s): 2 valid, 9 errors)

Does that mean it was a success, and it just picks the best of the valid solutions? It is certainly different from the output:

Solution found! Final fit=-332.97766 (started at -295.8754) (2 attempt(s): 2 valid, 0 errors)

EmilieRH Joined: 09/24/2020

Full bivariate moderation model

Hi all,

I am trying to investigate how school performance moderates the genetic and environmental influences on intelligence by running a full bivariate moderation model (à la the extended GxE model in Purcell, 2002). However, my OpenMx results differ substantially from what my collaborator obtained using the old Mx software, so I really hope that one of you can help me figure out what I have done wrong in the attached script.

mirusem Joined: 11/19/2018

Lower bound confidence interval NaN from time to time

Hi,

I was wondering about some lower-bound NaN values I've come across. For example, when the AE model has C set to 0, the C variance component can have a NaN lower bound on its CI even though the lower bound, estimate, and upper bound should all be 0. I have also had NaN values for the lower bound of the confidence interval for the A standardized variance component from time to time. Does anyone know why this may be? Is there a work-around to adjust it? It affects relatively few estimates, but it does come up.
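
For anyone debugging this, OpenMx can print per-interval diagnostics that often explain an NaN bound (a sketch; `fit` is a hypothetical fitted model containing mxCI() requests):

```r
# A verbose summary includes confidence-interval details, with
# per-bound diagnostics (e.g., whether a bound sits on a constraint),
# which can indicate why a particular lower bound came back NaN.
summary(fit, verbose = TRUE)
```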