In a bivariate ACE Cholesky model, any ideas why the confidence interval for a shared environmental correlation would be returned as (-1.00, 1.00)? My syntax is included below. The model converged without any errors.

```r
bivACEModel <- mxModel("bivACE",
    mxModel("ACE",
        # Matrices a, c, and e to store a, c, and e path coefficients
        mxMatrix( type="Lower", nrow=nv, ncol=nv, free=TRUE, values=.8, labels=c("a11","a21","a22"), name="a" ),
        mxMatrix( type="Lower", nrow=nv, ncol=nv, free=TRUE, values=.8, labels=c("c11","c21","c22"), name="c" ),
        mxMatrix( type="Lower", nrow=nv, ncol=nv, free=TRUE, values=.8, labels=c("e11","e21","e22"), name="e" ),
        # Matrices A, C, and E compute variance components
        mxAlgebra( expression=a %*% t(a), name="A" ),
        mxAlgebra( expression=c %*% t(c), name="C" ),
        mxAlgebra( expression=e %*% t(e), name="E" ),
        # Algebra to compute total variances and standard deviations (diagonal only)
        mxAlgebra( expression=A+C+E, name="V" ),
        mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I" ),
        mxAlgebra( expression=solve(sqrt(I*V)), name="iSD" ),
        # Standardized path coefficients
        mxAlgebra( iSD %*% a, name="sta" ),
        mxAlgebra( iSD %*% c, name="stc" ),
        mxAlgebra( iSD %*% e, name="ste" ),
        # Standardized variance components
        mxAlgebra( A/V, name="StandardizedA" ),
        mxAlgebra( C/V, name="StandardizedC" ),
        mxAlgebra( E/V, name="StandardizedE" ),
        # Genetic, environmental, and phenotypic correlations
        mxAlgebra( solve(sqrt(I*A)) %&% A, name="CorA" ),
        mxAlgebra( solve(sqrt(I*C)) %&% C, name="CorC" ),
        mxAlgebra( solve(sqrt(I*E)) %&% E, name="CorE" ),
        mxAlgebra( solve(sqrt(I*V)) %&% V, name="CorP" ),
        ## Note that the rest of the mxModel statements do not change for the bivariate/multivariate case
        # Matrix & Algebra for expected means vector
        mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=20, name="Mean" ),
        mxAlgebra( expression=cbind(Mean,Mean), name="expMean" ),
        # Algebra for expected variance/covariance matrix in MZ
        mxAlgebra( expression=rbind( cbind(A+C+E, A+C),
                                     cbind(A+C,   A+C+E) ), name="expCovMZ" ),
        # Algebra for expected variance/covariance matrix in DZ; note the use of 0.5, converted to a 1x1 matrix
        mxAlgebra( expression=rbind( cbind(A+C+E,     0.5%x%A+C),
                                     cbind(0.5%x%A+C, A+C+E) ), name="expCovDZ" )
    ),
    mxModel("MZ",
        mxData( observed=mzData, type="raw" ),
        mxFIMLObjective( covariance="ACE.expCovMZ", means="ACE.expMean", dimnames=selVars )
    ),
    mxModel("DZ",
        mxData( observed=dzData, type="raw" ),
        mxFIMLObjective( covariance="ACE.expCovDZ", means="ACE.expMean", dimnames=selVars )
    ),
    mxAlgebra( expression=MZ.objective + DZ.objective, name="minus2sumloglikelihood" ),
    mxAlgebraObjective("minus2sumloglikelihood"),
    mxCI(c('ACE.sta', 'ACE.stc', 'ACE.ste')),
    mxCI(c('ACE.StandardizedA', 'ACE.StandardizedC', 'ACE.StandardizedE')),
    mxCI(c('ACE.CorA', 'ACE.CorC', 'ACE.CorE', 'ACE.CorP'))
)

# intervals=TRUE is required for the mxCI() intervals to be computed
bivACEFit  <- mxRun(bivACEModel, intervals=TRUE)
bivACESumm <- summary(bivACEFit)
bivACESumm
```

Please report the output of `mxVersion()`.

So, I discovered that it was an old version of OpenMx -- 1.4-3060 -- running on an old version of R -- 3.0.0. Here are the confidence intervals for the syntax I ran above:

Note the confidence intervals I'm confused about:

I updated both OpenMx and R and got this:

However, the confidence intervals are still strange for the shared environmental correlation.

Thanks for your help.

You still haven't reported `mxVersion()`, so we know which version you are using.

Version 2.2.4 has new diagnostics for investigating profile CIs. I believe you can access them with `model$compute$steps$CI$output$detail`

Oops -- the first set of results I listed above came from 1.4-3060. But I downloaded the latest OpenMx version for the second set of results that I include above, so I will try running `model$compute$steps$CI$output$detail` when I get home tonight. Thank you for your help!

Hm, on second thought, that syntax won't work for OpenMx 2.2.4. You'll need something like `model$compute$steps[[2]]$output$detail`

See if you can find the output slot of the mxComputeConfidenceInterval object.
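As a rough sketch of how you might locate that step programmatically rather than hard-coding `[[2]]` (here `fit` is a placeholder for your fitted MxModel; this assumes OpenMx 2.x is installed):

```r
library(OpenMx)

# Find which compute step is the confidence-interval step by class,
# then pull out its 'detail' table. 'fit' stands in for your fitted model.
isCI <- sapply(fit$compute$steps,
               function(s) is(s, "MxComputeConfidenceInterval"))
detail <- fit$compute$steps[[which(isCI)]]$output$detail
head(detail)
```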

BTW, what you'll see from

`model$compute$steps[[2]]$output$detail`

is a data table. Each row corresponds to an attempt to find a confidence limit. The first column is the name of the matrix element for which the confidence limit is being calculated (e.g., `ACE.StandardizedC[2,2]`). The second column tells you the value of the limit that was found. The third column tells you whether it's a lower limit (1) or an upper limit (0). The fourth tells you the value of the fit function at the limit. The remaining columns tell you the values of the other parameters at the limit.

Hi all. I really appreciated the helpful comments on this thread and was satisfied that the very small C for the second variable explained this issue. However, I am continuing to have some issues with confidence intervals now that I have transitioned to OpenMx 2.2.4. I'm doing a similar analysis to the one listed above, only it's a scalar model, so raw phenotypic variance is allowed to vary across males and females.

When I run this model (it converges, and I get the OpenMx 2.2.4 message about `mxFIMLObjective` being deprecated), this is what I get:

There seem to be a number of irregularities here. For example, look at the CIs around these selected path estimates.

And then, for the genetic/shared environment correlations, also some strange results:

Unfortunately, when I use the code

`model$compute$steps[[2]]$output$detail`

I get `NULL`. Any insights would be quite valuable for me -- I want to be able to report whether the standardized paths are significant and to have confidence intervals for the genetic and environmental correlations.

Really? What do you see from

`str(bivACEFit$compute$steps)`

?

There's nothing wrong here that jumps out at me...

Since elements [1,2] and [2,1] of 'CorCm' are supposed to be equal due to symmetry (right?), I think you can trust the CI provided for element [1,2]. The CI for [2,1] of 'CorAm' is a problem, though. There are a couple of things you could try. For one, if you installed version 2.2.4 from virginia.edu (i.e., not from CRAN), you could run the line

and then re-run your script. Alternatively, if you only want to know whether element [2,1] of 'CorAm' differs significantly from zero, you could make a new MxModel just like your old one, except with an MxConstraint that fixes the element to zero. Then, use

`mxCompare`

with your new and old models. Finally, if you really need to find the lower confidence limit, you could try running new MxModels that fix the element to various plausible values for the lower limit (again via an MxConstraint). This is rather labor-intensive, but you will have found an approximate lower limit once you find a candidate value that worsens the model's fit by about 3.84 (the .05 critical value of a chi-square on 1 df).

Thank you so very much for all of these excellent ideas. I believe I did install from CRAN, but I will try installing from virginia.edu and using the mxOption, and then I'll report back.
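The constrain-and-compare approach described above can be sketched roughly as follows (the model and algebra names here -- `fit`, `ACE`, `CorAm` -- are placeholders; adjust them to match your own script):

```r
library(OpenMx)

# Make a copy of the fitted model with the [2,1] element of the 'CorAm'
# algebra constrained to zero, then refit. Names are placeholders.
constrained <- mxModel(fit, name = "rAm21_zero",
    mxConstraint(ACE.CorAm[2, 1] == 0, name = "fix_rAm21"))
constrainedFit <- mxRun(constrained)

# A chi-square change greater than about 3.84 on 1 df (p < .05) means
# the correlation differs significantly from zero.
mxCompare(fit, constrainedFit)

qchisq(0.95, df = 1)  # the 3.84 criterion mentioned above
```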

With the first set of CIs that I pasted above, even though there were no errors, I was surprised to see that for these paths the upper and lower limits have almost exactly the same absolute values. This just seemed very curious to me!

Also, I made a silly mistake when I was using `model$compute$steps[[2]]$output$detail`, and now I've got the output that you previously described in an earlier post. What I don't understand is why the upper and lower limits are different from what appears in my model summary. For example, looking at the correlations, this is what appears from `model$compute$steps[[2]]$output$detail` (leaving out the values of the other parameters at the limit for the sake of legibility):

Instructions for installing from the virginia.edu repository are here.

Your MxAlgebra 'stam' is just the Cholesky factor 'am' rescaled by the phenotypic standard deviations. The signs of some of the parameters in the Cholesky factors are indeterminate. For instance, consider the [2,2] element of 'am', which you've labeled 'am22'. It only appears in the 2x2 variance component 'Am' as a square; specifically, the [2,2] element of 'Am' equals am21^2 + am22^2. The CIs where the upper and lower limits have approximately the same absolute magnitude should therefore be correct, given the parameterization of this model. If they still make you uneasy, you could experiment with setting lower bounds of zero on some parameters, or just use a different parameterization.
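A tiny base-R illustration of that sign indeterminacy (the path values here are made up): flipping the sign of the [2,2] Cholesky element leaves the implied variance component matrix unchanged, so the likelihood cannot distinguish the two solutions.

```r
# Two lower-triangular Cholesky factors differing only in the sign of
# the [2,2] element (column-major fill: a11=.8, a21=.5, a22=.6)
a1 <- matrix(c(0.8, 0.5, 0, 0.6), nrow = 2)
a2 <- a1
a2[2, 2] <- -a2[2, 2]

# Both imply exactly the same variance component A = a %*% t(a)
A1 <- a1 %*% t(a1)
A2 <- a2 %*% t(a2)
all.equal(A1, A2)  # TRUE
```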

Notice that the lower limits in the 'detail' output you highlighted are actually greater than the point estimate. I'm not completely sure, but I suspect that

`summary()`

doesn't report confidence limits that are so obviously wrong, and instead just uses the point estimate as the value for the problematic confidence limit and flags it with "!!!" to alert the user that something is amiss.

From your output, the estimate of C for the second variable is very small and not significantly different from zero. Therefore there is very little information with which to estimate the C correlation with other variables, so I expect the result you got is the right answer: rC could be fixed to any value between -1 and 1 without significantly worsening the fit.

This brings up the point that estimates of correlations between latent factors are not distributed the same way as correlations between observed variables. Their precision depends on estimates of other parameters in the model. This is why two-step analyses (estimating, say, a genetic correlation matrix and then plugging it into a factor analysis to explore its structure) can only be exploratory. Standard tests of, e.g., the number of factors would be misleading. The single-step approach, which you are using, is the right way to test such hypotheses.

Ah -- this makes sense. Thank you for this helpful answer. In terms of reporting these results in a coherent way, then, could I include the (-1, 1) confidence interval for the shared environmental correlation and then demonstrate that the model fits just as well when the c21 path is dropped?