Bootstrap CIs for algebras

Posted on
Julia Joined: 03/29/2012
Hi.

Is it possible to get bootstrap confidence intervals for algebras? I am running a trivariate Cholesky decomposition model on binary variables, and estimating CIs through mxCI takes more than 24 hours. When I ran mxBootstrap, I got CIs only for the model parameters, not for the algebras. Is there a way to specify this? Or is there a way to reduce the run time of a model with intervals=TRUE?

Thank you in advance!
Julia

Replied on Tue, 05/08/2018 - 06:46
AdminNeale Joined: 03/01/2013

Hi Julia

I don't think there's anything built-in, but in principle you could take the bootstrap estimates one row (i.e., one set of parameter estimates) at a time, use omxSetParameters to update the model, and then use mxEval to obtain the algebras of interest.
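Something along these lines might do it (a rough sketch, not tested against your model: it assumes you have already pulled the bootstrap replications into a matrix called bootEsts, with one row per replication and columns named after the free parameters, and that your model contains an algebra named stdVC; both names are just placeholders):

    # Re-evaluate the algebra at each set of bootstrap parameter estimates.
    algEsts <- t(apply(bootEsts, 1, function(est) {
      m <- omxSetParameters(model, labels = colnames(bootEsts), values = est)
      c(mxEval(stdVC, m, compute = TRUE))  # recompute the algebra of interest
    }))
    # Percentile 95% confidence limits, one column per element of the algebra:
    apply(algEsts, 2, quantile, probs = c(0.025, 0.975))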

However, it might be easier to skip the Cholesky and simply estimate the A, C, and E matrices as symmetric matrices. This might give you quantities closer to what interests you (and avoid some statistical issues with the Cholesky).
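For a trivariate model the specification might look roughly like this (a sketch only; the matrix names and start values are illustrative, and the rest of the script, such as means, thresholds, and the fit function, stays as usual):

    nv <- 3  # number of phenotypes
    # Variance components estimated directly as symmetric matrices;
    # diagonal start values keep the initial phenotypic covariance PD:
    covA <- mxMatrix("Symm", nv, nv, free = TRUE, values = 0.3 * diag(nv), name = "A")
    covC <- mxMatrix("Symm", nv, nv, free = TRUE, values = 0.3 * diag(nv), name = "C")
    covE <- mxMatrix("Symm", nv, nv, free = TRUE, values = 0.4 * diag(nv), name = "E")
    # Expected twin covariance matrices assembled from the components:
    expCovMZ <- mxAlgebra(rbind(cbind(A + C + E, A + C),
                                cbind(A + C, A + C + E)), name = "expCovMZ")
    expCovDZ <- mxAlgebra(rbind(cbind(A + C + E, 0.5 %x% A + C),
                                cbind(0.5 %x% A + C, A + C + E)), name = "expCovDZ")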

Your likelihood-based CIs seem unusually slow; the ordinal data in an ordinal model are the likely culprit. See also mxSE(), which can now produce standard errors for algebras. It is much faster, and not too different from likelihood-based CIs if asymptotic behavior has been attained.
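Usage is a one-liner on the fitted model; for example, for an algebra named stdVC in a fitted model called fit (both names illustrative):

    # Delta-method standard errors for every element of the algebra:
    mxSE(stdVC, fit)
    # Rough Wald-type 95% limits are then estimate +/- 1.96 * SE.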

Cheers
Mike

Replied on Tue, 05/08/2018 - 09:32
Julia Joined: 03/29/2012

In reply to AdminNeale

Thank you, Mike, for a prompt reply! mxSE() is not working with my model, as I have an mxConstraint in the script to constrain the variance of the binary variable to 1. Estimating the A, C, and E matrices instead sounds like a good idea, although I have a question about setting lower bounds for the diagonal elements, which are variances. Nowhere in the Boulder scripts did I find those constrained to be non-negative. Is there a reason for that?
When I let them be estimated without bounds, I get negative variances. When I constrain only the diagonal elements to be greater than or equal to zero, I get negative covariances, which was not the case when I ran the classical Cholesky script. At the same time, I don't feel comfortable forcing the covariances to be non-negative. Could you please advise?
Replied on Wed, 05/09/2018 - 14:08
AdminRobK Joined: 01/24/2014

In reply to Julia

Although I have a question about setting lower bounds for the diagonal elements, which are variances. Nowhere in the Boulder scripts did I find those constrained to be non-negative. Is there a reason for that?

Several reasons, in fact:

  • Constraining variance components to be non-negative isn't theoretically necessary. The parameter space for the covariance matrix of a (nondegenerate) multivariate normal distribution is the set of square symmetric positive-definite matrices. The A, C, and E covariance matrices don't have to be PD; only the phenotypic covariance matrix does (see the quick check after this list).
  • Constraining variance components to be non-negative isn't practically necessary for optimization purposes, either. It is true that if the component A, C, and E covariance matrices are all PD, the phenotypic covariance matrix will necessarily be PD as well, and in the past, exploiting that fact to keep the phenotypic covariance matrix PD at all times may have been necessary for optimization. But it isn't anymore: all of OpenMx's optimizers with which I'm familiar (the 3 main ones and 2 niche ones) are able to recover if they step outside the parameter space.
  • If the true value of a variance component is zero, but its point estimates are constrained to be non-negative, then those point estimates cannot possibly be unbiased.
  • The main reason: when one or more parameter estimates are on or near an "artificial" bound, likelihood-ratio test statistics concerning those parameters will not have their theoretical chi-square null distribution. Since MxCIs invert the LRT, they likewise will not have their desired repeated-sampling coverage probability when point estimates are on or near a bound.
  • Finally, and perhaps most relevant to your post: negative variances are a signal that the fitted model is ill-suited to the data.
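For the first point above, a quick check on a fitted model (call it fit) that contains symmetric component matrices named A, C, and E:

    # The phenotypic covariance is the sum of the components; it should be PD
    # (all eigenvalues positive) even if A, C, or E individually is not:
    phenCov <- mxEval(A + C + E, fit, compute = TRUE)
    eigen(phenCov, symmetric = TRUE, only.values = TRUE)$values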