
Should I ever need lower bounds in a twin model?

twolf's picture
Joined: 04/24/2020 - 10:51
Should I ever need lower bounds in a twin model?

Hello,

I am currently familiarizing myself with different variations of the ACE model as well as nuclear twin family designs in OpenMx. Although, coming from econometrics, I had dreaded working with SEMs, so far I find the ride quite exhilarating.

While trying to reproduce certain results from the literature to broaden my understanding, I sometimes encounter a strange behavior: I obtain a parameter estimate that is quite close to what I would expect, except that it is negative.

Once I set lower bounds on all parameters for which negative values should not occur, the problem vanishes: the estimate in question is almost the same in magnitude but positive, which lets me arrive at the expected result.

As this has happened multiple times with different variables and models by now, I am confused about what this behavior means. Does it indicate a general underlying problem with my setup? Might I have missed some crucial component that ought to be specified in every model? Or is this just something I should expect in twin models, so that I should always set lower bounds to rule out negative variance estimates?

Thanks for your help and continuing development of this great piece of software!
Tobias

AdminRobK's picture
Joined: 01/24/2014 - 12:15
sign indeterminacy

You don't provide any particular instances of the phenomenon you describe, but there are some common cases in which it is likely to occur and is benign. Under some model parameterizations, there will be free parameters that are indeterminate with regard to sign. A very simple example would be if you were analyzing only one variable (i.e., the covariance matrix would actually be a scalar, the variable's variance), and you decided to parameterize the variance in terms of its signed square root--that is, the free parameter in question would be the signed square root, and the model-expected variance would be the square of the free parameter. In that case, there would be two equally good solutions for the free parameter: one at which it equals the MLE of the standard deviation, and the other at which it equals -1 times the MLE of the SD; both imply the same model-expected variance.
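
For instance, here is a minimal sketch of that one-variable case (the variable name, parameter labels, and starting values are just illustrative):

```r
library(OpenMx)

# One observed variable whose variance is parameterized as the square of a
# free parameter ("sd", the signed square root), so +sd and -sd fit equally well.
set.seed(1)
dat <- data.frame(x = rnorm(500, mean = 0, sd = 2))

mod <- mxModel(
  "signedSqrt",
  mxMatrix(type = "Full", nrow = 1, ncol = 1, free = TRUE, values = 1,
           labels = "sd", name = "SD"),
  mxAlgebra(SD %*% SD, name = "expCov"),   # model-expected variance = sd^2
  mxMatrix(type = "Full", nrow = 1, ncol = 1, free = TRUE, values = 0,
           labels = "mu", name = "expMean"),
  mxExpectationNormal(covariance = "expCov", means = "expMean", dimnames = "x"),
  mxFitFunctionML(),
  mxData(dat, type = "raw")
)

fit1 <- mxRun(mod)
# Start the optimizer on the other side of zero: it converges to -sd with an
# essentially identical -2lnL, because only sd^2 enters the likelihood.
fit2 <- mxRun(omxSetParameters(mod, labels = "sd", values = -1))
c(omxGetParameters(fit1)["sd"], omxGetParameters(fit2)["sd"])
```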

More generally, the phenomenon can occur when sources of variance are modeled as coming from latent variables with unit variance, connected to manifest variables via a one-headed path with a free parameter on it--which is the case for the "Cholesky" parameterization of biometrical ACE models. Thus, under the Cholesky parameterization, some (but not all) of the one-headed path coefficients that parameterize the A, C, and E matrices will be sign-indeterminate.
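
To make that concrete, the additive-genetic part of a bivariate Cholesky model is typically set up along these lines (labels and starting values are illustrative):

```r
# Bivariate additive-genetic block under the Cholesky parameterization.
# 'a' holds the free one-headed paths from unit-variance latent factors;
# only A = a %*% t(a) enters the expected covariance, which is why some
# elements of 'a' are indeterminate with regard to sign.
aCh <- mxMatrix(type = "Lower", nrow = 2, ncol = 2, free = TRUE, values = 0.5,
                labels = c("a11", "a21", "a22"), name = "a")
A   <- mxAlgebra(a %*% t(a), name = "A")
```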

So, if I'm guessing right about the circumstances in which you observe what you describe, it's not an indicator that something is wrong, though it does have some implications. In particular, you may see confidence intervals for sign-indeterminate parameters for which the lower and upper limits are the same (or nearly the same) in absolute magnitude, but opposite in sign. Such confidence intervals should not be interpreted as saying that the parameter is not significantly different from zero.

You can put a lower bound on sign-indeterminate parameters if you want. It usually makes no difference, although bounds can make confidence-interval optimization more difficult. One instance in which you ought to place bounds is if you're doing nonparametric bootstrapping, because the bound will prevent the sign from flipping arbitrarily across bootstrap re-sampling.
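
For example (the parameter labels and the model object 'myModel' are just placeholders), a lower bound can be attached either when the matrix of path coefficients is defined or afterwards by label:

```r
# Option 1: set the bound when defining the matrix of Cholesky paths;
# only the diagonal elements are bounded (NA leaves a21 unbounded).
aCh <- mxMatrix(type = "Lower", nrow = 2, ncol = 2, free = TRUE, values = 0.5,
                labels = c("a11", "a21", "a22"), lbound = c(0, NA, 0), name = "a")

# Option 2: add bounds to an already-built model by parameter label.
boundedModel <- omxSetParameters(myModel, labels = c("a11", "a22"), lbound = 0)
```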

BTW, if you're using the Cholesky parameterization, you might want to reconsider it in light of the literature comparing it with directly estimating the variance components.

Glad to hear you like OpenMx!

twolf's picture
Joined: 04/24/2020 - 10:51
Thanks!

I am relieved - you precisely describe all the situations in which I encountered this issue. Thank you especially for the literature recommendation. I will rethink whether using the Cholesky parameterization is warranted and compare its results to the direct estimation approach.

AdminRobK's picture
Joined: 01/24/2014 - 12:15
You're welcome. Glad to hear

You're welcome. Glad to hear it.

lior abramson's picture
Joined: 07/21/2017 - 13:13
continuing the thread about negative CIs

Hi,
I would like to ask a follow-up question as I encountered a similar problem.

I also get sign-indeterminate paths in a Cholesky model (this occurs when I constrain the parameterization, but also in the full unconstrained model). Accordingly, and as you described, I get confidence intervals that are problematic to interpret (e.g., their sign suggests that the path is not significant when it clearly is). How would you recommend presenting such results? Does it make sense to present the confidence intervals? If not, should I report standard errors and significance tests instead, or some other indication?

Thank you very much for the help

AdminRobK's picture
Joined: 01/24/2014 - 12:15
make inference about variance components

Present confidence intervals for the corresponding variance components, not for the one-headed paths that are sign-indeterminate. If you really want to test the null hypothesis that a one-headed path is zero, then either fit another model that fixes it to zero and do a likelihood-ratio test, or perhaps get robust standard errors (which is not possible for all MxModels) and do a Wald test.
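
A sketch of the likelihood-ratio route (the fitted-model object 'fittedACE' and the path label "a21" are assumptions about your script):

```r
# Fix the sign-indeterminate path at zero in a restricted model, refit, and
# compare to the full model with a likelihood-ratio (chi-square difference) test.
nullModel <- omxSetParameters(fittedACE, labels = "a21", free = FALSE, values = 0)
nullFit   <- mxRun(nullModel)
mxCompare(fittedACE, nullFit)
```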

lior abramson's picture
Joined: 07/21/2017 - 13:13
continuing the thread about negative CIs

Thank you very much for your fast reply!
I would like to ask another question on that matter: sometimes I get confidence intervals that are not in a reasonable range. For example, I get the result .49 [-.501, .628] (this path is significant). Thus, even if I present CIs only for the corresponding variance components, the point estimate is not within the range of the CI (24% [25%, 39%]).

Does this make sense? Or, is there something wrong with the CI computation? I should note that I used the 'umxConfint' command to extract the CIs.

Thanks again

AdminRobK's picture
Joined: 01/24/2014 - 12:15
output?

That sounds weird. Could you post the actual output from a case like that?

AdminNeale's picture
Joined: 03/01/2013 - 14:09
Difficulty with CIs on products of parameters

Hi

I suspect that the software has found an 'equivalent' CI nearby, because of how the model is parameterized. You are likely using a Cholesky parameterization, A = LL', where this problem emerges fairly frequently. If you are looking at the CI of the element A12, it equals L11*L21, whereas the expected variances are L11*L11 and L21*L21 + L22*L22. The expected covariance matrix is therefore unchanged if the sign of a whole column of L is flipped (L11 together with L21, or L22 on its own). I suspect the CI algorithm hit one of these equally well-fitting, sign-flipped solutions and returned a nearby lower limit that is actually a sign-flipped lower CI. Make sense?
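
A quick numeric illustration of that invariance (values are arbitrary):

```r
# Flipping the sign of an entire column of L leaves A = L %*% t(L) unchanged,
# so the data cannot identify the sign of those path coefficients.
L1 <- matrix(c(0.7, 0.4, 0, 0.5), nrow = 2)   # lower-triangular L, column-major
L2 <- L1
L2[, 1] <- -L2[, 1]                           # flip the first column (L11 and L21)
all.equal(L1 %*% t(L1), L2 %*% t(L2))         # TRUE
```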

Bounding L11 and L22 to be non-negative should fix the problem.
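
One command that does this (assuming the diagonal paths are labeled "L11" and "L22" in your script, and 'yourModel' is your MxModel) is mxBounds():

```r
# Add lower bounds of zero on the diagonal Cholesky paths, then re-run.
boundedModel <- mxModel(yourModel, mxBounds(parameters = c("L11", "L22"), min = 0))
boundedFit   <- mxRun(boundedModel)
```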

lior abramson's picture
Joined: 07/21/2017 - 13:13
Difficulty with CIs on products of parameters

Thank you both Robert and Neale for the help.

In reply to Robert, I am attaching (in a PDF document) the input and output of my CIs. The problematic CI (and its estimate) are marked in yellow.

Responding to Neale: You are right, this problem does emerge, for example, in the CI of the D element 12. I would like to ask: when you write L11 and L22, do you mean to bound the lower CI of element 11 and the lower CI of element 22 (i.e., the lower CIs of all the diagonals)? I couldn't find the command that does that...

Thank you again,
Lior

AdminRobK's picture
Joined: 01/24/2014 - 12:15
alpha level not reached

The lower confidence limit you highlight is flagged as "alpha level not reached"; the fitfunction value at that lower limit is way too high, meaning that the resulting confidence interval has expected coverage probability much higher than 95%. You were correct to be suspicious about it, since it's probably a result of an optimization failure.

If you can't get an acceptable profile-likelihood CI for that element, you could instead get robust standard errors and then use mxSE() to get standard errors for the element, and form a CI from those standard errors. Or, use bootstrapping.
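
A sketch of the standard-error route (the algebra name "A" and the fitted-model object 'fittedACE' are assumptions about your script):

```r
# Delta-method standard error for an algebra element, then a Wald-type 95% CI
# formed around the point estimate.
est <- mxEval(A[1, 2], fittedACE)
se  <- mxSE(A[1, 2], fittedACE)
c(lower = est - 1.96 * se, estimate = est, upper = est + 1.96 * se)
```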

AdminNeale's picture
Joined: 03/01/2013 - 14:09
Manually

Sometimes it is worthwhile to obtain a CI manually. For a free parameter it is easy: fix it at values a bit above/below the MLE and refit the model, watching the -2lnL until its increase over the minimum reaches ~3.84. For an element of an mxAlgebra, you would need to add a non-linear constraint that equates the element to each trial value as you move progressively further from the MLE. Yes, this is tedious, which is why mxCI() exists, but for one or two problematic elements it may be worthwhile.
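
A sketch of that procedure for a free parameter (the label "a21" and fitted-model object 'fittedACE' are placeholders):

```r
# Profile a free parameter by hand: fix it at a grid of values around the MLE,
# refit at each value, and see where -2lnL has risen by about 3.84.
mleValue  <- omxGetParameters(fittedACE)["a21"]
m2llAtMLE <- fittedACE$output$Minus2LogLikelihood
grid <- seq(mleValue - 0.3, mleValue + 0.3, by = 0.05)
rise <- sapply(grid, function(v) {
  m <- omxSetParameters(fittedACE, labels = "a21", free = FALSE, values = v)
  mxRun(m, silent = TRUE)$output$Minus2LogLikelihood - m2llAtMLE
})
cbind(value = grid, minus2LL.increase = rise)
```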

Bootstrapping is another alternative, but you do need to be careful about invariance under sign flips: if the model fits just as well with, e.g., all the factor loadings multiplied by -1 as it does with them all multiplied by +1, then a lower bound keeping one loading non-negative is needed to avoid mirror-image equivalent solutions across bootstrap replications.
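
For example (label "a11" assumed), pinning the sign with one bound before resampling:

```r
# Keep one loading non-negative so bootstrap replications cannot jump to the
# mirror-image solution, then resample.
pinnedModel <- mxModel(fittedACE, mxBounds("a11", min = 0))
pinnedFit   <- mxRun(pinnedModel)
bootFit     <- mxBootstrap(pinnedFit, replications = 200)
summary(bootFit)   # bootstrap standard errors appear in the summary
```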