Should I ever need lower bounds in a twin model?
I am currently starting to familiarize myself with different variations of the ACE model, as well as nuclear twin family designs, in OpenMx. Although, coming from econometrics, I had dreaded working with SEMs, so far I find the ride quite exhilarating.
While trying to reproduce certain results from the literature to broaden my understanding, I sometimes encounter a strange behavior: I obtain a parameter estimate that is quite close to what I would expect, with the only issue being that it is negative.
Once I set lower bounds for all parameters where negative values should not occur, the problem vanishes and the estimate in question switches to almost the same value, but positive, therefore allowing me to get to the expected result.
As this has happened multiple times with different variables and models by now, I am confused about the significance of this behavior. Does it indicate a general underlying problem with my setup? Might I have missed some crucial component that ought to be specified in every model? Or is this just something I should expect in twin models, so that I should always set lower bounds to eliminate the possibility of negative variance estimates?
Thanks for your help and continuing development of this great piece of software!
Tobias
sign indeterminacy
More generally, the phenomenon can occur when sources of variance are modeled as latent variables with unit variance, connected to manifest variables via one-headed paths with free parameters on them, which is the case for the "Cholesky" parameterization of biometrical ACE models. Thus, under the Cholesky parameterization, some (but not all) of the one-headed path coefficients that parameterize the _A_, _C_, and _E_ matrices will be sign-indeterminate.
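A minimal numeric illustration (plain R, with a made-up value, no model fitting involved): a one-headed path from a unit-variance latent variable contributes the *square* of the path coefficient to the implied variance, so the likelihood cannot tell the two signs apart.

```r
# the implied variance contribution depends only on the square of the path,
# so a = 0.7 and a = -0.7 fit identically:
a <- 0.7
c(a^2, (-a)^2)  # both 0.49
```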
So, if I'm guessing right about the circumstances in which you observe what you describe, it's not an indicator that something is wrong, though it does have some implications. In particular, you may see confidence intervals for sign-indeterminate parameters for which the lower and upper limits are the same (or nearly the same) in absolute magnitude, but opposite in sign. Such confidence intervals should **not** be interpreted as saying that the parameter is not significantly different from zero.
You can put a lower bound on sign-indeterminate parameters if you want. It usually makes no difference, although bounds can make confidence-interval optimization more difficult. One instance in which you *ought* to place bounds is if you're doing nonparametric bootstrapping, because the bound will prevent the sign from flipping arbitrarily across bootstrap re-sampling.
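In case it helps, here is a hedged sketch of two ways to impose such bounds in OpenMx (the matrix dimensions, model name, and labels below are hypothetical, not taken from your script):

```r
library(OpenMx)

# 1. at matrix creation: bound the diagonal paths of a 2x2 lower-triangular
#    'a' matrix at zero (elements fill column-wise: a11, a21, a22)
aPaths <- mxMatrix(type = "Lower", nrow = 2, ncol = 2, free = TRUE,
                   values = 0.5, labels = c("a11", "a21", "a22"),
                   lbound = c(0, NA, 0), name = "a")

# 2. or, on an existing model, via omxSetParameters():
# myACE <- omxSetParameters(myACE, labels = c("a11", "a22"), lbound = 0)
```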
BTW, if you're using the Cholesky parameterization, you might want to reconsider it.
Glad to hear you like OpenMx!
In reply to sign indeterminacy by AdminRobK
Thanks!
In reply to Thanks! by twolf
You're welcome. Glad to hear
In reply to You're welcome. Glad to hear by AdminRobK
continuing the thread about negative CIs
I would like to ask a follow-up question as I encountered a similar problem.
I also get sign-indeterminate paths in a Cholesky model (this occurs in a constrained parameterization, but also in the full unconstrained model). Accordingly, and as you described, I get confidence intervals that are problematic to interpret (e.g., their sign suggests that the path is not significant when it clearly is). How would you recommend presenting such results? Does it make sense to present the confidence intervals? If not, should I report standard errors and significance tests instead, or some other indication?
Thank you very much for the help
In reply to continuing the thread about negative CIs by lior abramson
make inference about variance components
In reply to make inference about variance components by AdminRobK
continuing the thread about negative CIs
I would like to ask another question on that matter: sometimes I get confidence intervals that are not in a reasonable range. For example, I get the result .49 [-.501, .628] (this path is significant). Thus, even if I present CIs only for the corresponding variance components, the point estimate is not within the range of the CI (24% [25%, 39%]).
Does this make sense? Or is there something wrong with the CI computation? I should note that I used the `umxConfint` command to extract the CIs.
Thanks again
In reply to continuing the thread about negative CIs by lior abramson
output?
That sounds weird. Could you post the actual output from a case like that?
In reply to continuing the thread about negative CIs by lior abramson
Difficulty with CIs on products of parameters
I suspect that the software has found an 'equivalent' CI nearby, because of the way the parameters enter the formula. I think you are likely using a Cholesky parameterization, A = LL', where this problem emerges fairly frequently. The expected covariance matrix depends only on squares and products of the elements of L: the element A12, for example, equals L11 * L21, while the expected variances are L11*L11 and L21*L21 + L22*L22. Flipping the signs of an entire column of L (e.g., L11 and L21 together) therefore leaves the expected covariance matrix, and hence the model fit, unchanged. I suspect the CI algorithm found such a sign-flipped but equally well-fitting solution, and the lower confidence limit it reports is actually the sign-flipped image of a solution near the estimate. Make sense?
Bounding L11 and L22 (the diagonal elements of L) to be non-negative should fix the problem, since that pins down a unique choice of sign for each column of L.
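A quick numeric check of the column-flip invariance described above (plain base R, made-up numbers):

```r
L1 <- matrix(c(0.7, 0.5, 0, 0.6), nrow = 2)  # lower-triangular Cholesky factor
L2 <- L1
L2[, 1] <- -L2[, 1]                          # flip the sign of column 1
all.equal(tcrossprod(L1), tcrossprod(L2))    # TRUE: A = LL' is unchanged
```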
In reply to Difficulty with CIs on products of parameters by AdminNeale
Difficulty with CIs on products of parameters
In reply to Robert, I am attaching (in a PDF document) the input and output of my CIs. The problematic CI (and its estimate) are marked in yellow.
Responding to Neale: You are right, this problem does emerge, for example, in the CI of the D element 12. I would like to ask: when you write L11 and L22, do you mean to bound the lower CI of element 11 and the lower CI of element 22 (i.e., the lower CIs of all the diagonals)? I couldn't find the command that does that...
Thank you again,
Lior
In reply to Difficulty with CIs on products of parameters by lior abramson
alpha level not reached
If you can't get an acceptable profile-likelihood CI for that element, you could instead get robust standard errors and then use `mxSE()` to get standard errors for the element, and form a CI from those standard errors. Or, use bootstrapping.
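For example, something along these lines (a hedged sketch with ordinary, non-robust standard errors; `fit` and the matrix name `D` are hypothetical stand-ins for your fitted model and matrix):

```r
se_D  <- mxSE("D", fit)   # delta-method standard errors for the D matrix
est_D <- mxEval(D, fit)   # corresponding point estimates
# Wald-type 95% CI for the [1,2] element:
est_D[1, 2] + c(-1, 1) * qnorm(0.975) * se_D[1, 2]
```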
In reply to alpha level not reached by AdminRobK
Manually
Bootstrapping is another alternative, but you do need to be careful to exclude the sign invariance: if the model fits as well with, e.g., all the factor loadings multiplied by -1 as it does with them all multiplied by +1, then bounding one loading so that it cannot go negative is needed to avoid the mirror-image equivalent solutions across bootstrap runs.
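A hedged sketch of that workflow (assuming the `mxBootstrap()`/`summary(..., boot.quantile=)` interface; the model name `myACE` and the label `a11` are hypothetical):

```r
myACE <- omxSetParameters(myACE, labels = "a11", lbound = 0)  # pin the sign
fit   <- mxRun(myACE)
boot  <- mxBootstrap(fit, replications = 1000)  # nonparametric bootstrap
summary(boot, boot.quantile = c(0.025, 0.975))  # percentile-style bounds
```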