Attachment | Size
---|---
Multigroup_LinearGM_Output.txt | 330.56 KB

I am estimating a linear latent growth model in two twin samples, along with a common pathway model in the same samples. The latent slope and intercept are free to correlate with each other and with the latent factor from the common pathway model. I am trying to compute confidence intervals using mxCI and I am getting errors. When I use the verbose summary, many of my estimates get the diagnostic and status code "alpha level not reached iteration limit/blue". I've looked through the documentation and other forum posts, but I haven't been able to find a solution. I tried mxTryHard (output not included here), but the model converged after 1 attempt and I still got the same diagnostic on the confidence intervals. Does anyone know why this might be occurring, or a possible fix? I've attached the full output in a text file (script included in the output). Thanks!

In v2.9.6, there is a bug in the iteration-limit check: the limit is applied cumulatively across optimizations instead of per-optimization. Therefore, you'll need to set a limit high enough that it will not end optimization prematurely. I suggest:

`mxOption(NULL, 'Major iterations', 1e9)`
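For instance, the option can be set globally before the model is run; this is only a sketch, and the model name `myGrowthModel` is a placeholder, not taken from the attached script:

```r
library(OpenMx)

# Raise the major-iteration limit globally so the cumulative
# iteration count (the v2.9.6 bug) never ends optimization early.
mxOption(NULL, "Major iterations", 1e9)

# 'myGrowthModel' stands in for your MxModel containing mxCI() requests.
fit <- mxRun(myGrowthModel, intervals = TRUE)
summary(fit, verbose = TRUE)
```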

Thanks for getting back to me! I've made the change to the iteration limit, and that seems to have made a difference, but I've got a few new errors I'm having difficulty sorting out. I've attached the full output and noted the errors here.

Some of the confidence intervals have a diagnostic of "success", but some of them have a diagnostic of "active box constraint" and all of them have the status of "nonzero gradient/red".

When I run the mxRefModels command on my fitted model, I receive the warnings below.

Running Saturated LinearGrowthACE with 418 parameters

Running Independence LinearGrowthACE with 104 parameters

Warning messages:

1: In model 'Saturated LinearGrowthACE' Optimizer returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)

2: In model 'Independence LinearGrowthACE' Optimizer returned a non-zero status code 1. The final iterate satisfies the optimality conditions to the accuracy requested, but the sequence of iterates has not yet converged. Optimizer was terminated because no further improvement could be made in the merit function (Mx status GREEN).

The model itself has a status code of 0; it's just the reference models that seem to have the error, which is something I haven't had happen before. Is the nonzero gradient error in my confidence intervals related to the nonzero gradient in the saturated model, or could those errors be unrelated? My current plan is to rerun the model from the results of the first run, as recommended on the OpenMx errors page for code 6 errors, but I thought I'd ask here in the meantime, since this script has a fairly long runtime.

Thanks for your help!

"Active box constraint" means that some parameter other than the one whose bound you are trying to find is hitting a box constraint. That's probably not a big concern. I'm also not too concerned about "nonzero gradient/red", given that the optimization is otherwise successful.

> Is the nonzero gradient error in my confidence intervals related to the nonzero gradient in the saturated model, or could those errors be unrelated?

Unrelated. I wouldn't worry too much about the gradient error in the confidence intervals, but it would be nice to eliminate the error for the saturated model. Have you tried mxTryHard on the saturated model?

If you're using NPSOL or CSOLNP, then a confidence limit can have "nonzero gradient/red" and still be valid, as I explain here.

I am not so sure that the "active box constraint" diagnostic is nothing to worry about. It could be a signal that the confidence interval is too narrow, particularly if you're requesting confidence intervals on elements of MxAlgebras rather than on the free parameters per se.

Thanks for your advice.

Joshua, I haven't tried mxTryHard on the saturated model yet. Is that something I can use with the mxRefModels command, or will I need to specify the saturated model by hand? I wonder if I should do that anyway, since this model contains definition variables, and I saw today in the mxRefModels documentation that definition variables might not be accounted for properly in the saturated model it produces.

Rob, I am using NPSOL. I looked at your other thread, and I think that explanation would apply in my case, since some CIs do have a "success" output in the regular summary but the "nonzero gradient" status in the verbose output. Many (but not all) of the CIs that have "active box constraint" diagnostics are elements of MxAlgebras. Is there something else I can do to figure out whether an interval is too narrow, and if it is, what sort of solutions are there?

Thanks again!

You could just try (re-)running the saturated model using `mxTryHard()` in place of `mxRun()`. I actually don't know about the definition-variables thing. The best person to ask would be Mike Hunter, since he wrote `mxRefModels()`, but he's currently in the middle of moving halfway across a continent to start a new job, so he's not really available right now.

I see from your output that some of your free parameters have lower bounds. You can just get rid of lower bounds you don't need, either by dropping the `lbound` argument from the relevant `mxMatrix()` calls, or via `omxSetParameters()` (see example here; you'd use `lbound=NA` to get rid of the lbound). Or at the very least, change the lbounds to zero rather than 1e-5 or 1e-6 if it's not necessary for the parameter to be strictly positive. Other than that, my advice to Kelsey in the other thread applies here, too.

BTW, do you really need all of those CIs? If not, then requesting only those CIs of substantive interest will make the diagnostic output easier to read, and shorten your MxModel's running time as well.
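As a rough sketch of these suggestions (the model name `fitModel`, the parameter label `a11`, and the CI references are placeholders, not taken from the attached script):

```r
library(OpenMx)

# 'fitModel' stands in for your fitted multigroup MxModel.
# Build the reference models without running them, then re-fit
# the saturated one with mxTryHard() instead of mxRun():
refs <- mxRefModels(fitModel, run = FALSE)
satFit <- mxTryHard(refs[["Saturated"]])

# Drop an unneeded lower bound on a free parameter
# ('a11' is a hypothetical parameter label):
fitModel <- omxSetParameters(fitModel, labels = "a11", lbound = NA)

# Request only the CIs of substantive interest by name
# (these algebra references are hypothetical):
fitModel <- mxModel(fitModel, mxCI(c("ACE.h2", "ACE.c2")))
```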

Hi again,

I'm working on switching to the direct-symmetric parameterization, per your advice to Kelsey in the other thread, and I'm running into a bit of trouble. I tried to implement it in the script I've been working with, though I did split the multigroup model into one model per state, to make it easier to find the source of an error. I receive a code 5 in one model and a code 10 in the other, and neither model is identified.

For now, I'm starting with the direct-symmetric sample scripts (simple univariate and bivariate analyses) from one of Hermine's sessions at this year's Twin Workshop and working up to rebuilding my model from the bottom up. In the meantime, I wanted to attach my current model script/output here in case anyone has a moment to troubleshoot. Thanks for all your help!

I don't see anything wrong with your syntax. Try `mxTryHard()`?