Does Latent Growth Modelling work in all cases??

cmgriffi (Joined: 01/26/2012 - 20:47)
Does Latent Growth Modelling work in all cases??

Dear All,

I'm doing a 4th-year project in my final year at university. I ran the analysis on my longitudinal data and the results look fine. My supervisor has now asked me to run it on randomly generated data, but I keep getting the same error: "The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)." Can anyone help, please?

mdewey (Joined: 01/21/2011 - 13:24)
More details might be useful

In a sense OpenMx is working here: it is reporting that your dataset is not adequate to fit your model. That suggests you need to give us more detail about how you are generating your simulated dataset. Are you generating multivariate normal data with a similar mean and covariance structure to that of your substantive dataset?
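
For example, something along these lines (just a sketch using MASS::mvrnorm, with made-up numbers; you would plug in the mean vector and covariance matrix from your real data):

    # Simulate data with roughly the same mean and covariance structure
    # as the observed data (placeholder values shown here).
    library(MASS)
    mu    <- c(10, 11, 12, 13, 14)       # occasion means from your real data
    Sigma <- diag(2, 5) + 1              # your observed 5 x 5 covariance matrix
    simData <- as.data.frame(mvrnorm(n = 500, mu = mu, Sigma = Sigma))
    names(simData) <- paste0("t", 1:5)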

In answer to the question in your subject line, just to reassure you, what is happening to you is not evidence that the statistical deities have got it in for you. It happens to the rest of us too.

Ryne (Joined: 07/31/2009 - 15:12)
I'll first point you to a

I'll first point you to a previous thread where we talk about code 6 (http://openmx.psyc.virginia.edu/thread/754). As Mike Neale says in that thread, code 6s are supposed to indicate problems, but there can be false positives, so to speak.

I can't seem to find where we have this in the wiki and docs, so I'll put it here. Our optimizer checks two things when deciding whether or not a model has converged. One is "optimality", or whether or not the norm of the gradient (the first derivative of the likelihood function with respect to the parameters) is sufficiently close to zero. The second is "convergence", or how small the expected step size is. OpenMx, like most optimizers, works by taking a set of starting values, figuring out the gradient, which gives the direction in which the parameters should move to improve fit, and moving the values of those parameters in that direction.
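
As a toy illustration of those two checks (this is only a sketch of the general idea, not what OpenMx's optimizer actually does internally):

    # Minimise f(x) = (x - 3)^2 by simple gradient descent, stopping only
    # when both the gradient ("optimality") and the proposed step
    # ("convergence") are essentially zero.
    f    <- function(x) (x - 3)^2
    grad <- function(x) 2 * (x - 3)
    x <- 0                          # starting value
    repeat {
      g    <- grad(x)
      step <- -0.1 * g              # direction and size of the next move
      if (abs(g) < 1e-8 && abs(step) < 1e-8) break
      x <- x + step
    }
    x                               # ends up at the minimum, x = 3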

Status 0 (green) occurs when the optimizer gets to a point where the gradient says "don't move" and the step size says "take incredibly tiny steps", or more accurately, when both the optimality and convergence criteria are met. Status 1 (also green) is when optimality is OK but convergence isn't, which is another way of saying that the gradient thinks it has found the correct answer, but the step size suggests we got there sooner than we should have. Status 6 (red) is when neither criterion is met: the gradient says we should take a step, and the step size says that step should not be tiny, but whichever way the optimizer changes the parameters, it can't improve fit.
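
For what it's worth, if you want to see which status you ended up with, summary() on the object mxRun() returns reports it (myModel here is just a placeholder for whatever model you built):

    myFit <- mxRun(myModel)
    summary(myFit)        # the "Mx status" line reports the code described above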

As to your problem, the fact that you're getting the same error regardless of which dataset you use suggests there may be a problem with your model. If you can, post the code and we'll look it over.
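
For comparison, a bare-bones linear growth model in OpenMx's path specification looks something like this (a sketch assuming five occasions named t1 to t5 in a data frame called simData, as in the simulation sketch above; adjust the variable names and slope loadings to your own design):

    library(OpenMx)
    growthModel <- mxModel("LinearGrowth", type = "RAM",
        manifestVars = paste0("t", 1:5),
        latentVars   = c("intercept", "slope"),
        mxData(observed = simData, type = "raw"),
        # residual variances
        mxPath(from = paste0("t", 1:5), arrows = 2, free = TRUE,
               values = 1, labels = paste0("res", 1:5)),
        # latent variances and covariance
        mxPath(from = c("intercept", "slope"), arrows = 2, free = TRUE,
               values = 1, labels = c("var_i", "var_s")),
        mxPath(from = "intercept", to = "slope", arrows = 2, free = TRUE,
               values = 0.5, labels = "cov_is"),
        # fixed loadings: 1s for the intercept, 0-4 for a linear slope
        mxPath(from = "intercept", to = paste0("t", 1:5), arrows = 1,
               free = FALSE, values = 1),
        mxPath(from = "slope", to = paste0("t", 1:5), arrows = 1,
               free = FALSE, values = 0:4),
        # latent means
        mxPath(from = "one", to = c("intercept", "slope"), arrows = 1,
               free = TRUE, values = 0, labels = c("mean_i", "mean_s"))
    )
    fit <- mxRun(growthModel)
    summary(fit)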

A status 6 does not necessarily mean that you haven't found the overall minimum; it just means you should retry the model with lots of different starting values to make sure that you always find the same answer.
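
Something like this does the trick (again only a sketch: growthModel stands in for whatever your model object is called, and it assumes the free parameters have labels, as in the example above; omxGetParameters() and omxSetParameters() come with OpenMx):

    # Refit from several sets of jittered starting values and compare
    # the -2 log-likelihoods; they should all agree at the real solution.
    starts <- omxGetParameters(growthModel)
    fits   <- vector("list", 10)
    for (i in seq_along(fits)) {
      trial <- omxSetParameters(growthModel, labels = names(starts),
                                values = starts + runif(length(starts), -0.5, 0.5))
      fits[[i]] <- mxRun(trial)
    }
    sapply(fits, function(m) m@output$Minus2LogLikelihood)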