
estimation problems after updating to 1.0.0

manu (Joined: 10/08/2009 - 12:36)
Attachment: MxVersionComparison.doc (51 KB)

Dear all,

I am estimating a model with some simple linear constraints; the OpenMx input is provided in the attached document. My data set contains a lot of missing values, so I use FIML. With OpenMx Version 0.4.1-1320 the model runs smoothly and yields correct results (output attached). The results and log-likelihood are exactly identical to those from Mplus version 5.1 (output also attached), so I am confident that these results are indeed correct.

However, when running the same model in OpenMx Version 1.0.0-1448 I get the error message "In model 'ADMunequal' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)". As a consequence, all parameter estimates are completely off (output attached below). What went wrong?

Thank you very much for your help!

manu

mspiegel (Joined: 07/31/2009 - 15:24)

Hi. Could you provide us with some synthetic data so that we can debug the model? See http://openmx.psyc.virginia.edu/wiki/generating-simulated-data for help.

mspiegel (Joined: 07/31/2009 - 15:24)

Ah. I think I have spotted the problem. Try running your script in OpenMx 1.0 by adding the line

ADMunequal <- mxOption(ADMunequal, "RAM Inverse Optimization", "No")

and see if that fixes the problem. We use an optimization of the (I - A)^-1 calculation that is turned on when A is an MxMatrix object, but we forgot to turn the optimization off when there are square brackets in the labels of the A matrix. If that fixes the problem, I'll commit a patch to the 1.0 code series.

manu (Joined: 10/08/2009 - 12:36)

yes, that's it - thank you very much!

Can you direct me to some more detailed information on the optimization you use for the (I-A)^-1 calculation? I assume that, with the optimization turned off, the matrix A is treated as being of full rank (i.e., a completely unstructured matrix)? Even with the square brackets, the A matrix in my example exhibits a very simple structure, and it would be extremely helpful in terms of computation time if this structure could be taken into account when computing (I-A)^-1. This becomes especially important as A gets larger.

Best,

manu

Ryne (Joined: 07/31/2009 - 15:12)

The optimization uses a Taylor-series expansion of (I-A)^-1, replacing the inversion with I + A + A^2 + ... + A^k, stopping once the next power of A equals zero. Things like definition variables and square-bracket substitutions can make the optimization stop before it should (e.g., use I + A when it should use I + A + A^2).
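
For concreteness, here is a minimal sketch in plain R (not the actual OpenMx back end; the path values are made up) of why the series terminates for a model without loops:

A <- matrix(0, 3, 3)     # one-headed paths only, no loops
A[2, 1] <- 0.5           # path from variable 1 to variable 2
A[3, 2] <- 0.3           # path from variable 2 to variable 3
Id <- diag(3)
A %*% A %*% A            # all zeros, so the series can stop at A^2
Id + A + A %*% A         # identical to solve(Id - A)

Because A here is strictly lower triangular, A^3 is zero and the truncated series reproduces the inverse exactly.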

mspiegel (Joined: 07/31/2009 - 15:24)

Hmm. We also continue to perform the Taylor-series expansion when there are definition variables in the A matrix. Should we not be doing that?

Ryne (Joined: 07/31/2009 - 15:12)

We should keep doing it, but we need to recognize that fixed parameters that reference a definition variable or an element of another matrix should be treated as free parameters. I suspect that manu's constraints (which are fixed, with square-bracket labels and values of NA) get treated as fixed values of zero, which affects the Taylor series. Tim suggested that I tinker with mxRun to try to view the number of dimensions in the expansion to prove it. Can you think of an easier way to view the dimension of the Taylor-series expansion? I got stuck on Tim's recommendation of the "browser" function.

mspiegel (Joined: 07/31/2009 - 15:24)

I have a code patch that treats all definition variables and square-bracket labels as free parameters for the purpose of calculating the maximum path length. But I can't test whether there is any speedup from using this method versus calculating the inverse directly unless I get some sample data.

manu (Joined: 10/08/2009 - 12:36)

thank you ryne and mspiegel for your helpful comments!
Indeed, it would be very nice to have the optimization via Taylor-series expansion also in the case of square-bracket labels.
Unfortunately I cannot provide you with the original data, but please find attached a simulated data set generated with the fakeData() function, along with the input file. (As described above, it works nicely if the RAM inverse optimization is turned off, but returns the error message with the default setting.)

mspiegel (Joined: 07/31/2009 - 15:24)

Thanks for the data set. After applying the patch, the free parameter estimates are identical whether or not the (I - A) inverse optimization is used. You could try running the version in the repository on your full dataset. It is possible that the (I - A) inverse optimization does not work for your model. It is a little tricky to get the length of the Taylor expansion. When you have a RAM model with raw data, it is stored in modelOut@objective@metadata@depth (the RAM objective is converted to a FIML objective with RAM metadata). For this script, the value returned is NA, which means no optimization can be performed.
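
For instance (a minimal sketch; the slot path is as described above, and the object name is arbitrary):

fit <- mxRun(ADMunequal)
fit@objective@metadata@depth   # NA for this script: the Taylor-series shortcut cannot be used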

Anyway, to test the repository version, follow the directions here: http://openmx.psyc.virginia.edu/wiki/howto-build-openmx-source-repository but make sure to build from the directory tags/stable-1.0 instead of trunk.

Ryne (Joined: 07/31/2009 - 15:12)

Looking more closely, the I-A optimization isn't appropriate for your model (and is correctly disabled by Mike's patch) because your model contains a number of loops. Parameters on the diagonal of the A matrix are automatically loops, as are a number of other sets of parameters you have. Without the I-A optimization, RAM (and RAM FIML) models tend to be faster than ML and FIML models in our testing, but they won't necessarily be for all models, especially large ones. RAM models require two matrix inversions instead of the one required for ML/FIML, so if you build models with very large RAM matrices, it may be worth specifying your models with a set of smaller matrices rather than the gigantic (k + f) x (k + f) A and S matrices used in RAM.
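
For reference, here is a rough sketch of the RAM expected covariance in plain R (standard RAM algebra with made-up values, not OpenMx code). The solve() call is the first inversion; ML/FIML then also has to invert expCov inside the likelihood:

k <- 1; f <- 2                           # one latent, two observed variables (hypothetical)
A <- matrix(0, k + f, k + f)             # one-headed paths
A[2, 1] <- 0.7; A[3, 1] <- 0.7           # paths from the latent to the two observed variables
S <- diag(c(1.0, 0.5, 0.5))              # two-headed paths: variances
Fmat <- cbind(matrix(0, f, k), diag(f))  # filter matrix selecting the observed rows
IAinv <- solve(diag(k + f) - A)          # first inversion
expCov <- Fmat %*% IAinv %*% S %*% t(IAinv) %*% t(Fmat)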

manu (Joined: 10/08/2009 - 12:36)

Thanks again!

Just to make sure I understood you correctly, a quick clarification question:
All the patch does is turn the I-A optimization off for my model, correct? (i.e., for the time being, it would be fine to apply ADMunequal <- mxOption(ADMunequal, "RAM Inverse Optimization", "No") as described above?)

Regarding the RAM versus “LISREL” notation, I completely agree that using a notation with smaller matrices would be advantageous.

My actual problem, however, is somewhat more complicated and at the same time off-topic, so please do not feel obliged to respond. I sometimes deal with very large matrices (say, an A matrix as in the example above but of size 1000 x 1000). Even if they are somewhat smaller in LISREL notation, they are still huge. However, these are actually very simple sparse matrices, and I found that R does a pretty good job of dealing with matrices of this size (and for more complex symmetric sparse matrices there is the SparseM package). If I use matrices of this size in OpenMx, however, computation time increases dramatically, and I wonder whether there is a way to speed things up by making use of the sparse nature of the matrices (e.g., using matrix.csr objects in the fitting function?). From the manual I read that NPSOL is not intended for large sparse problems, so I would need to reduce the size of the matrices before handing the problem to the optimizer. I apologize for the off-topic question, but nevertheless appreciate any suggestions; I am running out of ideas ;-)
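
To illustrate the kind of structure I mean (plain R with the Matrix package; just an illustration, not OpenMx code):

library(Matrix)
n <- 1000
A <- sparseMatrix(i = 2:n, j = 1:(n - 1), x = 0.5, dims = c(n, n))   # a single long chain of paths
IminusA <- Diagonal(n) - A
system.time(inv <- solve(IminusA))   # cheap, because (I - A) is sparse and triangular

This is only meant to show that R handles the sparse (I - A) structure cheaply; whether something like this could be exploited inside the OpenMx fitting function is exactly my question.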

Ryne (Joined: 07/31/2009 - 15:12)

The patch, based on mspiegel's description, treats variables labeled with definition variables or square brackets as free parameters for the purposes of determining the depth required for the Taylor series expansion. The (I-A)^-1 portion can be stated as I+A+A^2+...+A^k, where k is the maximum path length. For instance, a model where x is regressed on y, and z predicts both x and y would set a depth of k=2, because the longest set of one-headed arrows one can follow is z -> y -> x.

Rather than tracing the paths by hand, we can just keep checking A^k, increasing k until A^k is zero. This works provided every path has a non-zero value. The back end is smart enough to set all free parameters to one for the purpose of estimating depth, so that checking successive powers of A is sufficient. What it didn't do was recognize that a fixed parameter labeled with "data.def" or "constraint[1,1]" should be given the same treatment as a free parameter. The patch fixes that.
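
A rough sketch of that check in plain R (not the actual back-end code; find_depth is just an illustrative name):

# Set every free (or definition-variable / square-bracket) entry of A to 1,
# then keep multiplying: the first power of A that is all zeros gives the depth.
find_depth <- function(A_pattern, max_k = nrow(A_pattern)) {
  Ak <- A_pattern
  for (k in 1:max_k) {
    if (all(Ak == 0)) return(k - 1)   # A^k is zero, so I + A + ... + A^(k-1) suffices
    Ak <- Ak %*% A_pattern
  }
  NA                                  # never reached zero: loops, so invert I - A directly
}

A_pattern <- matrix(0, 3, 3)          # variables ordered z, y, x
A_pattern[2, 1] <- 1                  # z -> y
A_pattern[3, 1] <- 1                  # z -> x
A_pattern[3, 2] <- 1                  # y -> x
find_depth(A_pattern)                 # 2, matching the z -> y -> x example above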

Your model didn't get the (I-A)^-1 speed up because your model contains loops. That is, if x is regressed on itself, or x is regressed on y and y regressed on x, then k must be infinity. In that case, we have to actually invert I-A rather than use the Taylor-series solution.

Moving to your not-really-off-topic question, I know that we discussed sparse matrix issues early in development, but none of that has moved into OpenMx quite yet. Depending on your model, clever model specification and data formatting will get you part of the way there. Absent that, I encourage you to keep asking about sparse matrix issues and perhaps get involved in implementing their support in OpenMx. If anyone else has info about this, please pass it on!

manu (Joined: 10/08/2009 - 12:36)

thank you for your very informative response - makes perfect sense to me!

Regarding the somewhat-off-topic question, I guess I have to do some more thinking on this... otherwise I can only echo your call: if anyone else has any information or ideas about this, please pass it on!

Thanks again!