| Attachment | Size |
|---|---|
| macOutput.txt | 3.45 KB |
| linuxOutput.txt | 3.55 KB |
Attached are two outputs, one from OpenMx 2.0.1.4157 using OS X on a Mac and the other from OpenMx 2.2.2 on Red Hat Linux. In both situations the models converge when I use mxExpectationNormal and mxFitFunctionML. But when I manually set the fit function to -log(likelihood) using mxFitFunctionAlgebra, mxRun crashes in both operating systems (albeit with different errors).
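One way a -log(L) setup like this can be reproduced is to wrap a model's built-in ML fitfunction (which is -2logL) in an algebra that halves it. This is a minimal sketch, not Greg's actual script; all names (`sub`, `top`, `negLogLik`) and the toy data are made up for illustration:

```r
library(OpenMx)

set.seed(42)
simDat <- matrix(rnorm(300), nrow = 100, ncol = 3,
                 dimnames = list(NULL, paste0("x", 1:3)))

# 'sub' is an ordinary one-factor model; its built-in ML fitfunction is -2logL.
sub <- mxModel("sub",
  mxMatrix("Full", nrow = 3, ncol = 1, free = TRUE, values = 0.7, name = "L"),
  mxMatrix("Diag", nrow = 3, ncol = 3, free = TRUE, values = 0.3, name = "U"),
  mxAlgebra(L %*% t(L) + U, name = "expCov"),
  mxExpectationNormal(covariance = "expCov", dimnames = paste0("x", 1:3)),
  mxFitFunctionML(),
  mxData(observed = cov(simDat), type = "cov", numObs = 100))

# 'top' minimizes -logL by rescaling sub's -2logL via an algebra fitfunction.
top <- mxModel("top", sub,
  mxAlgebra(0.5 * sub.fitfunction, name = "negLogLik"),
  mxFitFunctionAlgebra("negLogLik"))
fit <- mxRun(top)
```

The rescaling route is only one way to build a -log(L) algebra; a script that computes the multivariate-normal log-likelihood from scratch in an mxAlgebra would hit the same issues discussed below.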
Am perplexed.
Greg
I can see at least two reasons why the algebra fit function wouldn't work. First, the algebra used is not the minus two log-likelihood (-2logL).
Yes, it's just a scale and location adjustment on the fit function, but when weird things are happening it's often best to eliminate unnecessary sources of variability.
Second, the model is not identified without an additional constraint (in the form of an mxAlgebra) on the uniquenesses.
Rob and Mike,
(1) Minimizing -log(L) and minimizing -2log(L) will lead to the same solution, since one function is just a positive scalar multiple of the other; the first derivatives of both converge to values close to 0 at the same point. If there is any advantage, it goes to -log(L), because its derivatives are half those of -2log(L).
(2) As I pointed out years ago in this forum (openmx.psyc.virginia.edu/thread/360), minimizing -2log(L) will give incorrect standard errors. In fact, in other circumstances mxFitFunctionAlgebra() gave me the wrong standard errors; that is why I started exploring the issue.
(3) The model is indeed identified. Otherwise, the MxModel called mod1 would not have converged to the correct solution. (I verified this using a completely different minimizer before I posted.)
Greg, the major problem with your MxModel using the algebra fitfunction is that there is no safeguard that keeps the model-predicted covariance matrix within the parameter space, i.e. the set of all symmetric positive-definite matrices. The built-in ML fitfunction has a "soft" feasibility constraint, in that it returns a non-finite value if it is passed a non-PD covariance matrix; the optimizer then tries to work around that implicit boundary. In the script below, I show that for this particular problem, it appears that ensuring that the unique variances be strictly positive is sufficient to keep the predicted covariance matrix within the parameter space.
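A minimal sketch of that fix, assuming the unique variances live in a diagonal MxMatrix named `U` inside the algebra-fitfunction model (here called `mod2`; both names are placeholders for whatever the actual script uses):

```r
# Give the unique variances a small positive lower bound so the model-implied
# covariance matrix stays positive-definite during optimization.
mod2$U$lbound[] <- 1e-4   # strictly positive uniquenesses, no Heywood cases

mod2fit <- mxRun(mod2)
summary(mod2fit)          # estimates should now match the built-in ML version
```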
You will further see that the fitfunction values at the solution, the point estimates, and the standard errors agree between mod1 (using built-in ML) and mod2 (using your algebra fitfunction):
The lower bound on the unique variances might be what Mike Hunter had in mind when he mentioned a constraint on the uniquenesses, but his exact recommendation doesn't apply here, as you are not analyzing a correlation matrix. I agree that the model is identified;
mxCheckIdentification(mod1)
says that it is locally identified at its start values. FWIW, I ran this code with an OpenMx revision built from the head of the git repository this morning:
" … the major problem with your MxModel using the algebra fitfunction is that there is no safeguard that keeps the model-predicted covariance matrix within the parameter space"
AH! That would explain matters! At the same time, it is problematic for anyone trying to do novel things with OpenMx. I've put these data through several other optimizers and they all work fine. If mxFitFunctionAlgebra is this sensitive to optimizer tuning, then I'll avoid using it in OpenMx.
Also, there is nothing in the documentation to the effect that "OpenMx will assume that the algebra fitfunction is -2logL, so its automatic SEs will be wrong." You definitely need to include that in the documentation so that people are aware of it.
And Mike--the simulations that I did were on an older version of OpenMx. In fact, those simulations along with some done by Lindon Eaves convinced the OpenMx folks at the time to change their way of calculating the standard errors to get the correct ones.
Agreed. The help page has already been updated in the git repository. BTW, in recent versions of OpenMx (including 2.2), the user can specify the "units" of an algebra fitfunction, and can explicitly tell OpenMx whether the fitfunction is or is not -2logL.
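For instance (hypothetical algebra name; the exact set of accepted unit strings depends on the OpenMx version, so check `?mxFitFunctionAlgebra`):

```r
# Declare the units of the algebra fitfunction explicitly. "-2lnL" is the
# default in recent versions; declaring it tells OpenMx that the chi-square
# and standard-error machinery for a minus-two log-likelihood applies.
mxFitFunctionAlgebra("minus2LogLik", units = "-2lnL")
```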
Bear in mind that the optimizer is just trying to minimize a loss function. It doesn't know what the parameter space is unless it is told, either explicitly (via bounds or constraints) or implicitly (by soft feasibility constraints built into the fitfunction). The only major thing your model was missing was a bound to keep the uniquenesses positive (i.e., to avoid the well-known problem of "Heywood cases").
Minimizing -2log(L) only gives incorrect standard errors when you use the wrong formula for them. Going back to your simulation (http://openmx.psyc.virginia.edu/thread/360), it appears you used the wrong formula for the standard errors.
These are not the standard errors that OpenMx reports. Later you correct the formula by multiplying by sqrt(2); those corrected values are what OpenMx reports. I verified this by running your simulation and extracting the standard errors from summary(). Alternatively, you can get them from model$output$standardErrors. Here are the results from the last replication:
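The sqrt(2) relationship is easy to verify outside OpenMx. Below is a small base-R check on toy data (nothing from the original simulation): applying the same naive inverse-Hessian formula to -logL and to -2logL yields standard errors that differ by exactly sqrt(2).

```r
set.seed(1)
x <- rnorm(200, mean = 5, sd = 2)

# Negative log-likelihood of a normal sample; sd parameterized on the log
# scale to keep it positive.
negLL  <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
neg2LL <- function(p) 2 * negLL(p)

p0 <- c(mean(x), log(sd(x)))          # (approximate) ML estimates
H1 <- optimHess(p0, negLL)            # Hessian of -logL
H2 <- optimHess(p0, neg2LL)           # Hessian of -2logL (= 2 * H1)

se1 <- sqrt(diag(solve(H1)))          # correct SEs (observed information)
se2 <- sqrt(diag(solve(H2)))          # naive SEs from the -2logL Hessian
se2 * sqrt(2) / se1                   # both entries equal 1: off by sqrt(2)
```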
As for the model, it is locally identified, but not globally. As long as the optimizer doesn't wander too far from the true solution, the model behaves as if identified, but there are infinitely many other solutions that fit equally well.
Empirically, I found it sufficient to add a lower bound on the residual variances to identify the model.
As for the standard errors, the -2*log(L) model gives the correct standard errors in summary. The standard errors reported by your mxFitFunctionAlgebra model are based on the Hessian of your algebra fit function, and may or may not correspond to anything meaningful, depending on the fit function used.
As Rob suggested, setting the likelihood scale option makes OpenMx use -log(L). It also gives standard errors identical to (and as correct as) those from using -2*log(L).
If you really want the fit function to be -logL instead of -2logL, you can change the relevant option and then build your MxModel using an MxExpectationNormal and MxFitFunctionML.
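A sketch of the option call being referred to. I'm assuming the option key is "loglikelihoodScale" (default -2); verify against `?mxOption` for your OpenMx version:

```r
library(OpenMx)

# Globally rescale the built-in ML fitfunction from -2*logL to -1*logL.
mxOption(NULL, "loglikelihoodScale", -1)   # NULL sets the option globally

# Or set it on a single model instead:
# model <- mxOption(model, "loglikelihoodScale", -1)
```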