
What is the numerical approximation method used to find the maximum likelihood estimator from the log-likelihood function in OpenMx?

Arnond Sakworawich
Joined: 04/28/2012 - 03:21

1. What is the numerical approximation method used to find the maximum likelihood estimator from the log-likelihood function in OpenMx?

  1. How do you deal with multiple estimators when you have to maximize the log-likelihood function in OpenMx? For example, profile likelihood or partial derivatives (which may not be the case).

  2. Could someone please point me to a document on how OpenMx maximizes the log-likelihood function to find the MLE?

Thank you so much.

Arnond

Ryne
Joined: 07/31/2009 - 15:12

I'm not sure exactly what you're asking for, but here goes:

The ML fit function used in OpenMx for covariance data is:
-2LL = (n-1) * (log(det(expectedCov)) + trace(observedCov %*% solve(expectedCov)))

Formally, the full ML function is
-2LL = (n-1) * (log(det(expectedCov)) + trace(observedCov %*% solve(expectedCov)) - log(det(observedCov)) - nvar),

so that the -2LL fit statistic equals zero (its minimum) when expectedCov = observedCov. However, log(det(observedCov)) and nvar are constant for any optimization, so they can be omitted during model fitting. I believe we add them back in as constants at the end, but someone will correct me if I'm wrong.
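Here's a minimal R sketch of that full fit function, just to make the formula concrete (the name mlFit and the example matrix are made up for illustration; this is not OpenMx's internal code). It also shows that the statistic is zero when the expected and observed covariance matrices are equal:

mlFit <- function(observedCov, expectedCov, n) {
  # full ML fit: (n-1) * (log|Sigma| + tr(S Sigma^-1) - log|S| - nvar)
  nvar <- nrow(observedCov)
  (n - 1) * (log(det(expectedCov)) +
             sum(diag(observedCov %*% solve(expectedCov))) -
             log(det(observedCov)) - nvar)
}

S <- matrix(c(1, 0.5, 0.5, 1), 2, 2)   # a toy observed covariance matrix
mlFit(S, S, n = 100)                   # 0: perfect fit when expectedCov == observedCov
mlFit(S, diag(2), n = 100)             # > 0: misfit when expectedCov differs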
Ordinal data optimization is covered in the ordinal data chapter of the documentation:
http://openmx.psyc.virginia.edu/docs/OpenMx/latest/Ordinal_Path.html

The equation for FIML is in the FIML/Row objective chapter:
http://openmx.psyc.virginia.edu/docs/OpenMx/latest/FIML_RowObjective.html#full-information-maximum-likelihood

Once the fitting function is defined, that function is passed to the optimizer. At the moment only NPSOL is supported, but others will be supported in the future. NPSOL uses a quasi-Newton method for optimization: it numerically estimates the gradient (the first derivatives of the fitting function with respect to the parameters) and starts by assuming a Hessian (second-derivative matrix) equal to the identity matrix. As optimization continues, the Hessian is updated using information from the gradient to get a better and better approximation. Optimization stops when the gradient is close enough to zero and the expected step size is sufficiently small.
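If it helps to see the idea in code, here is a toy R sketch of a generic quasi-Newton (BFGS-style) loop with a numerical gradient, an identity starting Hessian, and a gradient-based stopping rule. This is only an illustration of the general approach described above, not NPSOL's actual algorithm, and all the function names are made up:

numGrad <- function(f, x, h = 1e-6) {
  # central-difference numerical gradient
  sapply(seq_along(x), function(i) {
    e <- rep(0, length(x)); e[i] <- h
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

quasiNewton <- function(f, start, tol = 1e-8, maxIter = 200) {
  x <- start
  B <- diag(length(x))            # start with an identity Hessian approximation
  g <- numGrad(f, x)
  for (iter in 1:maxIter) {
    step <- -solve(B, g)          # Newton-like step using the current Hessian guess
    xNew <- x + step              # (no line search; kept minimal for illustration)
    gNew <- numGrad(f, xNew)
    s <- xNew - x
    y <- gNew - g
    if (sum(y * s) > 1e-12) {     # BFGS update of the Hessian approximation
      B <- B - (B %*% s %*% t(s) %*% B) / drop(t(s) %*% B %*% s) +
           (y %*% t(y)) / drop(t(y) %*% s)
    }
    x <- xNew
    g <- gNew
    if (sqrt(sum(g^2)) < tol) break   # stop when the gradient is near zero
  }
  list(par = x, value = f(x), iterations = iter)
}

# Example: minimize a simple quadratic; the minimum is at (1, 2)
quasiNewton(function(p) (p[1] - 1)^2 + (p[2] - 2)^2, start = c(0, 0))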

Does that answer your question?