Custom Optimizer in OpenMx

Jannik (Joined: 10/01/2018 - 04:42)
Custom Optimizer in OpenMx

Hello,

I'm currently working on a custom optimizer for lasso regularized OpenMx models. The fitting function in these models is:
$$f(\theta) = L(\theta)+\sum_j |\theta_j|$$
where $L$ is the unregularized likelihood. The problem is that the second part of the fitting function is non-differentiable, so I can't use the built-in optimizers (if I understood correctly). However, for the first part of the fitting function, I want my custom optimizer to use the gradients of the likelihood with respect to the parameters, as computed by OpenMx. To get these gradients, I am using a custom compute plan:

gradientModel <- mxModel(
    mxObject,
    mxComputeSequence(steps = list(
        mxComputeNumericDeriv(checkGradient = FALSE, hessian = FALSE)
    ))
)
# mxRun() returns the fitted model; reassign it, or the gradient output is lost
gradientModel <- mxRun(gradientModel)

gradients <- gradientModel$compute$steps[[1]]$output[["gradient"]]

I then compute a step direction and a step size and update the parameters using omxSetParameters(gradientModel, ...). I repeat this procedure until a convergence criterion is met (the loop is sketched below). However, building and running gradientModel in each iteration makes everything quite slow. Is there a faster way to get the gradients of the model?
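
For concreteness, here is the kind of iteration I mean, as a rough proximal-gradient (ISTA) sketch; eta is a placeholder step size, and softThreshold() is just the proximal operator of the L1 penalty, not code I have settled on:

# Soft-thresholding: proximal operator of t * sum(abs(theta))
softThreshold <- function(x, t) sign(x) * pmax(abs(x) - t, 0)

eta   <- 0.01                                    # placeholder step size
theta <- omxGetParameters(gradientModel)         # current free parameter values
grad  <- as.numeric(gradients)                   # assumed to match omxGetParameters() order
theta <- softThreshold(theta - eta * grad, eta)  # gradient step, then prox of the penalty
gradientModel <- omxSetParameters(gradientModel,
                                  labels = names(theta), values = theta)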

Thanks,
Jannik

jpritikin (Joined: 05/24/2012 - 00:35)
Wow, neat
AdminRobK (Joined: 01/24/2014 - 12:15)
abs() probably no big deal; derivative-free optimization
"The problem is that the second part of the fitting function is non-differentiable, so I can't use the built-in optimizers (if I understood correctly)."

Well, it is true that the three main optimizers (CSOLNP, SLSQP, and NPSOL) are derivative-based quasi-Newton optimizers. Nonetheless, they can still get good results if the fitfunction involves the absolute-value function, or (NPSOL and SLSQP only) if the constraint functions involve abs(). Be advised, though, that OpenMx may give a spurious status Red if the use of abs() in the fitfunction makes it impossible to zero the gradient at the solution.
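
For instance, the penalized fit itself can often be expressed directly in OpenMx, along these lines (untested sketch; "sub" is a placeholder name for your model, and "theta" for an mxMatrix in it that collects the penalized parameters):

# Wrap the original model ("sub") and add the L1 penalty to its fit
penModel <- mxModel("penalized", subModel,
                    mxAlgebra(sub.fitfunction + sum(abs(sub.theta)), name = "regFit"),
                    mxFitFunctionAlgebra("regFit"))
penModel <- mxRun(penModel)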

However, OpenMx also has built-in support for derivative-free optimizers. They're not settable via the "Default optimizer" mxOption, so you need a custom compute plan to use them (which shouldn't be too difficult for you, since you've already figured out how to make custom compute plans). Specifically, OpenMx has an original Nelder-Mead implementation, as well as two implementations of simulated annealing; see the sketch below.
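
For example, something like this should invoke Nelder-Mead on your model (a sketch, reusing mxObject from your post; mxComputeSimAnnealing() is the analogous step for simulated annealing):

# Custom compute plan that replaces the default gradient-based optimizer
# with OpenMx's derivative-free Nelder-Mead implementation
nmModel <- mxModel(mxObject,
                   mxComputeSequence(steps = list(
                       mxComputeNelderMead()
                   )))
nmModel <- mxRun(nmModel)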

Jannik (Joined: 10/01/2018 - 04:42)
Thanks!

Thank you very much for your answers! I was not aware of the mxregsem package and will try to figure out how regularization is implemented there. In my experience, the approximate solutions are quite good, but they require setting a manual threshold below which parameters are treated as zero. I also observed them to be somewhat unstable from time to time, but could not figure out exactly when this happens. This is why I wanted to test the optimizer used in the lslx package by Huang et al. with my models.
I have not yet tried any of the derivative-free optimizers in OpenMx, but this seems to be a very promising approach, too.

cjvanlissa (Joined: 04/10/2019 - 12:43)
@jannik I'm curious if you

@jannik I'm curious if you got this working? mxregsem is explicitly in the testing phase, and I couldn't find another working alternative yet.

Best,
Caspar

jpritikin (Joined: 05/24/2012 - 00:35)
mxregsem

mxregsem doesn't work with OpenMx newer than v2.19.1. I'm working on rewriting and integrating the functionality of mxregsem into OpenMx. This will likely be ready for the next release.