Hello,
I'm currently working on a custom optimizer for lasso regularized OpenMx models. The fitting function in these models is:
$$f(\theta) = L(\theta)+\sum_j |\theta_j|$$
where $L$ is the unregularized likelihood. The problem is that the second part of the fitting function is non-differentiable, so I can't use the built-in optimizers (if I understood correctly). However, in my custom optimizer I want to use the gradients of the likelihood with respect to the parameters, as computed by OpenMx, for the first part of the fitting function. To get these gradients, I am using a custom compute plan:
gradientModel <- mxModel(mxObject,
  mxComputeSequence(steps = list(
    mxComputeNumericDeriv(checkGradient = FALSE, hessian = FALSE)
  ))
)
gradientModel <- mxRun(gradientModel)  # assign the run model, otherwise the output slot is empty
gradients <- gradientModel$compute$steps[[1]]$output[["gradient"]]
I then compute a step direction and a step size, and update the parameters using omxSetParameters(gradientModel, ...). I repeat this procedure until a convergence criterion is reached. However, rebuilding and rerunning the gradientModel in every iteration makes everything quite slow. Is there a faster way to get the gradients of the model?
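For concreteness, the update step I have in mind is essentially a proximal-gradient (ISTA) step: a gradient step on $L$ followed by soft-thresholding, which takes care of the non-differentiable penalty. A minimal base-R sketch (the names `softThreshold`, `proxStep`, `theta`, `grad`, `stepSize`, and `lambda` are my own placeholders, not OpenMx API; in the actual loop, `grad` would be the gradient extracted above and the result would be written back with omxSetParameters):

```r
# Soft-thresholding operator: the proximal map of the penalty lambda * sum(|theta_j|)
softThreshold <- function(x, lambda) {
  sign(x) * pmax(abs(x) - lambda, 0)
}

# One proximal-gradient (ISTA) update.
# theta:    current free-parameter vector
# grad:     gradient of the unregularized likelihood L at theta
# stepSize: step length
# lambda:   penalty weight (1 in the fitting function above)
proxStep <- function(theta, grad, stepSize, lambda = 1) {
  softThreshold(theta - stepSize * grad, stepSize * lambda)
}
```

Components that the plain gradient step would push close to zero are set exactly to zero by the thresholding, which is what makes the lasso penalty tractable without differentiating it.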
Thanks,
Jannik