mxComputePenaltySearch

Happy to see this implemented (thank you!). I have some questions about how it works. I've seen the vignette here: https://cran.r-project.org/web/packages/OpenMx/vignettes/regularization.html
But that vignette uses mxPenaltySearch, while mxComputePenaltySearch (but not mxPenaltySearch) has an option to change ebicGamma, which is what we want. I don't see documentation or examples for mxComputePenaltySearch, but I would guess it can form part of a custom compute plan (see the sketch after the questions below).
- If one wants to mimic mxPenaltySearch, would just passing mxComputeGradientDescent() suffice, or is it safer to use the default compute plan?
- How does one ensure that mxComputePenaltySearch searches the appropriate values of lambda? Is that controlled entirely by the arguments to mxPenaltyLASSO, mxPenaltyElasticNet, or mxPenaltyRidge?
- Do we know how the choice of optimizer affects its performance? I'm guessing NPSOL would be best, but I'm not sure about the others that come built in (CSOLNP, SLSQP).
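
For concreteness, here is the kind of plan I imagine would mimic mxPenaltySearch while still exposing ebicGamma. To be clear, this is only a sketch: the post-optimization steps (numeric derivatives, standard errors) are my guess at what the wrapper's default plan adds, and the engine argument is just there to pin the optimizer.

searchPlan <- mxComputePenaltySearch(
  plan = mxComputeSequence(list(
    # Core optimization; engine can be "NPSOL", "CSOLNP", or "SLSQP"
    mxComputeGradientDescent(engine = "NPSOL"),
    # My assumption: extra steps like these are what would make the
    # default plan "safer" than gradient descent alone
    mxComputeNumericDeriv(),
    mxComputeStandardError(),
    mxComputeReportDeriv()
  )),
  approach = "EBIC",  # currently the documented comparison criterion
  ebicGamma = 0.25    # the knob that mxPenaltySearch does not expose
)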
Not quite a reprex, but here is a code snippet from a student. Would this look about right, assuming fitMod sets up a structural model and appropriate values are supplied for what, lambda, lambda.max, and lambda.step?
modPenaltyRidge <- mxModel(
  fitMod,
  # Ridge penalty; the search grid runs from lambda up to lambda.max
  # in increments of lambda.step
  mxPenaltyRidge(
    what = regParameters,
    name = "ridgePenalty",
    lambda = lambda.initial,
    lambda.max = maximum.lambda,
    lambda.step = lambda.step,
    hyperparams = "lambda"
  ),
  # Free 1x1 matrix holding the lambda hyperparameter, as in the vignette
  mxMatrix("Full", 1, 1, free = TRUE, values = 0, labels = "lambda"),
  # Explicit penalty search compute step
  mxComputePenaltySearch(
    plan = mxComputeSequence(list(
      mxComputeGradientDescent()
    ))
  )
)
Something like this runs; I'm just not sure how to verify that it did what we expect. mxPenaltySearch seems to behave differently, in a way that feels more auditable, if that makes sense.
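
In case it helps, here is how I've been trying to sanity-check the result; the cross-check idea is mine, not from the docs. Since lambda is a free parameter, the selected value should show up among the estimates, and running mxPenaltySearch on the same model (minus the custom compute step) should land on the same lambda if the two plans really are equivalent.

fitSearch <- mxRun(modPenaltyRidge)

# The selected lambda should appear among the free-parameter estimates
coef(fitSearch)["lambda"]

# Cross-check with the wrapper on the same model, without the custom
# compute step; agreement here would suggest the plans match
modWrapper <- mxModel(
  fitMod,
  mxPenaltyRidge(what = regParameters, name = "ridgePenalty",
                 lambda = lambda.initial, lambda.max = maximum.lambda,
                 lambda.step = lambda.step),
  mxMatrix("Full", 1, 1, free = TRUE, values = 0, labels = "lambda")
)
fitWrapper <- mxPenaltySearch(modWrapper)
coef(fitWrapper)["lambda"]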
Many thanks!