mxTryHard attempts in umx

Posted by mirusem
Is there a way to modify or control the number of tries that the tryHard option uses in umx, for any given function being called?

I appreciate it!

Replied by tbates on Sun, 12/05/2021 - 17:24

Hi mirusem,
`tryHard = "yes"` in the umx functions enables the various forms of tryHard, but doesn't expose the internals of each function in the mxTryHard family. To get at the nitty-gritty, call `model = mxTryHard*(model)` directly.

There are just too many parameters: umx is designed to make 90% of cases easy, with a straightforward fallback to the more complex options like `extraTries`, `greenOK`, `scale = 0.25`, `initialGradientStepSize`, etc.
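
As an illustration, here is a minimal sketch of dropping down to the mxTryHard level directly; the parameter values are illustrative only, not recommendations, and `m1` is assumed to be an existing MxModel (e.g. built with `umxRAM`):

```r
# Sketch: calling an mxTryHard-family function directly on a model
# built in umx. Parameter values are illustrative, not recommendations.
library(umx)  # loads OpenMx as well

# m1 is assumed to be an existing MxModel, e.g. from umxRAM()
m1 = mxTryHard(m1,
  extraTries = 30,    # attempts beyond the first run
  greenOK    = FALSE, # whether status GREEN counts as acceptable
  scale      = 0.25   # size of random perturbations to start values
)
summary(m1)
```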

Replied by mirusem on Tue, 12/07/2021 - 07:02

In reply to tbates

I see, interesting; that makes sense. So you're free to call OpenMx functions directly on umx models (right)? That's kind of cool, actually.

So that would be just `mxTryHard(model)`, with no `*` (I get an error with the star)? Unless by `*` you mean whichever mxTryHard variant we want. I think it works in that respect, but starting values seem to be an issue (one of my models ultimately errors for certain things I am looking into). For those cases, in the interim, I am using Hermine's other script, which is strictly OpenMx, since I am still not familiar with how to alter starting values in umx (without yet looking into it more deeply, though I figure `umxSetParameters()` with some specification is the trick).
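
(For anyone reading along, a hedged sketch of the `umxSetParameters()` route; the label here is hypothetical, so inspect your own model's labels first:)

```r
# Sketch: altering start values in umx. The label "a_r1c1" is a
# placeholder -- list your model's actual labels before editing them.
library(umx)

parameters(m1)  # list current free-parameter labels and values
m1 = umxSetParameters(m1, labels = "a_r1c1", values = 0.5)
m1 = mxTryHard(m1)  # re-fit from the new start values
```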

Thanks again!

Replied by tbates on Tue, 12/07/2021 - 08:31

In reply to mirusem

Hi: yes, `*` meant "choose from the options".

The joy of tryHard, especially `tryHard = "ordinal"` (which calls `mxTryHardOrdinal`), is that it explores new starts in a way that would be tiresome to emulate by hand.
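
As a hedged illustration (the trait name, separator, and data frames are placeholders for your own data), a umx twin model can request those ordinal-aware retries in one argument:

```r
# Sketch: requesting ordinal-aware retries from a umx twin model.
# selDVs, sep, mzData, and dzData are placeholders for your own data.
library(umx)

m1 = umxACE(selDVs = "myOrdinalTrait", sep = "_T",
            mzData = mzData, dzData = dzData,
            tryHard = "ordinal")  # dispatches to mxTryHardOrdinal
```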

PS: `umxRAM` and the umx twin models are good at picking viable start values. On occasions when that doesn't work, you might also consider `mxAutoStart`, which runs a WLS version of the model to find start values that are in the ballpark.
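
A minimal sketch of that two-step pattern (assuming `m1` is an existing MxModel):

```r
# Sketch: seeding start values with mxAutoStart() before the main fit.
library(OpenMx)

m1 = mxAutoStart(m1)  # WLS-based start values written into the model
m1 = mxRun(m1)        # then fit with the model's own (e.g. ML) objective
```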

Replied by mirusem on Tue, 12/21/2021 - 14:30

In reply to tbates

Sorry for the late response (things have been quite hectic). That makes sense. I was wondering: in the case where it doesn't find a solution but does have valid attempts, would it be acceptable to go with that "valid attempt" solution? My feeling is that I don't want to accept a forced convergence if it is too forced. At least, that's my impression; but maybe it is better to just figure out how to brute-force the result to convergence?

Replied by tbates on Tue, 12/21/2021 - 15:13

In reply to mirusem

"Forced" isn't the right word for any solution tryHard (or OpenMx in general) finds. And I'm not sure what "no solution but valid attempts" means. It might be that you have a wrongly specified model, or a small dataset in which widely differing solutions have similar fit.

Typically I would run `m1 = mxRun(m1)` again from the last solution to confirm that the solution is reliable.
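
That check can be sketched as follows (a minimal illustration; the tolerance is arbitrary):

```r
# Sketch: re-running from the previous solution to check stability.
library(OpenMx)

m1   = mxRun(m1)        # first fit (or the model returned by mxTryHard)
fit1 = m1$output$fit    # fit value at that solution
m1   = mxRun(m1)        # re-run, starting from that solution
fit2 = m1$output$fit
# If the solution is reliable, the fit should barely move:
abs(fit2 - fit1) < 1e-4
```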

OpenMx has flexible criteria for giving up, and criteria for accepting a solution as not improvable.

You might want to explore altering (reducing) the value of `mvnRelEps`, e.g.:

`umx_set_optimization_options("mvnRelEps", .001)`
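
(If that helper isn't available in your umx version, the plain-OpenMx route is `mxOption()`; the value below is illustrative only:)

```r
# Sketch: adjusting multivariate-normal integration precision via
# OpenMx's global option interface. The value is illustrative.
library(OpenMx)

mxOption(NULL, "mvnRelEps", 0.001)  # set globally
mxOption(NULL, "mvnRelEps")         # query the current value
```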

Replied by mirusem on Tue, 12/21/2021 - 20:02

In reply to tbates

So by "no solution" I mean when I get (for example) this returned:

"Retry limit reached; Best fit=-353.49615 (started at -352.57463) (11 attempt(s): 2 valid, 9 errors)"

etc., as posted elsewhere. It does actually return a set of p-values and so on, but it doesn't say "Solution reached!", as is clearly the case when there are no errors. Does that change anything you might conclude? I am running this on quite a few phenotypes, and this only happens for a very small fraction of them.

Of that fraction, mxTryHard resolves all but a marginal few; for those I would have to alter the starting values in a random way (which feels forced). But even when those end with "Best fit=" rather than "Solution reached", some value is returned, and since it's so marginal I am thinking of just noting it as such.

It's specifically with the scripts online, so I think it just has to do with those phenotypes, etc.

I might go ahead and try that optimization option, in any case, though!