Optimization parameters and validity of SEs

saralonn
Optimization parameters and validity of SEs

I am using the newly released beta version (2.0) to run a multivariate twin analysis and have some questions. Due to rather low prevalence (but a high number of observations, so I have good power anyway) I am having some numerical issues, but these seem to be handled by using the options mvnMaxPointsA/B/C and mvnAbsEps in mxOption. However, I am curious about what these actually do, and cannot find it in the documentation. I assume that mvnMaxPointsA/B/C adjust the number of integration points in the numerical integration of the multivariate normal density. But what about mvnAbsEps (or mvnRelEps, for that matter)? I do not understand how these tolerance conditions work in the numerical integration; for example, could I expect any problems if I set them too low, other than longer run times?
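For reference, I am setting them globally via mxOption before running the model; the values here are just placeholders, not a recommendation:

mxOption(NULL, "mvnMaxPointsA", 0)
mxOption(NULL, "mvnMaxPointsB", 0)
mxOption(NULL, "mvnMaxPointsC", 5000)
mxOption(NULL, "mvnAbsEps", 1e-3)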

Further, what does the “Function precision” option actually do? All I can find as an explanation is: “a measure of accuracy with which f and c can be computed”, but I do not understand the implications of this and cannot find out what f and c are. Is f the logL?
And finally, the obtained SEs are rather small (~10^-4), but the CIs are broad. I am aware that the CIs are likelihood-based and cannot be directly compared with the SEs, but they are both measures of precision of the estimates, and when the SEs indicate the estimates are rather accurate while the CIs give another picture, it makes you start to wonder. I am thinking we can trust the CIs more if the SEs are derived from the calculated Hessian; is that correct? If I have reason to believe the Hessian is not valid, is there any option that could be used to provide more valid SEs?

Sara

neale
Re-download the Beta

A second beta has been released, so the getOpenMxBeta.R source() call will get you an updated version. The new version fixes a serious bug with mixed ordinal and continuous analyses which occurs when definition variables are in use. Also, some issues with CSOLNP have been resolved, and it now performs better than NPSOL in terms of getting to the solution on the first attempt, especially when there are ordinal variables.

The default options for numerical precision of the integration of the multivariate normal (as used in FIML ordinal or mixed ordinal/continuous analyses) are as follows:

"mvnMaxPointsA" = 0,
"mvnMaxPointsB" = 0,
"mvnMaxPointsC" = 5000,
"mvnAbsEps" = 1e-3,
"mvnRelEps" = 0,

To avoid potentially infinite run times, there is a ceiling on the number of integration points. It is computed as a function of the number of variables, nvar, as mvnMaxPointsA + mvnMaxPointsB * nvar + mvnMaxPointsC * nvar^2.
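For example, a small helper (hypothetical, just restating the formula above) shows what the default ceiling works out to for five variables:

# hypothetical helper restating the ceiling formula; A, B, C default to
# the mvnMaxPointsA/B/C values listed above
mvnMaxPoints <- function(nvar, A = 0, B = 0, C = 5000) {
  A + B * nvar + C * nvar^2
}
mvnMaxPoints(5)  # 0 + 0*5 + 5000*25 = 125000 points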

If this ceiling is hit, the requested precision will not be achieved. The mvnAbsEps and mvnRelEps options control the absolute and relative precision of the integral, respectively.

The function precision is essentially the number of significant digits of the -2lnL (or whatever fit function is being used) that the optimizer should seek. (In the NPSOL documentation, f is the objective function, here the -2lnL, and c is the vector of nonlinear constraint functions.) The optimizer won't always achieve this precision if the function itself is not very precise, as is the case with numerical integration. I have found more reliable optimization performance with the following options:

mxOption(NULL, 'mvnAbsEps', 1.e-6)
mxOption(NULL, "mvnMaxPointsC", 500000 )
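The "Function precision" option can be set the same way if needed; the value here is only illustrative, not a recommended default:

mxOption(NULL, 'Function precision', 1e-10)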

CIs are typically more accurate than the SEs, due to numerical precision difficulties when analytic gradients are not available. Analytic gradients (and second derivatives) are not available for models specified with matrix algebras. They may be available for some RAM models with covariance matrix input, and we hope to expand this functionality in the future.

If you have many CIs of interest, bootstrapping can be an efficient alternative.
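A minimal manual resampling sketch, assuming a fitted single-group MxModel called fit with raw data in which each row is an independent unit (e.g. a twin pair); a multi-group twin model would need each submodel's data resampled:

# manual bootstrap sketch: resample rows, refit, collect estimates
d <- fit$data$observed
bootEst <- replicate(200, {
  resamp <- d[sample(nrow(d), replace = TRUE), , drop = FALSE]
  bootFit <- mxRun(mxModel(fit, mxData(resamp, type = "raw")), silent = TRUE)
  omxGetParameters(bootFit)
})
apply(bootEst, 1, quantile, probs = c(0.025, 0.975))  # percentile CIs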

saralonn
Thank you for the

Thank you for the clarifications. I will try updating and bootstrapping CIs.
Sara

neale
Another thing

I have found that for some problems changing the integration parameters as I suggested does nothing other than make the job run very slowly.

Also, I have been talking to Hermine about the Standard Error / Confidence Interval problems - she will be in touch directly.

saralonn
Thanks, so have I. Working my

Thanks, so have I. I am working my way through the optimization parameters, starting with the defaults and stopping when the estimates and SEs are consistent. The problems occur for small probabilities, that is, low prevalence in combination with multiple outcomes.

neale
Try relative precision instead?

In the beta, the precision of numerical integration is controlled by these parameters:
"mvnMaxPointsA" = 0,
"mvnMaxPointsB" = 0,
"mvnMaxPointsC" = 5000,
"mvnAbsEps" = 1e-3,
"mvnRelEps" = 0,

Note that the maximum number of points given for integrating over N dimensional space is:

mvnMaxPointsA + N * mvnMaxPointsB + N^2 * mvnMaxPointsC

Things will run a lot more slowly if you tighten the absolute precision to, say, 1.e-6 and give a larger number for mvnMaxPointsC, say 50000, but perhaps results would be better. CSOLNP (the default optimizer in the beta) seems better.

I am thinking that relative precision might work better. To use it, set mvnAbsEps to zero and mvnRelEps to, say, 1.e-5.
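In mxOption terms, that would be:

mxOption(NULL, 'mvnAbsEps', 0)      # zero means the absolute criterion is ignored
mxOption(NULL, 'mvnRelEps', 1.e-5)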

Please let me know how your experiments go - it would be good to build a wiki around this difficult issue.

saralonn
mvnAbsEps = 0 means?

So setting mvnAbsEps to zero does not mean that you do not allow for any differences? Because setting Abs to 0 and loosening up on Rel did not even move away from the starting values, while the other way around did.

neale
mvnAbsEps

If mvnAbsEps is zero, it is ignored. If mvnRelEps is zero, it is ignored. Essentially, the precision criterion is:

Error <= MAX(mvnAbsEps, mvnRelEps*ABS(Integral))

where Error is the estimated error of the integral, Integral is the estimate of the integral, and ABS denotes absolute value. Since you got stuck, I might suggest changing the value of Rel to something smaller. We are experimenting here!
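Expressed in R (a paraphrase of the stopping rule, not the actual library code), which also shows why a zero value effectively disables that criterion:

# paraphrase of the convergence test: a zero epsilon drops out of the max()
integrationConverged <- function(err, integral, absEps, relEps) {
  err <= max(absEps, relEps * abs(integral))
}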

saralonn
Experimenting with optimization parameters so far

So far I have mainly worked with mvnAbsEps and the number of points in the integration (mvnMaxPointsC), not so much with the Rel option. I have two experiences that might be worth sharing.

1) The number of integration points seems to be what affects run time the most, so if two values give in principle the same estimates, that is, the same to the second decimal on the path coefficients, then for deriving the CIs I stick to the less precise integration; otherwise the waiting time explodes.

2) To reduce run time and the risk of getting stuck, at the starting values or in local maxima (for example, when testing for qualitative effects and running from the fitted model where rg=rc=1), allowing a higher value of mvnAbsEps in the first run, say 1e-3, and then re-running from the fitted model with the tolerance reduced to, say, 1e-5 has been rather successful (see the sketch below). That is in addition to trying starting values other than 1 on the rg/rc components.
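In code, the two-stage idea is roughly this (assuming an already-specified MxModel called model):

# stage 1: loose tolerance to reach the neighbourhood of the solution quickly
mxOption(NULL, 'mvnAbsEps', 1e-3)
fit1 <- mxRun(model)
# stage 2: tighten the tolerance and restart from the stage-1 estimates
mxOption(NULL, 'mvnAbsEps', 1e-5)
fit2 <- mxRun(fit1)  # a fitted model reruns from its own estimates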