Attachment | Size
---|---
OptimizationIssuesOpenMx.R | 2.82 KB

Hey,

I work with OpenMx using somewhat different data than most others; the data often come from a full population and have many rows (up to 3 million). A common type of analysis is for relatives with one or more binary variables, e.g. an observed disease diagnosis, where the prevalence is low, e.g. 1% to 0.05%. The complexity of the models varies from simple 2x2 covariance matrices without any definition variables to 8x8 covariance matrices with several definition variables adjusting the means/thresholds.

I've been doing this since OpenMx 1.x and have encountered optimization issues throughout. When I do the analyses myself I can usually handle these issues by varying starting values and changing some options for the optimizer, mainly via `mxOption(NULL, 'Line search tolerance', 0.4)`, and by comparing the resulting expected prevalences and covariances/correlations with non-modelled ones, as well as looking at the likelihood values to ensure a global optimum (even when there are warnings from the optimizer). But I'm no computer scientist, and don't really know how to tweak the optimizer to handle my issues.
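For concreteness, the kind of option-tweaking I mean looks like this (a minimal sketch; the values are just the ones mentioned above, and switching optimizers is how I cross-check optima):

```r
library(OpenMx)

# Tweaks I try when the optimizer struggles (values illustrative)
mxOption(NULL, 'Line search tolerance', 0.4)  # loosen the line search
mxOption(NULL, 'Default optimizer', 'SLSQP')  # or 'NPSOL', to cross-check the optimum
```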

Now I'm teaching a lot of students who want to use similar data, and since I don't have a solution for making the optimization work "all the time" I'm asking for some help: general ways to make the optimization work for this type of data, and possibly for even more complex data.

Unfortunately I can't share data because of ethical issues, but I attach some generated data with optimization issues. It's indeed a very simple model, and I cannot see why it would be a problem with regard to optimization.

Also, the issue does not depend on whether I use NPSOL or SLSQP; it may appear in both, or in one but not the other (I usually vary this to help ensure a global maximum as well).

I'm currently running on a Linux server; software info for the setup where I have these issues is below (but they occur on other software/hardware as well):

R version 3.3.1 (2016-06-21)

Platform: x86_64-redhat-linux-gnu (64-bit)

Running under: Red Hat Enterprise Linux

OpenMx_2.6.9

If all of your data is ordinal then you might consider using item factor analysis:

https://journal.r-project.org/archive/2016-1/pritikin-schmidt.pdf

Hey All,

Thanks for the quick feedback, though perhaps I wasn't clear in my original post. Clarifications:

The attached code just exemplifies where my intuition about optimization fails me; the model is so simple that I can't see why it has problems.

The solution is not for me, but rather for me to pass on to students who have had maybe a one-week course in classic twin modelling and want to use standard methodology for this type of data (i.e., many observations, >1 million). The students aren't typically interested in what's going on under the hood, and since they often have a non-technical background they are mainly focused on the statistical theory - expected covariance matrices etc. If I can't help them with this, I will eventually have to double-check their solutions anyway - not very helpful for them, and time-consuming for me - and I don't know what simple hints to give them to solve the issues themselves. (I have multiple ways of checking that a solution is correct, including leaving OpenMx for basic R functions :( )

Often at least one variable per individual is binary (i.e., two if pairs are analyzed), but there may also be continuous variables, or ordinal variables with many levels.

I thought the issue might be that the optimizer is well suited to smaller data, and perhaps has problems with bigger data because it "sees" the likelihood as very flat around the maximum?

So, if I understand your answers correctly, a potential way forward is to:

A. If there is no missingness in the data, use WLS.

B. Use mxTryHardOrdinal() rather than mxTryHard() if the data are all ordinal.

C. Check if CSOLNP helps.
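A hedged sketch of B and C (assuming a model object `simpMod` like the one in my attached script):

```r
library(OpenMx)

# B: mxTryHardOrdinal() wraps mxTryHard() with defaults oriented to ordinal data
simpModFit <- mxTryHardOrdinal(simpMod, intervals=FALSE)

# C: retry with the CSOLNP optimizer to see if it fares better
mxOption(NULL, 'Default optimizer', 'CSOLNP')
simpModFit2 <- mxTryHardOrdinal(simpMod, intervals=FALSE)
```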

Finally, if I understand you correctly, Mike, the issue is most likely imprecision in the numerical integration, introduced to save time? If so, should it be possible to increase this precision? How would I do that? (Most students have access to a server, so they can use multiple [~10] cores if needed.)

Again, thanks for the input.

We have increased the numerical precision of the integration until it really began to climb the quadratic (or exponentially steeper) curve of CPU time against precision. It didn't seem to help a whole lot. We are looking at other numerical integration routines. In your case, the problem is especially difficult because the probabilities can be very small (which means that during optimization a small change in parameters may generate very large changes in the fit function in the smaller-probability direction, but much less change in the other direction).
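If you want to experiment with the integration precision yourself, the relevant knobs are the 'mvn*' keys of `mxOption()` (the values below are illustrative, not recommendations; tightening them costs CPU time as described above):

```r
library(OpenMx)

# Multivariate-normal integration controls (illustrative values)
mxOption(NULL, 'mvnRelEps', 1e-4)       # relative error tolerance of the integral
mxOption(NULL, 'mvnMaxPointsA', 80000)  # raise the budget of evaluation points
```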

Your ABCs look right, except that A may be a larger category than you think. If the missingness is completely at random (MCAR - see https://en.wikipedia.org/wiki/Missing_data), WLS is still fine.

We make life difficult for the optimizer with FIML ordinal data analysis, because the numerical integration required is somewhat inaccurate. Full accuracy would take ages to evaluate. As a result we are walking a tightrope, aiming for integration precision that is good enough for optimization to work a reasonable proportion of the time without run times becoming impractical. The function mxTryHard() can be useful to validate solutions. Nevertheless, I feel your pain, and you should know that the development team have been working directly on this problem for some time. The CSOLNP optimizer seems to work better than the others in certain test cases, but not all. We have experimented with altering the numerical precision of the integration and are preparing a publication on those results. However, the false positive rate of "code red" (IFAIL=6) is still too high. In my opinion, rather like pain and inflammation, it is better to have the alarm system tuned to many more false positives than false negatives (which carry greater risk of publishing incorrect results).

Another approach you might try is WLS, which is not widely touted in the documentation yet, but which can work very efficiently. There are limitations -- it is known to be biased when data are merely missing at random (MAR), and one does need a decent sample size. In addition, it isn't straightforward to specify moderator models, such as those that moderate paths from genotype to phenotype (multi-group approaches can be used as a partial solution). You can find a few WLS examples in the inst/models/passing directory of the repository, i.e., https://github.com/OpenMx/OpenMx/tree/master/inst/models/passing -- those with WLS in their name are ContinuousOnlyWLSTest.R, IntroSEM-OneFactorCov.R, MultipleGroupWLS.R and SaturatedWLSTest.R. I hope that this helps - do let us know how you get on!

I notice that you have a line commented out in your attached script, `#simpModFit <- mxTryHard( simpMod , intervals=F )`. There is a wrapper to `mxTryHard()` called `mxTryHardOrdinal()`, which has its default arguments specifically oriented toward analyses of ordinal data. As you probably know, `mxTryHard()` randomly perturbs start values between attempted model fits. Your script is parameterized in terms of tetrachoric correlations and thresholds, so randomly perturbed start values are likely to cause the tetrachoric correlation matrix to be non-positive-definite at the start of a fit attempt, or even to have off-diagonals outside the interval (-1,1). You could place `lbound`s of zero and `ubound`s of 0.99 on the tetrachoric correlations. I think the lower bound of zero is reasonable for your purposes, since it sounds like your tetrachoric correlations are interpretable as familial resemblance for disease risk. Better yet, re-parameterize the correlation matrix so that the relevant parameters can take values on the whole real line, and use an MxConstraint to ensure a unit diagonal. Something like this:
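(The original code was attached to the post; here is a minimal sketch of the idea, with hypothetical object names: estimate an unconstrained lower-triangular matrix, form the correlation matrix from it, and constrain its diagonal to unity.)

```r
library(OpenMx)

# Hypothetical sketch: a 4x4 correlation matrix built from an unconstrained
# lower-triangular matrix, with an MxConstraint forcing a unit diagonal on R
lowerL   <- mxMatrix(type="Lower", nrow=4, ncol=4, free=TRUE,
                     values=0.5*diag(4) + 0.1, name="lowerL")
corMat   <- mxAlgebra(lowerL %*% t(lowerL), name="R")
unitVec  <- mxMatrix(type="Unit", nrow=4, ncol=1, name="unitVec")
unitDiag <- mxConstraint(diag2vec(R) == unitVec, name="unitDiag")
```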

I tried this on my Linux laptop just now with SLSQP, and it was able to achieve a fitfunction value of 61877.31, with no status warning, and the MxConstraint satisfied well within feasibility tolerance. NPSOL did not fare so well--its best solution had Status Red, and was not even in the feasible space.

Edit: Actually, include this in the MxModel as well:

Start values from best fit:

1, 0.748939348388329, 0.475524572263707, 0.147689759804613, 0.664647948772721, -0.324824935293478, 0.163502258576246, 0.819050224907451, 0.970724573486396, 0.0972730693883954, 3.08196654269581, 3.21380410569994, 3.08578293382576, 3.19111179271254

iloo, thank you for starting this thread. And yes, your post was clear. My prior replies in this thread were only meant to illustrate how you could make an MxModel more amenable to randomized start values with `mxTryHardOrdinal()`. I feel like this thread hasn't been resolved satisfactorily, so I wanted to address that.

The example model you posted is indeed quite a simple model, but it poses a surprisingly difficult optimization problem. Since November, I've been trying a number of strategies to get your model to reach a solution meeting criteria for a local minimum. I have concluded that the difficulty is likely an algorithmic limitation. Specifically, it appears to be inherently difficult to use finite-differences gradient-based optimization with likelihood functions that are defined in terms of the multivariate-normal (and multivariate-t!) probability integral, applied to low-prevalence dichotomous variables. There is numerical inaccuracy, i.e. systematic error, in the algorithms that evaluate the probability integral, and the magnitude of the error, at least relative to the magnitude of the probability, is larger in the tails of the distribution. It's possible for the error to overwhelm the reliable component when taking derivatives by finite differences.

So, here's what I've tried, to no avail:

- `mxTryHardOrdinal()`, at each different value of the option.
- `pmvnorm()` from the 'mvtnorm' package. Genz-Bretz is a stochastic algorithm, and allows the user to specify an upper limit ('abseps') on the estimated numerical error of the probability. At its default abseps, Genz-Bretz performed no better than OpenMx's probability-integral algorithm, SADMVN; at an abseps better suited to this problem, Genz-Bretz would fail to satisfy it. The Miwa algorithm, even at a large value of its 'steps' parameter, fared no better (and just got slower).
- Optimizers outside OpenMx -- `optim()` (from the base-R 'stats' package), SLSQP (via package 'nloptr'), and SOLNP (via package 'Rsolnp') -- had as much difficulty as OpenMx. I also tried derivative-free optimizers. Two such optimizers -- methods "SANN" and "Nelder-Mead" from `optim()` -- did not reach solutions with positive-definite Hessians, and a third -- BOBYQA, from nloptr -- reached a minimum not low enough. I do note, however, that Nelder-Mead could reach a lower fitfunction value with one restart than OpenMx typically reaches after 11 attempts of `mxTryHard()`.
- `log()`. This didn't help. I also wrote a similar R function that provided the fit value in the same manner, and also, when calculating gradients, tried to use the smallest possible 'abseps' that SADMVN could meet, and the smallest possible gradient interval that produced a nonzero gradient element. Since MxFitFunctionR doesn't support user-supplied derivatives, I tried optimizing this function with BFGS and with SLSQP (via the 'nloptr' package). The solutions did not appear to be a local minimum.
- The multivariate-t distribution, instead of the multivariate normal, using an R frontend to SADMVT from the 'mnormt' package. The degrees-of-freedom of the multivariate-t had to be fixed, in order to fix the scale of the latent continua. Even at values implying a good deal of kurtosis (between 4 and 10), this approach didn't really help matters.

From these efforts, I conclude that the issue is unlikely to be a shortcoming in OpenMx's gradient-descent optimizers, its finite-difference gradient evaluation, or its multivariate-normal integration algorithm. Instead, the numerical error in evaluating the multivariate-normal probability integral out in the distribution's tails is sufficiently large that finite-difference differentiation is unreliable. Thus, for this model, OpenMx will never be sure it has found a local minimum. This issue primarily underscores a need for OpenMx to incorporate derivative-free optimizers. It would also be nice to have a different integration algorithm, either being faster (to facilitate brute force with `mxTryHardOrdinal()`) or, obviously, being more accurate -- if such exists?

For workarounds, I suggest the following:

1. Use WLS. I believe the only change you would need to make would be to replace `mxData()` with `mxDataWLS()`, and replace `mxFitFunctionML()` with `mxFitFunctionWLS()`. For a saturated model like this one, it should give results very close to what you get from the 'polycor' package.

2. If you use ML, set up your model to be amenable to randomized start values. For instance, incorporate precautions to keep the correlation matrix positive-definite, and to keep the thresholds in order. If that involves any MxConstraints, use SLSQP as the optimizer. Then, brute-force the problem with `mxTryHardOrdinal()`, with argument 'extraTries' anywhere from 100 to 500, and let it run overnight. Accept the best result as the MLE, but ignore the standard errors.

3. Radically re-parameterize your model. For example, I've attached a script that uses an MxFitFunctionR to model the four binary disease variables as correlated Bernoulli trials, specifically, via probit regression. This approach could be extended to include interactions among the conditioned-upon variables, and other covariates, in the probit regressions. It could probably also be implemented using an MxFitFunctionRow. It could probably also be adapted to handle missing data, by creating a different group for each missing-data pattern and re-defining the fitfunction accordingly, but multiple imputation might be a better way to deal with missing data with this approach.
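For workaround 1, the change might look like this sketch (`simpMod` and `myData` are hypothetical names standing in for the model and raw data frame from the attached script; `mxDataWLS()` pre-computes the summary statistics that `mxFitFunctionWLS()` fits to):

```r
library(OpenMx)

# Switch from FIML to WLS: swap the data and fit-function objects
simpModWLS <- mxModel(simpMod,
                      mxDataWLS(myData),    # replaces mxData(myData, type="raw")
                      mxFitFunctionWLS())   # replaces mxFitFunctionML()
simpModWLSFit <- mxRun(simpModWLS)
```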

P.S. the attachment

Hey,

Thanks a lot for all the effort you've put in!

I have run with WLS, which works very well with regard to finding the fit of the model given the parameters :) ! However, the non-ML model-fitting makes the precision of derived estimates (functions of model parameters) a bit trickier to get. I've solved this by finding analytical standard errors using the delta method... but a numerical solution would be nice, i.e. numerical standard errors for mxAlgebra objects - does such a solution exist?

Sounds reasonable, and similar to my previous solutions, but using a better approach (mxTryHardOrdinal) and more tries.

I will look at your code to see if I understand it :).

WLS will be more completely implemented in version 2.7. v2.7 will also include `mxSE()`, which can calculate SEs for arbitrary elements of MxAlgebras. We expect to announce the release of v2.7 very soon.

I just wanted to post a follow-up to say that, as of v2.7.9, OpenMx has a built-in derivative-free optimizer. It's a flexible, options-rich implementation of the Nelder-Mead algorithm that I wrote from scratch in the spring. Unfortunately, it hasn't turned out to be a magic bullet that makes optimization with ordinal data work "all the time," but it is sometimes able to reach solutions with status code zero in cases where none of the three gradient-descent optimizers are able to do so (as in one script in our nightly test suite).
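For the `mxSE()` mentioned above, usage might look like this hedged sketch (`fittedModel` and the algebra name 'expCor' are hypothetical stand-ins for a fitted MxModel and an mxAlgebra inside it):

```r
library(OpenMx)

# Delta-method standard errors for an arbitrary MxAlgebra element (OpenMx >= 2.7)
se <- mxSE('expCor', model=fittedModel)
```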

Using our Nelder-Mead optimizer requires a custom compute plan (with non-default arguments to `mxComputeNelderMead()` if wanted); you then include the resulting `plan` object in your `mxModel()` statement.
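(The plan shown in the original post was an attachment; here is a minimal sketch of such a custom compute plan.)

```r
library(OpenMx)

# Swap gradient descent for the derivative-free Nelder-Mead optimizer
plan <- mxComputeSequence(steps=list(
  mxComputeNelderMead(),                # non-default arguments can go here
  mxComputeOnce('fitfunction', 'fit'),  # evaluate the fit at the solution
  mxComputeReportDeriv()
))
```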

Just a follow-up... Since my previous post, I've gained deeper insight into the Nelder-Mead algorithm as well as this optimization problem. I'm attaching a script that first fits the model using CSOLNP, plus `mxTryHardOrdinal()` with 30 extra tries and a smaller-than-default `scale` parameter for the distribution of the random perturbations. Then, I tried again, but this time using a deliberately thought-out MxComputeNelderMead step in place of gradient-based optimization; the RNG seed and the arguments to `mxTryHardOrdinal()` were the same as with CSOLNP. Nelder-Mead does indeed reach a smaller fitfunction value than CSOLNP (60879.58 vs. 61179.33), but nonetheless cannot overcome status red.

Edit:

I would guess that the main issue here is the numerical precision of integration. Even a bivariate normal way out in the tails may be nearly impossible to estimate accurately enough to overcome code red. Even simple ML estimation of the polychoric would be difficult, I think. I suppose we could increase the prevalence in the simulation and see if it works any better.