Error in runHelper NLOPT unrecognized error -1
falkcarl
Joined: 10/29/2015
| Attachment | Size |
|---|---|
| dat100NormalNA198.dat | 14.23 KB |
| TestCI2.pdf | 101.62 KB |
While running simulations with the development version of OpenMx, I ran into an interesting error:
Error in runHelper(model, frontendStart, intervals, silent, suppressWarnings, :
NLOPT unrecognized error -1; please report to developers
It is somewhat difficult to replicate, and it took some time to put together the attached example. As you can see in the attached, I read in the same dataset and fit the same model 10 times, obtaining a likelihood-based CI for a single parameter each time. The error only shows up for some replications.
Thought you all should know!
In the meantime, I have no problem rolling back to a more stable version, provided that the boundAdj argument is available for mxCI (even though it's not necessary in this particular example).
Thank you!
unrecognized error
I just pushed a similar simulation,
inst/models/enormous/wu-neale-2012-lgc.R
This is how I recommend you structure your simulations. Use set.seed(result[rx,'rep']) inside the replication loop so that every replication can be reproduced individually. That way you can point to a specific replication that triggers the NLOPT unrecognized error.
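A minimal base-R sketch of that structure, with a hypothetical runOneRep() standing in for the actual OpenMx fit (the function and the result-frame layout beyond the 'rep' column are my own illustration, not from the script above):

```r
# Sketch of a reproducible simulation loop. runOneRep() is a hypothetical
# stand-in for the real OpenMx fit; here it just draws a sample mean.
runOneRep <- function() mean(rnorm(100, mean = 0.5))

nReps <- 10
result <- data.frame(rep = 1:nReps, est = NA_real_)
for (rx in 1:nReps) {
  set.seed(result[rx, 'rep'])  # seed per replication, as recommended above
  result[rx, 'est'] <- runOneRep()
}

# Because the seed is per-replication, any single row can be re-run in
# isolation to reproduce a failure:
set.seed(result[3, 'rep'])
stopifnot(identical(runOneRep(), result[3, 'est']))
```

The point of the per-replication seed is that a failing replication can be re-run on its own, without replaying the whole simulation.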
As you suspected, NLOPT unrecognized error -1 is only reported by SLSQP. You will not see this error with other optimizers. However, if you are interested in the boundary-adjusted CIs then you will need to use SLSQP.
Hm, looking at your script, I think you need to add lbound=0 for the latent factor covariance. But wait, this parameter cannot be less than 0, so the boundary adjustment is not needed. I will email you my draft manuscript to explain better. I suspect you can avoid optimizer failures by estimating the Cholesky factor of the correlation matrix. inst/models/passing/bivCorM.R is an example of how to do this.
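For intuition, the Cholesky trick can be sketched without OpenMx (bivCorM.R is the real in-package example; the function below is my own illustration): parameterize through an unconstrained value z, build a lower-triangular factor with unit-norm rows, and the implied correlation stays strictly inside (-1, 1) with no explicit bound.

```r
# Sketch: Cholesky parameterization of a 2x2 correlation matrix.
# z is unconstrained; the implied correlation is z / sqrt(1 + z^2),
# which always lies strictly inside (-1, 1), so no bound is needed.
corFromZ <- function(z) {
  l21 <- z / sqrt(1 + z^2)
  L <- matrix(c(1, 0,
                l21, sqrt(1 - l21^2)), nrow = 2, byrow = TRUE)
  L %*% t(L)   # the implied correlation matrix
}

R <- corFromZ(3)
stopifnot(all(abs(diag(R) - 1) < 1e-12))  # unit diagonal by construction
stopifnot(abs(R[1, 2]) < 1)               # correlation inside (-1, 1)
```

Because the free parameter z is unbounded, the optimizer never sits on a boundary, which is exactly the situation that seems to trigger the SLSQP failure.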
In reply to unrecognized error by jpritikin
negative covariance
In reply to unrecognized error by jpritikin
I see, thank you! I put this
I put this together quickly, and so didn't notice that perhaps I could also put bounds on the factor correlation (-1 to 1, or some such, if that's possible).
So, this is a known problem for SLSQP? Do we know how frequently (and under what conditions) it occurs, and why?
I also noticed that the OpenMx version number seems odd to me, even though I got the latest version from GitHub and compiled from source yesterday in order to put this together: 2.6.8.198 [GIT v2.6.8-198-gacf5469]
I thought the most stable release was 2.6.9?
I will have a follow-up question regarding the implementation of the underlying algorithm for likelihood-based CIs: in particular, whether Wu & Neale is always used (as opposed to the Neale & Miller algorithm), and where to poke around in the source code for its actual implementation. For the most part, I can "trick" OpenMx into giving me what I want for a recent manuscript, but my manual implementation of Wu & Neale fails for trivial cases even when I use the Pek & Wu code as a model. I still have to debug to ensure I haven't done something incorrectly. Perhaps that issue is best placed in a separate thread anyway.
In reply to I see, thank you! I put this by falkcarl
Regarding the bounds on
Regarding the version, 2.6.8 and 2.6.9 are almost exactly the same. It outputs the incorrect version because I forgot to rebase on top of 2.6.9 when we did the release. 2.6.8.X is the latest.
"So, this is a known problem for SLSQP? Do we know how frequently (and under what conditions) this occurs? and why?" -- Yes, it is a known problem. It seems to occur when the parameter vector is close to the boundary of the feasible set.
The implementation is in ComputeGD.cpp starting around line 310. You can get details about which algorithm was used by using summary(model, verbose=TRUE) after the CIs are run. I provide an example in the draft manuscript I sent over.
In reply to Regarding the bounds on by jpritikin
Wait, so if one asks for CIs,
In reply to Wait, so if one asks for CIs, by falkcarl
default
In reply to default by jpritikin
Is that the case?
I think OpenMx has moved from Neale and Miller (1994) to Wu and Neale (2012). By Wu and Neale (2012) I don't mean the algorithm for a bounded parameter, but the algorithm for a parameter without any bound. The help file for the function mxComputeConfidenceInterval cites Pek and Wu (2015) rather than Wu and Neale (2012). The help file for the function mxCI says "The likelihood-based confidence intervals returned using MxCI are obtained by increasing or decreasing the value of each parameter until the -2 log likelihood of the model increases by an amount corresponding to the requested interval", which is a description of the Wu and Neale algorithm.
To be specific, when looking for the CI for a parameter "a" without bounds, does the program:
1) minimize (with respect to "a") the functions
F_L = (F + 3.84 - F_a)^2 + a and
F_U = (F + 3.84 - F_a)^2 - a,
where F is the -2 log-likelihood of the fitted model (-2F_{ML}) and F_a is the -2 log-likelihood with "a" fixed,
or
2) maximize and minimize the parameter "a" subject to the constraint that
F_a - F < 3.84?
In reply to Is that the case? by wuhao_osu
constraint type
In reply to constraint type by jpritikin
Did you mean...
My question concerns the two different algorithms that could be used for calculating a CI for a parameter without any artificial bound. They do differ in whether an inequality constraint is imposed, but they are fundamentally different algorithms that minimize different functions. Algorithm 1 (#1 in my last post) was proposed by Neale and Miller (1994); it minimizes the function
(F + 3.84 - F_a)^2 +/- a
This algorithm does not, in general, find the CI as defined by the profile likelihood; this was acknowledged in their paper.
Algorithm 2 (#2 in my last post) was proposed by Wu and Neale and later reviewed by Pek and Wu. Although that paper tackled parameters with bounds, the algorithm was proposed for parameters with no bound. It maximizes and minimizes the parameter "a" subject to the constraint that F_a - F < 3.84. This gives the profile likelihood CI as it is defined.
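The difference between the two is easy to see numerically on a toy quadratic deviance (a base-R sketch of my own, using the rounded 95% criterion 3.84):

```r
# Toy deviance: F_a = ((a - ahat)/se)^2, minimized (F = 0) at ahat.
ahat <- 2; se <- 0.5
Fa <- function(a) ((a - ahat) / se)^2

# Algorithm 1 (Neale & Miller, 1994): minimize (F + 3.84 - F_a)^2 + a
# for the lower limit. Its minimizer is only approximately the profile limit.
alg1 <- optimize(function(a) (0 + 3.84 - Fa(a))^2 + a,
                 interval = c(ahat - 3, ahat))$minimum

# Algorithm 2 (Wu & Neale, 2012): minimize "a" subject to F_a - F <= 3.84.
# For a unimodal deviance this is the root of F_a - 3.84 below ahat.
alg2 <- uniroot(function(a) Fa(a) - 3.84,
                lower = ahat - 3, upper = ahat)$root

exact <- ahat - se * sqrt(3.84)  # profile likelihood lower limit
# alg2 recovers `exact` (up to uniroot's tolerance); alg1 is close but
# systematically offset, which is the inexactness acknowledged in the paper.
```

On this toy problem the two limits differ by under 0.01, but only Algorithm 2 hits the profile likelihood limit as defined.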
When you said "SLSQP prefers an inequality constraint", did you mean Algorithm 2 (b/c algorithm 1 has no constraint at all)? When you said "the other optimizers use a penalty function that is added to the fit", did you mean Algorithm 1?
By the way, SLSQP is the default optimizer. Am I right?
In reply to Did you mean... by wuhao_osu
algorithms
> When you said "SLSQP prefers an inequality constraint", did you mean Algorithm 2 (b/c algorithm 1 has no constraint at all)? When you said "the other optimizers use a penalty function that is added to the fit", did you mean Algorithm 1?
This is clarified in Pritikin, Rappaport, & Neale (in-press). See Equations 1-4.
> By the way, SLSQP is the default optimizer. Am I right?
It was, but CSOLNP is the default as of OpenMx v2.7.4.
In reply to algorithms by jpritikin
Thanks
May I know why SLSQP is no longer the default?
In reply to Thanks by wuhao_osu
constraints
I think it is not the form of the constraint that matters but whether the constraint is encoded as a penalty function (which will have bias) or as a separate constraint (passed to a constraint-aware optimizer).
> May I know why SLSQP is no longer the default?
Some believe that CSOLNP performs better in most circumstances.
In reply to constraints by jpritikin
You are absolutely right.
In reply to You are absolutely right. by wuhao_osu
penalty
In reply to penalty by AdminRobK
Thanks
Your code fixed
The reason your version did not work is that you did not specify appropriate bounds for the parameters. The search for a CI is delicate and prone to failure, so you need to bound the search within a reasonable range. In your model, the parameter cov is now bounded between -0.4 and 0.99, and two loadings are bounded to be positive to avoid a sign change. As long as the search does not touch the artificial bounds, the result is fine; if a bound is touched, set a slightly more lenient bound.
Here is the revised code.
library(OpenMx)  # needed for mxData, mxPath, mxModel, mxCI, mxRun

# Read the data once; the same dataset is refit on every replication.
dat <- read.table(file="dat100NormalNA198.dat")
colnames(dat) <- paste("X", 1:8, sep="")

for (i in 1:100) {
  print(i)
  data <- mxData(observed=dat, type="raw")
  # Residual variances for the 8 indicators
  resVars <- mxPath(from=paste("X",1:8,sep=""), arrows=2,
                    free=TRUE, values=rep(1,8), labels=paste("e",1:8,sep=""))
  # Latent variances fixed to 1; the factor covariance "cov" is free
  latVars <- mxPath(from=c("F1","F2"), arrows=2, connect="unique.pairs",
                    free=c(FALSE,TRUE,FALSE), values=c(1,0,1),
                    labels=c("varF1","cov","varF2"))
  facLoads1 <- mxPath(from="F1", to=paste("X",1:4,sep=""), arrows=1,
                      free=rep(TRUE,4), values=rep(1,4),
                      labels=paste("l",1:4,sep=""))
  facLoads2 <- mxPath(from="F2", to=paste("X",5:8,sep=""), arrows=1,
                      free=rep(TRUE,4), values=rep(1,4),
                      labels=paste("l",5:8,sep=""))
  means <- mxPath(from="one", to=c(paste("X",1:8,sep=""),"F1","F2"), arrows=1,
                  free=c(rep(TRUE,8),FALSE,FALSE), values=c(rep(1,8),0,0),
                  labels=c(paste("meanx",1:8,sep=""),NA,NA))
  CIModel <- mxModel("CIModel", type="RAM",
                     manifestVars=paste("X",1:8,sep=""), latentVars=c("F1","F2"),
                     data, resVars, latVars, facLoads1, facLoads2, means)
  # Bound the covariance and pin the sign of one loading per factor
  CIModel <- omxSetParameters(CIModel, labels="cov", ubound=.99, lbound=-.4)
  CIModel <- omxSetParameters(CIModel, labels=c("l1","l8"), lbound=0)
  alpha <- .05
  CI <- mxCI("cov", interval=1-alpha, type="both")
  CIFit <- try(mxRun(mxModel(model=CIModel, CI), intervals=TRUE, silent=TRUE))
  if (inherits(CIFit, "try-error")) print(CIFit)
}