Performance regression of OpenMx for simple CFA model
Posted by MaximilianStefan
Attachment | Size |
---|---|
obs_cov.rda | 58.95 KB |
lavmod.rda | 301 bytes |
I specified a simple CFA model (5 factors, 20 items each), simulated data from it (25 observations per parameter), and fitted the model in both lavaan and OpenMx. Surprisingly, OpenMx takes about 100x longer to fit the model (with the same starting values in lavaan and OpenMx). I attached an MWE.
The results on my machine are
lavaan: 0.16 seconds/31 iterations
OpenMx: 13.2 seconds/135 iterations
So OpenMx needs about 5x more iterations and 100x more time. I tried different starting values, but the overall picture does not change. OpenMx also reports 29917 function evaluations (fit@output$evaluations), which seems way too high. I assume I did something wrong in specifying the OpenMx model - maybe it is using numeric gradients instead of analytic ones?
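Since the full MWE is in the attached files, here is a rough sketch of the kind of comparison described above; the object names, the assumed sample size, and the exact model layout are my assumptions, not the attached code:

library(lavaan)
library(OpenMx)

# Stand-ins for the attachments: 'obs_cov' is the simulated covariance
# matrix, 'lavmod' the lavaan model syntax (5 factors, 20 items each).
load("obs_cov.rda")
load("lavmod.rda")
n_obs <- 5250   # assumed N: 25 observations per parameter, 210 parameters

# lavaan
system.time(
  fit_lav <- cfa(lavmod, sample.cov = obs_cov, sample.nobs = n_obs, std.lv = TRUE)
)

# OpenMx: the same structure built programmatically as a RAM model
items    <- colnames(obs_cov)
factors  <- paste0("f", 1:5)
loadings <- lapply(1:5, function(k)
  mxPath(from = factors[k], to = items[(k - 1) * 20 + 1:20], values = 0.5))
mod <- mxModel("cfa", type = "RAM",
               manifestVars = items, latentVars = factors,
               loadings,
               mxPath(from = items, arrows = 2, values = 1),                  # residual variances
               mxPath(from = factors, arrows = 2, free = FALSE, values = 1),  # factor variances fixed at 1
               mxPath(from = factors, arrows = 2, connect = "unique.bivariate",
                      values = 0.3),                                          # factor covariances
               mxData(obs_cov, type = "cov", numObs = n_obs))
system.time(fit_mx <- mxRun(mod))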
A bug probably
Thank you for the post and the minimal working example. At a quick look it seems that attempts to switch off the standard error calculations are not being heeded, since Yes and No result in the same timings. So we are working on it.
Thanks again!
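A sketch of how standard error and Hessian computation can be switched off for a single model via mxOption, to compare timings ('mod' is the hypothetical model from the sketch above):

library(OpenMx)

# turn off SEs and the Hessian for this model only, then refit and
# compare backend times
mod_noSE <- mxOption(mod, "Standard Errors", "No")
mod_noSE <- mxOption(mod_noSE, "Calculate Hessian", "No")
fit_noSE <- mxRun(mod_noSE)
fit_noSE@output$backendTime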
In reply to A bug probably by AdminNeale
correction
On closer inspection, that turned out not to be so.
performance
Thanks for the helpful
Another question, related to SLSQP: the OpenMx 2.0 Psychometrika paper says "the open-source NLopt family of optimizers is now selectable" - does this refer to SLSQP? Is there a way to also use the other optimizers from NLopt? In our Julia package we also provide the option to use NLopt, and I have observed that LBFGS is often faster.
more info
Nope. Analytic gradients are only implemented for multivariate normal models with no latent variables, IFA, and a few other cases. We hope to add gradients for RAM in the future.
> does this refer to SLSQP?
Yes, SLSQP is the BFGS optimizer from NLopt with some fixes. I've tried to submit these fixes back to the NLopt project, but upstream seems dormant.
> Is there a way to also use the other optimizers from NLopt?
Not without hacking the C++ code. I tried the LBFGS code from NLopt and it didn't work well for many of the models in our test suite. However, I could imagine that it would work well for some models.
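For reference, the optimizer can be switched globally via mxOption before a model is run (a sketch; only the optimizers compiled into the installed OpenMx build are selectable):

library(OpenMx)

# choose the global default optimizer before calling mxRun()
mxOption(NULL, "Default optimizer", "SLSQP")
# mxOption(NULL, "Default optimizer", "CSOLNP")
# mxOption(NULL, "Default optimizer", "NPSOL")   # only in NPSOL-enabled builds
fit_slsqp <- mxRun(mod)   # 'mod' as in the earlier sketch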
Ah, that's interesting - is
analytic gradients for RAM
Thanks!
sparse matrices.
Re sparse matrices: the "Matrix" package supports these, and also seems to transparently upgrade matrices to sparse representations as needed. Your rapid implementation sounds intriguing!
library(Matrix)

# toy example: build a sparse 8 x 10 matrix and multiply by a dense matrix
i <- c(1, 3:8); j <- c(2, 9, 6:10); x <- 7 * (1:7)
A <- sparseMatrix(i, j, x = x)   # 8 x 10 "dgCMatrix"
B <- matrix(rnorm(80), 10, 8)
A %*% B                          # sparse-dense product, an 8 x 8 result
In reply to sparse matrices. by tbates
re: Sparse Matrices
This difference is due to
> fit <- mxAutoStart(fit)
> fit <- mxRun(fit)
Running untitled1 with 210 parameters
> fit@output$iterations
[1] 10
> fit@output$evaluations
[1] 3001
> fit@output$backendTime
Time difference of 3.155973 secs
In reply to This difference is due to by lf-araujo
I meant, improves... not
should add autoStart to umxRAM
umx_time("start")
fit = mxAutoStart(model)
umx_time("stop") # auto start costs 1 second
fit = mxRun(fit)
umx_time(fit) # runtime drops to 1.3s (from 13s without autostart)
fit = mxRun(model) # 13s
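A minimal sketch of what such a wrapper could look like; the helper name umxRAM_auto is hypothetical and not part of umx:

# hypothetical convenience wrapper: apply mxAutoStart before fitting
umxRAM_auto <- function(model, ...) {
  mxRun(mxAutoStart(model), ...)
}
fit <- umxRAM_auto(model)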