CSOLNP is working quite nicely in general, but in a few circumstances things go dramatically wrong and crash my machine (hard reboot needed) due to excess memory usage if I don't notice fast enough. This doesn't occur when using NPSOL. With the earlier memory usage issues ( https://openmx.ssri.psu.edu/thread/2551 ), memory usage increased gradually, but in this case it seems much more sudden.
A problem model can be downloaded from:
https://www.dropbox.com/s/kgpsualdlnaowhs/memprobmodel.RData?dl=0
test <- mxRun(memprobmodel, intervals=T)
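In case it helps, here is a minimal reproduction sketch, assuming the .RData file has been saved to the working directory (it loads as the MxModel 'memprobmodel'):

library(OpenMx)
load("memprobmodel.RData")                     # provides 'memprobmodel'
test <- mxRun(memprobmodel, intervals = TRUE)  # watch system memory while this runs
summary(test)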
Edit: I don't know specifically what causes the issue, but I'm making extensive use of algebras, exponential functions, and definition variables.
Edit 2: The problem still exists even with the latest updates (26-8-2014). So far I have only experienced it when calculating confidence intervals. With the above model, after a few minutes of fitting with memory usage at a couple of hundred MB, usage suddenly starts climbing very rapidly. The problem occurs on more than one PC.
Thanks for your report.
Sorry for the confusion. I've edited the top post to reflect my current understanding: the problem still persists; I just only notice it when calculating confidence intervals.
I could not reproduce this fault with OpenMx from SVN 3766, run on a Mac Pro. There was no sign of excessive RAM usage (the machine has 64 GB but reported 53 GB free throughout). For the record, here's the output I got with CSOLNP:
And with NPSOL (which finds a lower minimum, unusual instance of better performance with NPSOL than CSOLNP):
Re-running CSOLNP improves the solution somewhat, but it gets stuck again at a -2 log-likelihood of 1890.516, and no improvement was obtained from a third run. It was quite happy to stick with the estimated parameters from NPSOL, though, and returned standard errors without NAs:
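For anyone wanting to replicate that warm-start check, a sketch (assuming the model object is called memprobmodel, as in the top post; mxRun() on an already-fitted model restarts from its current estimates):

library(OpenMx)
mxOption(NULL, "Default optimizer", "NPSOL")   # fit with NPSOL first
fitNPSOL <- mxRun(memprobmodel)
mxOption(NULL, "Default optimizer", "CSOLNP")  # then hand its solution to CSOLNP
fitBoth <- mxRun(fitNPSOL)                     # starts from NPSOL's estimates
summary(fitBoth)                               # standard errors without NAs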
I cannot reproduce it with CSOLNP either, on 32-bit Windows.
Edit: With revision 3751.
When using 32-bit R 3.1.0 on Windows with the OpenMx beta binary, I don't get any huge memory usage. Running R and various background processes, I'm using 2.24 GB of RAM. Running the example model with intervals=TRUE, it hovers around 2.25 GB for a while and eventually (probably when doing the intervals) climbs slowly to 2.45 GB. When the model is done, everything goes back down to around 2.25 GB. This corresponds to between 27% and 30% of total RAM in use. Nothing out of the ordinary to me. It looks like I'm not replicating this problem.
OK. I also don't get the issue with 32-bit R; memory usage remains very low. When I switch back to 64-bit, it uses all the spare physical memory on my laptop (6 GB) and Windows 'commits' 16 GB of virtual memory to the process. (I'm not clear on what that commitment actually means: is it using it, or just prepared to use it in some way? This is according to the Windows 8 Resource Monitor.)
But now I'm embarrassed... in the example I posted, the confidence intervals are set on an algebra. When I correctly set them on the 'discreteDRIFT' matrix rather than the 'DRIFT' algebra (the confusion arose because I've been switching between different parameter sets to work out which optimizes best), things work fine. I'll be surprised, though I won't say it's impossible, if this was the problem in the other cases. I'm impressed that confidence intervals can be estimated on an algebra in the first place; is that intended?
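For clarity, the difference is between these two ways of requesting the interval; 'ctModel' here is a hypothetical stand-in for the posted model, and mxCI() takes the name of a free parameter, matrix, or algebra:

library(OpenMx)
ciMatrix  <- mxModel(ctModel, mxCI("discreteDRIFT"))  # interval on the matrix: works fine
ciAlgebra <- mxModel(ctModel, mxCI("DRIFT"))          # interval on the algebra: the memory blow-up case
fit <- mxRun(ciMatrix, intervals = TRUE)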
Yes, that is a fully intended feature which has been present in classic Mx since 1995 and was designed into OpenMx from its earliest days.
I do hope that the memory issues are solved. Running the problem under Valgrind did not reveal any memory leaks. We really appreciate your input; keep the comments coming!
OK, just confirming that the issue does happen when I set confidence intervals on a free parameter, as I normally would... no example, as I didn't catch it before the PC froze. I'll go back to 32-bit R for the time being.
On Friday, I was running your memory-problematic model on a 64-bit Windows machine, under a debugger. When I compile without multithreading enabled, it doesn't hog memory, but it does seem to hang indefinitely. I'm trying to figure out where it gets stuck.
EDIT: Actually, I can tell from checkpointing that it's not hanging; it's just running a lot more slowly in debug mode than I thought. I also managed to trigger the memory leak on my 32-bit machine by running Charles' model repeatedly with mxTryHard() (in a build from trunk).

Yes, I seem to encounter quite a lot of cases of starting-value sensitivity with more complex continuous time models... making me think perhaps a Bayesian approach would work better, but I'd love to hear any other suggestions or thoughts for dealing with the issue.
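For anyone following along, a minimal sketch of the repeated-fitting loop mentioned above; mxTryHard() re-runs a model from randomly perturbed start values, which also makes it the usual first remedy for the starting-value sensitivity just described (memprobmodel as in the top post):

library(OpenMx)
load("memprobmodel.RData")
for (i in 1:20) {
  fit <- mxTryHard(memprobmodel, extraTries = 10)  # restarts from perturbed start values
  print(summary(fit)$Minus2LogLikelihood)          # watch RAM across iterations
}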
Please run mxVersion() and paste the output into a reply. We are having much difficulty reproducing the error you report, and want to make sure that we are using exactly the same version.
OK, right: on my machine with more memory the above model also fits, but memory usage still spikes to 6 GB or so, which illustrates what seems (to me) to be the problem (or potential improvement), as NPSOL fits with a steady 100 MB or so. Does memory usage not start going up rapidly after a few minutes for you two? I'm surprised it fits on 32-bit Windows, actually; I would have thought it would definitely hit memory problems. I've been trying to generate a more problematic example but can't at the moment; if I get one where memory spikes faster or higher, I'll post it.
> mxVersion()
OpenMx version: 2.0.0.0
R version: R version 3.1.1 (2014-07-10)
Platform: x86_64-w64-mingw32
Default optimiser: CSOLNP
This is with commit 9ce8fba on the master branch, on Windows 7 and Windows 8 PCs.
Charles
I strongly suspect that this is a bug that has already been fixed, and that you are using an outdated version of the beta. Your version number looks odd: it does not include a build number on the end, like this: 2.0.0.3766
When you say commit 9ce8fba, I am confused (though others on the dev team may not be). Were you building from source? The SVN tree is currently at version 3776.
Cheers
Mike
I was also surprised by the version number... I have RStudio set up with a project linked via Git to the Gitorious OpenMx repository (which is where I got the commit reference from), and I build by telling RStudio to build (after specifying the additional 'install' argument to the make command). This has worked OK in the past for getting updates: I can see the recent source code, including a recent change to the default summary output wherein the optimizer is reported.
If you could build from the SVN repository version, per http://openmx.psyc.virginia.edu/wiki/howto-build-openmx-source-repository , then I think the problem will go away. And you'll get a sensible version number.
Cheers
Mike
No change in behaviour; the model still goes to 6 GB of memory...
OpenMx version: 2.0.0.3777
R version: R version 3.1.1 (2014-07-10)
Platform: x86_64-w64-mingw32
Default optimiser: CSOLNP
Does anybody know if/how I can impose a lower memory limit on 64-bit Windows R? memory.limit() won't let me decrease it. If I could do this, I assume I would avoid the hard reboots on Windows 7 at least (my Windows 8 machine behaves more gracefully here: instead of bogging down to the point that I can't kill the task, it just pops up a message box complaining about memory usage).
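For reference, the attempt looks like this; on Windows, memory.limit() reports and sets the cap in MB, but (as documented) refuses to lower it below the current value:

memory.limit()             # reports the current cap in MB
memory.limit(size = 4000)  # warns that the limit cannot be decreased and ignores the call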
I'm not sure how to impose a memory limit in Windows, but you would need to impose a limit on the application's memory as a whole. OpenMx allocates much of its memory outside of R, so an R-level limit is not going to have much of an effect.
Yeah, commit 9ce8fba is a Git commit hash. We still use SVN as the definitive source code repository, so we need an SVN build number.
I'm getting the same behavior on a Windows 7 64-bit machine running 64-bit R 3.1 with the OpenMx binary. It looks like when the confidence intervals start, memory usage climbs linearly to 100% of RAM. Interestingly, the same machine running the same OpenMx on 32-bit R shows no problem.
I should have tried to reproduce the problem on the 64-bit Windows machine in my office last week before I left for the long weekend... Anyhow, I just ran Charles' memprobmodel2 with intervals=T, and R's memory usage began to climb ceaselessly, as he described. So it appears to be something specific to confidence intervals, with CSOLNP, under 64-bit Windows. FWIW:
Charles, I take it you are building OpenMx from source on your machine, correct? Which compiler are you using? Do you use the Rtools toolchain?
Yes, building from source, using Rtools.