
Memory leak(?) when running a RAM model with a constraint.

I found this bug while thinking about adding a test to be sure that mxStandardizeRAMpaths() behaves properly when the model contains an mxConstraint statement. The attached R script causes R to lock up and steadily consume more and more memory until it crashes. Sometimes R prints a std::bad_alloc message to the console before the window closes, but not always.
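For readers without the attachment, here is a minimal sketch of the kind of model involved: a one-factor RAM model with an mxConstraint on two loadings. This is an illustrative stand-in, not the attached script; the dataset, labels, and starting values are my assumptions.

```r
library(OpenMx)

# Toy one-factor RAM model; the mxConstraint is the ingredient that
# triggered the reported hang. Illustrative only -- not the attached
# crashing_RAM_with_constraint.R script.
data(demoOneFactor)
manifests <- names(demoOneFactor)

model <- mxModel(
  "OneFactorConstrained", type = "RAM",
  manifestVars = manifests, latentVars = "G",
  mxPath(from = "G", to = manifests, values = 0.8,
         labels = paste0("l", seq_along(manifests))),
  mxPath(from = manifests, arrows = 2, values = 1,
         labels = paste0("e", seq_along(manifests))),
  mxPath(from = "G", arrows = 2, free = FALSE, values = 1),
  mxData(cov(demoOneFactor), type = "cov", numObs = 500),
  # Constrain the first two loadings to be equal:
  mxConstraint(l1 == l2, name = "equalLoadings")
)

fit <- mxRun(model)
mxStandardizeRAMpaths(fit)
```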

The script has some commented-out checkpointing code in it, because I tried running the script with checkpointing to see how far it got during optimization. It never even created the checkpoint file.
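For reference, this is roughly how checkpointing can be enabled per model (option names as documented by OpenMx; `model` is assumed to be an existing MxModel). With per-iteration checkpointing, an empty checkpoint file suggests optimization never completed even one iteration:

```r
# Write a checkpoint row every iteration so we can see how far the
# optimizer gets before the hang. 'model' is assumed to exist.
model <- mxOption(model, "Always Checkpoint", "Yes")
model <- mxOption(model, "Checkpoint Units", "iterations")
model <- mxOption(model, "Checkpoint Count", 1)
fit <- mxRun(model, checkpoint = TRUE)
```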

I spent about an hour yesterday running this under gdb to see if there was an obvious cause, but I couldn't find one. It looked to me as though the program was stuck in a loop it couldn't break out of, allocating more memory on every pass. Is this the sort of thing valgrind can help diagnose?
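Valgrind may help here: R supports running itself under a debugger via the `-d` flag. A sketch, assuming the script is saved as the attached crashing_RAM_with_constraint.R and valgrind is installed:

```shell
# Run the script under valgrind (very slow, often 10-50x).
# --leak-check=full reports where truly lost blocks were allocated.
R -d "valgrind --leak-check=full --track-origins=yes" --vanilla \
  -f crashing_RAM_with_constraint.R

# Note: memory that keeps growing but is still reachable (e.g. an
# unbounded loop that keeps allocating) won't show up as "lost";
# valgrind's massif tool profiles where heap growth comes from:
R -d "valgrind --tool=massif" --vanilla -f crashing_RAM_with_constraint.R
```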

BTW, I forget the details, but Windows regulates a program's memory usage differently from Linux and Mac OS, so the results of running this script may vary on other platforms.

Attachment: crashing_RAM_with_constraint.R (1.08 KB)
Sat, 05/10/2014 - 15:25
Mon, 09/29/2014 - 17:51


ouch, kills R on OS X as well....

NPSOL runs fine, so it's a CSOLNP bug. I'll add it to models/failing.
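For anyone reproducing this, the optimizer can be switched globally before running the model (option name per OpenMx's documentation; NPSOL requires a build of OpenMx that includes it):

```r
library(OpenMx)
# Work around the hang by selecting NPSOL instead of the default CSOLNP:
mxOption(NULL, "Default optimizer", "NPSOL")
```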

This appears to work now. Can anyone confirm that the test I attached in the OP works for them, too?

I ran the script half a dozen times, along with mxStandardizeRAMpaths(), on my Windows build with r3803. It worked fine.