Excessive memory usage when definition variables vary substantially

Posted by CharlesD (joined 04/30/2013)
Hi. I've been doing some work on a package that overlays OpenMx to manage continuous-time SEM models. Everything is working well, but I'm quite stuck on what seems to be some sort of memory-management issue with OpenMx. I've spoken with Tim Brick about this previously, but thought I'd put it up here for all to see :)

Basically, I'm estimating n-variate vector autoregressive models and constraining the discrete-time observations to an underlying continuous-time model with various algebra constraints and definition variables. See http://psycnet.apa.org/index.cfm?fa=search.displayRecord&id=7833EC1B-FEBA-3F1B-2A07-5321A5AE363A&resultID=4&page=1&dbTab=all&search=true for more details.

When each individual shares the same pattern of definition variables, i.e. all individuals are measured at the same times for each wave, things are fine. However, as soon as individuals vary in their measurement timings, memory usage skyrockets, and in many cases I'm unable to complete mxRun without R memory errors.

Two data files and an OpenMx script, reflecting a bivariate, five-time-point case, are available here:
https://www.dropbox.com/sh/c70cc0c6ghqefbi/F_HiRsutpU

data1 is rounded so that individuals share the same time intervals (seen in variables i1 to i4)
data2 has individually varying time intervals
the OpenMx script runs very quickly when data1 is used, but crashes on me when data2 is used (a rough sketch of how the intervals enter the model as definition variables follows)
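
For context, each per-individual interval is read into the model as a definition variable, roughly along these lines (a minimal sketch with a made-up matrix name, not the actual script):

# a 1x1 matrix whose value is filled in row by row from the i1 column of the data,
# via the 'data.' label prefix; 'interval1' is just an illustrative name
mxMatrix(type = 'Full', nrow = 1, ncol = 1, free = FALSE,
         labels = 'data.i1', name = 'interval1')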

Cheers for any help!

Replied on Thu, 01/09/2014 - 14:06 by mhunter (joined 07/31/2009)

I'm not sure about the differences in memory usage, but one thing that might make a lot of this easier is using OpenMx's built-in matrix exponential.


?mxAlgebra

It's called 'omxExponential'. From your code it looked like a lot of work was being done to get the eigenvalues and eigenvectors of the DRIFT matrix just so you could get the matrix exponential of it. Using omxExponential should require far less code, be more computationally accurate, be faster to execute, and use less memory.
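
For instance, something along these lines should do the whole job in one algebra (a sketch only, not your actual script; 'interval1' stands for however the definition-variable time interval is brought into your model):

# discrete-time effect over one interval, expm(DRIFT * dt), with dt supplied per
# row by the 1x1 definition-variable matrix 'interval1'
mxAlgebra(omxExponential(DRIFT %x% interval1), name = 'EXPd1')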

Cheers,
Mike Hunter

Replied on Thu, 01/09/2014 - 15:33 by CharlesD (joined 04/30/2013), in reply to mhunter

Thanks Mike - I came across omxExponential at some point and tried to make use of it, but I couldn't get sensible results from it and could find no documentation on it. Have you used it yourself, or can you at least confirm that it definitely works? I'll try it again either way, and will either post my problems with it or a more intelligible script...

Replied on Thu, 01/09/2014 - 23:00 by mhunter (joined 07/31/2009), in reply to CharlesD

I'm sorry you didn't get sensible results when you tried it before. The omxExponential function should definitely work. We regularly test that it produces the same result as the expm function from the Matrix package. In particular, for one random draw of A's values (shown below the mxMatrix call),


A <- mxMatrix(values = runif(25), nrow = 5, ncol = 5, name = 'A')
v w x y z
a 0.98992747 0.09948966 0.001311698 0.3544444 0.3965146
b 0.88295013 0.52858494 0.519290722 0.9884384 0.7651757
c 0.03132507 0.22425116 0.736221879 0.1530352 0.8852226
d 0.60146788 0.54289701 0.753248648 0.7589091 0.9555508
e 0.95679083 0.88014886 0.611096947 0.8427062 0.7980616

expm and omxExponential both give the following as the result.

[,1] [,2] [,3] [,4] [,5]
[1,] 4.474723 1.433815 1.319897 2.226954 2.547891
[2,] 5.850398 4.687663 3.948124 5.216776 5.848153
[3,] 2.707760 2.174611 3.897287 2.564916 3.922219
[4,] 5.439009 3.753324 4.270049 5.961495 6.075932
[5,] 6.484355 4.367193 4.384769 5.554721 7.410508

If you have any example of omxExponential giving a different result from expm, then this is a bug and the OpenMx development team will fix it pronto.

Let us know how the modeling goes!

Replied on Fri, 01/10/2014 - 09:10 by CharlesD (joined 04/30/2013), in reply to mhunter

OK, omxExponential 'seems' to calculate correctly... however:

When I use the original, overly complex algebras to provide the constraints, everything works fine: the expected parameters are returned, and the omxExponential algebras I have now included (but didn't refer to with any labels in this case) turn out equal to the original algebras. This state can be seen in stabledefs_originalalgebra.R.

When I use the omxExponential algebras as my constraints, all manner of non-convergence problems occur and the expected parameters are not returned. This state can be seen in stabledefs_omxExponentialalgebra.R.

Memory use still skyrockets when the definition variables vary individually, in both the original and omxExponential algebra cases. This is observable in variabledefs_originalalgebra.R and variabledefs_omxExponentialalgebra.R.
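
(By "turn out equal" I just mean comparing the two algebras after a successful run, something like the line below, with 'expAlg' and 'origAlg' standing in for whatever the two algebras are actually called in the script:)

# close to zero when the omxExponential algebra matches the original eigen-decomposition one
max(abs(mxEval(expAlg, testfit, compute = TRUE) - mxEval(origAlg, testfit, compute = TRUE)))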

The files are here:
https://www.dropbox.com/sh/c70cc0c6ghqefbi/F_HiRsutpU

Replied on Mon, 01/13/2014 - 11:46 by mhunter (joined 07/31/2009), in reply to CharlesD

I can't replicate your finding. Running the file "testing_stabledefs_omxExponentialalgebra.R" gives me the following error.


> testfit<-mxRun(larModel)
Running lar
Error: The job for model 'lar' exited abnormally with the error message: Expected covariance matrix is not positive-definite in data row 70 at major iteration 0 (minor iteration 0).
> summary(testfit)
Error in summary(testfit) :
error in evaluating the argument 'object' in selecting a method for function 'summary': Error: object 'testfit' not found
> mxEval(EXPd1,testfit)
Error in generateLabelsHelper(model, data.frame()) :
object 'testfit' not found
> mxEval(EXPd1,testfit,compute=T)
Error in generateLabelsHelper(model, data.frame()) :
object 'testfit' not found
> mxEval(intd1,testfit)
Error in generateLabelsHelper(model, data.frame()) :
object 'testfit' not found
> mxEval(intd1,testfit,compute=T)
Error in generateLabelsHelper(model, data.frame()) :
object 'testfit' not found

Replied on Tue, 01/14/2014 - 10:10 by CharlesD (joined 04/30/2013), in reply to mhunter

Thanks for checking! I just tested the file I provided on a recent build of OpenMx and on 1.3.2, on my own and a colleague's PCs, and it works OK. I'm not sure why it doesn't work on yours. The minimal example you provided works fine on mine too - something strange seems to be occurring in the more complex scenario...

Replied on Sat, 04/05/2014 - 11:17 by CharlesD (joined 04/30/2013), in reply to CharlesD

Thought I should update this briefly. omxExponential is generally working OK now. This approach eliminates the unsolvable-matrix errors I would previously get with poor starting values, but for whatever reason it doesn't converge to sensible values when the overall fit of the model is not so good. In any case, the memory issue remains, also with CSOLNP. Cases with 10,000 individuals, 8 time points and 2 latent processes readily exceed the 128 GB memory limit I was requesting...
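
(By "also with CSOLNP" I mean switching the optimizer globally, something like the line below, assuming an OpenMx build that includes CSOLNP:)

mxOption(NULL, 'Default optimizer', 'CSOLNP')   # switch the global default optimizer to CSOLNP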

Recreating the memory problem: load and run the variabledefs_originalalgebra.R file from https://www.dropbox.com/sh/c70cc0c6ghqefbi/F_HiRsutpU along with the data2 file (or just have the latter in your working directory). You should be able to observe the memory usage continually increasing. In contrast, if you change the data to data1 and fit using that, it fits straight away with no problems. This is just a toy example, but the same issue occurs with many real data sets.
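
A rough way to watch the peak from within R itself (assuming the script has been sourced so that larModel exists and data2 has been read in as in the script):

gc(reset = TRUE)             # reset R's max-memory counters before the run
testfit <- mxRun(larModel)   # with data2, memory climbs steadily; with data1 it finishes quickly
gc()                         # the 'max used' column reports the R-side peak after the run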

Replied on Mon, 01/13/2014 - 11:50 by mhunter (joined 07/31/2009), in reply to CharlesD

A minimal example does not replicate your finding. I believe the backend is returning the correct matrix.


require(OpenMx)
require(Matrix)

mod <- mxModel("Test Matrix Exponential",
    mxMatrix(values = runif(25), nrow = 5, ncol = 5, name = 'A'),
    mxAlgebra(omxExponential(A), name = 'test1'),
    mxAlgebra(omxExponential(t(A)), name = 'test2')
)

rmod <- mxRun(mod)

omxCheckCloseEnough(rmod$test1@result, as.matrix(expm(mod$A@values)), 0.001)        # result computed in model fitting
omxCheckCloseEnough(rmod$test2@result, as.matrix(expm(t(mod$A@values))), 0.001)
omxCheckCloseEnough(mxEval(test1, rmod), as.matrix(expm(mod$A@values)), 0.001)       # result accessed differently
omxCheckCloseEnough(mxEval(test2, rmod), as.matrix(expm(t(mod$A@values))), 0.001)
omxCheckCloseEnough(mxEval(test1, rmod, compute = TRUE), as.matrix(expm(mod$A@values)), 0.001)    # result re-computed
omxCheckCloseEnough(mxEval(test2, rmod, compute = TRUE), as.matrix(t(expm(mod$A@values))), 0.001)