Expected covariance matrix is not positive-definite in row 0

pdeboeck:

Can anyone tell me about this error?

Error: The job for model 'LDE_Model_1' exited abnormally with the error message: Expected covariance matrix is not positive-definite in row 0.

The odd part: I ran the model more than once, and this error came up with one data set but not another. It seems like different observed data should not change the expected covariance matrix.

mspiegel:

I don't have a lot of experience with this portion of the project, but maybe this advice can help: http://openmx.psyc.virginia.edu/wiki/errors

(Look at your starting values. If you have used starting values of 1, you are building an initial expected covariance matrix that is exactly singular. At the first iteration, OpenMx can't invert the expected covariance matrix and crashes.
Solution: Try changing your starting values (say, by making all of the covariances and free regressions .5) and see if it runs.)
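
A minimal illustration of that point (my own sketch, not from the wiki page): an expected covariance built from all-1 starting values is exactly singular, while starting the covariance at .5 leaves it invertible.

    badStart  <- matrix(1, nrow = 2, ncol = 2)      # variances and covariance all start at 1
    det(badStart)                                   # 0: exactly singular, solve(badStart) would fail
    goodStart <- matrix(c(1, .5, .5, 1), nrow = 2)  # covariance started at .5 instead
    solve(goodStart)                                # invertible, so optimization can get going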

pdeboeck:

This doesn't apply in my case, but it's certainly a good link to point people towards.

pdeboeck:

I think I got it working. The primary difference between the models that were & were not working: a damped linear oscillator (pendulum) with negative zeta (did not work) versus positive zeta (did work). I was using raw data with FIML. I multiplied the observed data by 100, and things seem to fall into place. That seems a little odd to me: damped linear oscillator data starting at amplitude 1 does not work, but at amplitude 100 it does. Does OpenMx not do well with numbers less than 1? Or perhaps there is rounding occurring somewhere?

This still doesn't explain why a problem in my observed covariance matrix was producing an expected covariance matrix error. Even if this is some form of user error, the error produced by OpenMx needs to point towards the observed matrix.

neale:

With FIML, the observed covariance matrix is not used; the likelihood is calculated for each data vector separately. Now, it is possible that the expected covariance matrix iterates towards being non-positive definite and that this is what generates the error. In fact, this is almost certain to happen if the number of data vectors (nvecs) is fewer than the number of variables (nvars) in the model. Sometimes it is possible to configure the model for the expected covariance matrix such that it never goes non-positive definite; a Cholesky decomposition whose diagonal elements have lower bounds a bit above zero would be such an example. Typically, optimization in the nvecs-less-than-nvars case would finish at a point where at least one of the diagonal elements is on its lower bound, near zero.
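
A rough sketch of that Cholesky idea in OpenMx syntax (my own example, meant to be dropped into an mxModel; the names nv, cholS, and expCov and the bound of 1e-4 are placeholders, not from any existing script):

    nv <- 3                                  # number of variables in the covariance being modeled
    lb <- matrix(NA, nv, nv)
    diag(lb) <- 1e-4                         # keep the Cholesky diagonal a bit above zero
    cholS  <- mxMatrix("Lower", nrow = nv, ncol = nv, free = TRUE,
                       values = diag(nv), lbound = lb, name = "cholS")
    expCov <- mxAlgebra(cholS %*% t(cholS), name = "expCov")   # stays positive definite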

pdeboeck:

Sorry, I misspoke. I meant that it does not seem like a problem with my observed data matrix should produce an error regarding the expected covariance matrix. The data consisted of an embedded matrix with 5 variables and 196 rows.

Ryne:

I'd rethink your starting values, not because they are initially bad, but because they send the optimizer down a path that ends at a non-invertible expected covariance for some row. Changing the data thus changes the optimization history at some iteration. If you specify starting values of the opposite sign from the final values (as may happen in your example), you could hit more problems.

pdeboeck:

It really seems like more than just a starting-values problem, but I have put together some code if you want to give it a try (below). The code references the attached data. In this code, the correct model is being fit to the data. Even starting at the correct values (known because I generated the data), the same error pattern is produced. If you comment/uncomment the line "#Data <- Data * 100" you'll see the error come and go. Note that the initial variances of the latent variables and error variances are automatically adjusted to reasonable starting values.

    rm(list=ls())
    library(OpenMx)

    Data <- dget("Data.R")
    #Data <- Data * 100
    manifestVars <- paste("x", c(1:5), sep="")
    dimnames(Data) <- list(NULL, manifestVars)
    toAnalyze <- mxData(Data, type="raw", numObs=dim(Data)[1])
    varest <- var(Data[,3])

    fit <- mxRun(mxModel("Model1", toAnalyze,
        # fixed LDE loading matrix: level, first derivative, second derivative
        mxMatrix("Full", free=FALSE,
            values=cbind(c(1,1,1,1,1), c(-2,-1,0,1,2), c(2,.5,0,.5,2)), name="L"),
        # error variances
        mxMatrix("Diag", 5, 5, values=varest*.3, free=TRUE, name="U", lbound=0),
        # manifest means
        mxMatrix("Full", free=TRUE, nrow=1, ncol=5, values=0, name="Means",
            dimnames=list(NULL, manifestVars)),
        # latent covariance matrix
        mxMatrix("Symm", 3, 3,
            values=c(varest*.7,  0,          0,
                     0,          varest*.07, 0,
                     0,          0,          varest*.007),
            free=c(  TRUE,  TRUE, FALSE,
                     TRUE,  TRUE, FALSE,
                    FALSE, FALSE,  TRUE),
            lbound=c( 0, NA, NA,
                     NA,  0, NA,
                     NA, NA,  0),
            name="S", byrow=TRUE),
        # dynamics: frequency (eta) and damping (zeta) of the oscillator
        mxMatrix("Full", 3, 3,
            values=c(  0,    0, 0,
                       0,    0, 0,
                     -.1, -.05, 0),
            free=c(FALSE, FALSE, FALSE,
                   FALSE, FALSE, FALSE,
                    TRUE,  TRUE, FALSE),
            name="A", byrow=TRUE),
        mxMatrix("Iden", 3, name="I"),
        mxFIMLObjective("R", "Means"),
        # expected covariance of the manifest variables
        mxAlgebra(L %*% solve(I-A) %*% S %*% t(solve(I-A)) %*% t(L) + U,
            name="R", dimnames=list(manifestVars, manifestVars))))
pdeboeck:

I guess the part that I find bothersome is that I don't remember the old Mx being this touchy about starting values. And it certainly seems like a model should not have problems converging when you start it very close to the true values.

Steve:

I suspect that this is a sensitivity issue having to do with a mismatch between the range of the second derivatives and the displacements. The multiply-by-100 fix is a tip-off that it could be a machine-precision problem. We are using different numerical libraries than old Mx, and that could be the source of the difference.

What does

summary(Data^2)

report? If there is a big discrepancy between the max of one column and the min of another column, this could lead to a non-invertible row.
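
A quick way to quantify that (my own sketch, assuming Data is the raw data matrix from the script above):

    # ratio of the largest squared value in any column to the smallest in any column;
    # a ratio spanning many orders of magnitude would point toward precision trouble
    max(apply(Data^2, 2, max)) / min(apply(Data^2, 2, min))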

pdeboeck:

    > summary(Data^2)
           x1                  x2                  x3                  x4                  x5
     Min.   :1.783e-06   Min.   :1.783e-06   Min.   :1.783e-06   Min.   :1.783e-06   Min.   :1.783e-06
     1st Qu.:6.146e-03   1st Qu.:6.146e-03   1st Qu.:5.656e-03   1st Qu.:5.268e-03   1st Qu.:5.029e-03
     Median :2.382e-02   Median :2.294e-02   Median :2.250e-02   Median :2.250e-02   Median :2.190e-02
     Mean   :7.545e-02   Mean   :7.267e-02   Mean   :7.107e-02   Mean   :7.104e-02   Mean   :6.983e-02
     3rd Qu.:6.049e-02   3rd Qu.:5.879e-02   3rd Qu.:5.752e-02   3rd Qu.:5.752e-02   3rd Qu.:5.686e-02
     Max.   :1.394e+00   Max.   :1.394e+00   Max.   :1.394e+00   Max.   :1.394e+00   Max.   :1.394e+00

pdeboeck:

Max and Min values are the same across variables, as this is an embedded matrix. The range of values is 0 to 1.39.

What are your thoughts on this?

neale:

I'd be a bit surprised if Mx1 didn't also have a problem with this dataset. The variance of each variable is really pretty small (around .07) and the optimizer has to skirt around possible trial values that would give a variance of zero. The likelihood of any observed data point that is not equal to the mean is zero when the model predicts the variance to be zero. The log-likelihood is negative infinity, which is an uncomfortable quantity to work with when calculating derivatives etc. Accordingly, I don't find it unreasonable for the estimation to get stuck. That the software doesn't provide a warning message that tells the user that this may be the problem is, however, a bit unreasonable, and that's my bad :(.
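
To illustrate the point about near-zero variances (my own example, not from the post above): the log-likelihood of any observation away from the mean plunges toward -Inf as the model-implied variance shrinks, and that is the region the optimizer has to skirt.

    x <- 0.5                                   # an observation not equal to the mean of 0
    vars <- c(1, 1e-2, 1e-4, 1e-8)             # model-implied variances shrinking toward zero
    sapply(vars, function(v) dnorm(x, mean = 0, sd = sqrt(v), log = TRUE))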

pdeboeck:

Any chance of getting warnings like those from Mx 1? They were often informative & helpful.

Steve:

I think you are looking for

    tModel <- mxRun(factorModel)
    tModel@output$status    # the NPSOL status code and message for the run

mspiegel:

Well, technically you probably want something that looks like this:

    if (output$status[[1]] > 0) {
        npsolWarnings(flatModel@name, output$status[[1]])
    } else if (output$status[[1]] < 0) {
        stop(paste("The job for model", omxQuotes(flatModel@name),
            "exited abnormally with the error message:",
            output$status[[3]]), call. = FALSE)
    }

This code is taken from the end of mxRun(), but if you happen to turn off warning messages then you won't see the output from npsolWarnings().
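
If warning messages are turned off, the same information can still be recovered by checking the status slot directly after mxRun(); a small usage sketch (with factorModel standing in for whatever model was run):

    tModel <- mxRun(factorModel)
    status <- tModel@output$status
    if (status[[1]] > 0) {
        print(status)    # positive NPSOL codes that a suppressed warning would otherwise report
    }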