A few months back, I spent a few days banging my head against a wall trying to figure out how to relate the -2LL from a covariance model to that of a FIML model, so that FIML models with no missing data and no definition variables could be run much faster as covariance models. The two -2LLs differed by almost exactly log(2*pi)*n*nvar (the likelihood of the data, as we termed it), but never exactly, and I could never find the right correction. Greg's post earlier today (http://openmx.psyc.virginia.edu/thread/804) got me thinking about this issue again, and I've discovered that the -2LLs for identical models fit to raw and covariance data differ by exactly the likelihood of the data, provided the following two things (that I never tried together until now) are done to the covariance model:
-the data covariance matrix is computed WITHOUT the sampling correction (divide by n, not n-1), and
-the numObs slot in the mxData object is set to n+1, essentially setting the numObs-1 multiplier in the objective to n rather than n-1 (alternatively, multiplying the -2LL of the cov model by n/(n-1) has exactly the same effect).
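The reason this works is purely algebraic: the raw-data quadratic form decomposes into a trace term involving the divisor-n covariance plus a mean-deviation term, leaving exactly the n*nvar*log(2*pi) constant as the gap. A small NumPy sketch (not OpenMx itself; the data and parameter values are made up for illustration) shows the identity holds at any candidate mean and covariance, not just the MLE:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3                      # n observations, p = nvar variables
X = rng.normal(size=(n, p))

# Arbitrary model-implied mean and covariance; the identity is
# algebraic, so it holds at any (mu, Sigma), not just the MLE.
mu = np.zeros(p)
Sigma = 1.5 * np.eye(p)
Sinv = np.linalg.inv(Sigma)
logdet = np.linalg.slogdet(Sigma)[1]

# Raw-data -2LL (what FIML computes when nothing is missing):
# sum over rows of p*log(2*pi) + log|Sigma| + (x-mu)' Sigma^-1 (x-mu)
resid = X - mu
m2ll_raw = (n * p * np.log(2 * np.pi) + n * logdet
            + np.einsum('ij,jk,ik->', resid, Sinv, resid))

# Covariance -2LL with the divisor-n covariance and an n multiplier:
xbar = X.mean(axis=0)
S = (X - xbar).T @ (X - xbar) / n          # NO sampling correction
d = xbar - mu
m2ll_cov = n * (logdet + np.trace(Sinv @ S) + d @ Sinv @ d)

gap = m2ll_raw - m2ll_cov
print(gap, n * p * np.log(2 * np.pi))      # identical up to rounding
```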
I've included a script that shows this difference. It takes raw data and builds four different covariance models, crossing the two corrections (n-1 in the cov calculation with n-1 in the optimization; n-1 with n; n with n-1; and n with n). Using n rather than n-1 everywhere makes ML and FIML return likelihood values that differ by exactly log(2*pi)*n*nvar.
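The script itself is in OpenMx/R; the same four-way comparison can be sketched outside OpenMx with NumPy, using a deliberately simplified stand-in for the covariance ML objective (multiplier * (log|Sigma| + tr(Sigma^-1 S) + mean-deviation term); this is an illustration, not OpenMx's exact internal fit function). Only the divisor-n, multiplier-n combination reproduces the exact log(2*pi)*n*nvar gap:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = rng.normal(size=(n, p))

mu, Sigma = np.zeros(p), 1.5 * np.eye(p)
Sinv = np.linalg.inv(Sigma)
logdet = np.linalg.slogdet(Sigma)[1]

# Raw-data -2LL, as FIML would compute it with complete data.
resid = X - mu
m2ll_raw = (n * p * np.log(2 * np.pi) + n * logdet
            + np.einsum('ij,jk,ik->', resid, Sinv, resid))

xbar = X.mean(axis=0)
d = xbar - mu
target = n * p * np.log(2 * np.pi)

results = {}
for divisor in (n - 1, n):      # divisor in the covariance calculation
    S = (X - xbar).T @ (X - xbar) / divisor
    for mult in (n - 1, n):     # multiplier in the ML objective
        m2ll_cov = mult * (logdet + np.trace(Sinv @ S) + d @ Sinv @ d)
        exact = abs((m2ll_raw - m2ll_cov) - target) < 1e-6
        results[(divisor, mult)] = exact
        print(f"cov divisor {divisor}, objective multiplier {mult}: "
              f"gap == n*nvar*log(2*pi)? {exact}")
```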