Please see the comment in thread http://openmx.psyc.virginia.edu/thread/394#comment-1355 . There is definitely at least a cosmetic bug and quite possibly something more serious. It's critical as this is an example for the workshop next week.
I think I've tracked down the source of the bug, and it appears to be only cosmetic so far, but might have some implications down the line.
Apparently, all the definition variables are being reported as part of both FIML objective functions, probably because of interactions during model flattening. In the back end, each FIML function assumed that every definition variable was using the data object associated with that FIML function. This should not matter to the result, since the values that were being incorrectly populated in each calculation were independent of that calculation, but it's an important thing to have noticed.
I've corrected the back-end in r1118 so that definition variables that don't match the data source of the containing FIML objective aren't populated. Unless there's a reason to avoid it, I'm going to recommend we constrain FIML objective functions to walking through the rows of a single data set. This will prevent a potentially large number of row-mismatch problems.
Hmmm, I am not sure what you mean by "Unless there's a reason to avoid it, I'm going to recommend we constrain FIML objective functions to walking through the rows of a single data set." Can you clarify?
"Unless there's a reason to avoid it, I'm going to recommend we constrain FIML objective functions to walking through the rows of a single data set."
Let's say our model consists of two datasets (data1 and data2), one FIML objective function (objective) that is associated with data1, and two definition variables (data1.variable and data2.variable). Ignore the fact that the model is probably misspecified, since data2.variable is a definition variable that doesn't belong to any objective function. In the earlier version of the code, the FIML objective function would update both data1.variable and data2.variable upon traversing each row. In the new code, the FIML objective function will update only data1.variable and any other definition variable associated with data1.
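As an illustration only (plain Python, not the actual back-end code; the names `objective`, `defvars`, and the dict fields are hypothetical), the r1118 change amounts to a data-source check before populating each definition variable on a given row:

```python
def populate_defvars(objective, defvars, row):
    """Populate definition variables for one row of the objective's data.

    Pre-r1118: every definition variable was updated from this row,
    regardless of which dataset it actually belonged to.
    Post-r1118: only variables whose data source matches the objective's
    dataset are populated; the rest are left untouched.
    """
    for dv in defvars:
        if dv["data"] is objective["data"]:  # the r1118 source check
            dv["value"] = objective["data"][row][dv["column"]]

# Toy setup mirroring the scenario above: one objective tied to data1,
# definition variables drawn from data1 and data2.
data1 = [{"variable": 1.5}, {"variable": 2.5}]
data2 = [{"variable": 9.0}, {"variable": 9.5}]
objective = {"data": data1}
defvars = [
    {"data": data1, "column": "variable", "value": None},  # data1.variable
    {"data": data2, "column": "variable", "value": None},  # data2.variable
]

populate_defvars(objective, defvars, row=0)
print(defvars[0]["value"])  # 1.5 -- populated from data1
print(defvars[1]["value"])  # None -- data2.variable is skipped
```

In the earlier code the `if` guard was absent, so data2.variable would also have been filled from data1's rows.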
WRT the script that is not converging: I can think of at least two possible scenarios. Either the model is somehow misspecified (comparing the OpenMx script with the classic Mx script might confirm or refute this), or we have a bug lurking somewhere. We don't have a case in the test suite for a model with multiple definition variables, and this script consists of two datasets, two FIML objective functions, and four definition variables, two associated with each dataset.
I have reconciled the classic Mx and OpenMx scripts; they now agree both at their starting values and at their solutions, with definition-variable sensitive parameters being estimated. So I think we now have, for the continuous variable case, an example with two definition variables. Model misspecification was the culprit: an issue with the means formula in the classic Mx version. Means (F*(M+B@P))'|(F*(M+B@Q))' / should be used in place of Means (F*(M+B.P))'|(F*(M+B.P))' /, where sex_1 should be in P (a 1x1 full matrix) and sex_2 in Q.
NB I uploaded wrong R script before, so have deleted previous post and reposted here since there seems to be no way to change attachments.
When I run lgctwincontinuousdef.R I get the error "Objective function returned an infinite value." UPDATE: Grr, I forgot I'm supposed to download the file jepq2.txt from the forum post and rename it jepq.txt and then run the model. Now I get classic Mx code RED. Is that still supposed to happen?
Per the Grr, I agree it would be nice if the forum allowed a broader array of text file extensions than .txt, including .dat, .rec, .ord, and probably a few others, since these have been widely used in classic Mx. Renaming files in multiple places just to meet the forum's criteria wastes a lot of our time.
So, I am not sure whether numerical precision differences between PC and Mac may be generating your error; it's possible, though the hardware these days is basically the same Intel inside. Then again, I'm running 64-bit R, and perhaps that is the source of the differences. The summary for me (which agrees with the classic Mx estimates etc.) reads:
name matrix row col Estimate Std.Error
1 X 1 1 3.015793e+00 0.14259136
2 X 2 1 -5.456230e-01 0.14985319
3 X 2 2 4.063753e-01 0.53480581
4 Y 1 1 5.243366e-01 0.41169550
5 Y 2 1 6.439734e-01 0.24813741
6 Y 2 2 -1.079681e-05 0.88210912
7 Z 1 1 2.464531e+00 0.15653709
8 Z 2 1 -6.253963e-01 0.16228620
9 Z 2 2 1.442134e+00 0.09830726
10 T 1 1 -6.689126e-05 0.85386279
11 T 2 2 1.837686e+00 0.14188053
12 T 3 3 -2.324344e-07 0.76672559
13 U 1 1 -2.726397e-05 0.42016168
14 U 2 2 -2.209712e-06 0.66439542
15 U 3 3 -1.614567e-05 0.68249244
16 V 1 1 2.615055e+00 0.12047113
17 V 2 2 2.636174e+00 0.09250067
18 V 3 3 1.887472e+00 0.18555776
19 Im Mean 1 1 1.000089e+01 0.12224412
20 Sm Mean 2 1 -6.762807e-01 0.07492828
21 Bi Beta 1 1 5.987794e-01 0.16766228
22 Bs Beta 2 1 8.713218e-01 0.10422347
Observed statistics: 4169
Estimated parameters: 22
Degrees of freedom: 4147
-2 log likelihood: 23722.87
Saturated -2 log likelihood: NA
AIC (Mx): 15428.87
BIC (Mx): -3035.892
frontend time: 1.184104 secs
backend time: 52.67335 secs
independent submodels time: 6.699562e-05 secs
wall clock time: 53.85752 secs
cpu time: 53.85752 secs
openmx version number: 0.2.5-1050
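As a sanity check on the fit statistics above, the Mx-style AIC is just the -2 log likelihood penalized by twice the degrees of freedom, which can be verified directly (plain Python for illustration):

```python
# Mx-style AIC: -2lnL minus twice the degrees of freedom
minus2LL = 23722.87  # -2 log likelihood from the summary above
df = 4147            # degrees of freedom from the summary above

aic_mx = minus2LL - 2 * df
print(round(aic_mx, 2))  # 15428.87, matching "AIC (Mx)" in the summary
```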
This example is now working correctly.
Automatically closed -- issue fixed for 2 weeks with no activity.