The results of a bivariate moderation model are not the same across runs
| Attachment | Size |
|---|---|
| bivariate moderation syntax.R | 10.47 KB |
| result1.txt | 3.99 KB |
| result2.txt | 4.01 KB |
I am trying to run the bivariate moderation model put forward by Purcell, with a binary moderator (smoking or not) and a binary outcome (T2DM or not). I ran the model twice with the same syntax and the same data, without any change; however, the results are different, not only the point estimates but also the -2LL. My syntax and results are in the attached files.
I use the "NPSOL" optimizer and mxTryHardOrdinal() with extraTries=30. I also used a custom compute plan with Nelder-Mead, as advised by Professor Robert. `set.seed(100)` is included in the script.
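For reference, here is a minimal sketch of how the global optimizer can be selected in OpenMx (this is my reconstruction, not a line from the attached script; it assumes an NPSOL-enabled build, which the mxVersion() output below confirms):

```r
library(OpenMx)
# Switch the global default optimizer from CSOLNP to NPSOL.
# This only takes effect for models run after the call.
mxOption(NULL, "Default optimizer", "NPSOL")
```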
The output of `mxVersion()`:
OpenMx version: 2.17.2 [GIT v2.17.2]
R version: R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32
Default optimizer: CSOLNP
NPSOL-enabled?: Yes
OpenMP-enabled?: No
There is a warning in my results: "The model does not satisfy the first-order optimality conditions to the required accuracy..." From the forums, I know this warning is quite common with categorical variables.
My biggest puzzle is that the results of the two runs differ, which suggests the results are unreliable.
I'll be really grateful if anyone can give some suggestions!
Two things
plan <- omxDefaultComputePlan(intervals=TRUE) #<--Note `intervals=TRUE`
plan$steps <- list(
	NM=mxComputeNelderMead(centerIniSimplex=TRUE),
	GD=plan$steps$GD, CI=plan$steps$CI, ND=plan$steps$ND,
	SE=plan$steps$SE, RD=plan$steps$RD, RE=plan$steps$RE) #<--Note CI step
ACEmodModel <- mxModel( "ACEmod", modelMZ, modelDZ, funML, multi, Confmod, Conface, plan) #<--N.B.
set.seed(100)
ACEmodFit <- mxTryHardOrdinal(ACEmodModel,extraTries = 30,intervals=TRUE) #<--No plan
First, notice what I had to change to get a confidence-interval step into your custom compute plan.
Second, there's a simple explanation for why you're getting different results from your two different runs. As you know, `mxTryHard*()` randomly perturbs the start values between fit attempts. `set.seed()` does NOT ensure that every call to `mxTryHard*()` will produce the same results. Rather, it merely initializes the sequence of pseudorandom numbers in a way that's reproducible. What your script is currently doing is setting the RNG seed, generating a sequence of pseudorandom numbers during the first call to `mxTryHardOrdinal()`, and then **continuing the sequence of pseudorandom numbers in the second call to `mxTryHardOrdinal()`.** If you want your second call to give the same results as your first call, you need to do `set.seed(100)` again before your second call.
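The same behavior can be demonstrated with base R alone, without OpenMx (the variable names here are illustrative only):

```r
# One seed, two consecutive draws: the second draw CONTINUES the stream,
# so it does not match the first.
set.seed(100)
a1 <- runif(5)   # consumed by the "first call"
a2 <- runif(5)   # the "second call" picks up where the first left off

# Resetting the seed before each draw makes the draws identical.
set.seed(100)
b1 <- runif(5)
set.seed(100)    # reset the seed again before the second draw
b2 <- runif(5)   # b2 is now identical to b1
```

This is exactly why calling `set.seed(100)` once at the top of the script does not make two successive `mxTryHardOrdinal()` calls reproduce each other.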
one other thing
mxOption(NULL,"Feasibility tolerance",0.001)