Results of bivariate moderation model not the same across runs

Chen SJ
Attachments:
bivariate moderation syntax.R (10.47 KB)
result1.txt (3.99 KB)
result2.txt (4.01 KB)

Hello everyone!

I am trying to run the bivariate moderation model put forward by Purcell, with a binary moderator (smoking or not) and a binary outcome (T2DM or not). I have run the model twice with the same syntax and the same data, without any change; however, the results differ, not only the point estimates but also the -2LL. My syntax and results are in the attached files.

I use the NPSOL optimizer and mxTryHardOrdinal() with extraTries=30. I also use a custom compute plan with Nelder-Mead, as advised by Professor Robert. set.seed(100) is included in the script.
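In outline (the full script is in the attached .R file), the relevant setup looks something like this, with ACEmodModel standing in for my container model:

set.seed(100)                                 #<--Initialize the pseudorandom-number generator.
mxOption(NULL, "Default optimizer", "NPSOL")  #<--Switch from the on-load default (CSOLNP) to NPSOL.
ACEmodFit <- mxTryHardOrdinal(ACEmodModel, extraTries=30)  #<--30 extra attempts with perturbed start values.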
The output of mxVersion():
OpenMx version: 2.17.2 [GIT v2.17.2]
R version: R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32
Default optimizer: CSOLNP
NPSOL-enabled?: Yes
OpenMP-enabled?: No

There is a warning in my results: "The model does not satisfy the first-order optimality conditions to the required accuracy..." From the forums, I know that this warning is quite common with categorical variables.

My biggest puzzle is that the results of the two runs are different, which makes the results seem unreliable.

I'll be really grateful if anyone can offer some suggestions!

AdminRobK
2 things

Two things. First, you're not using your custom compute plan correctly. It needs to go into your container MxModel, e.g.,

plan <- omxDefaultComputePlan(intervals=TRUE) #<--Note `intervals=TRUE`.
plan$steps <- list(
    NM=mxComputeNelderMead(centerIniSimplex=TRUE), #<--Nelder-Mead runs first.
    GD=plan$steps$GD,
    CI=plan$steps$CI, #<--Note the confidence-interval step.
    ND=plan$steps$ND,
    SE=plan$steps$SE,
    RD=plan$steps$RD,
    RE=plan$steps$RE)
ACEmodModel <- mxModel("ACEmod", modelMZ, modelDZ, funML, multi, Confmod, Conface, plan) #<--N.B. the plan goes into the container MxModel.

set.seed(100)
ACEmodFit <- mxTryHardOrdinal(ACEmodModel, extraTries=30, intervals=TRUE) #<--No plan here; it's already in the model.

Notice also what I had to change to get a confidence-interval step into your custom compute plan.

Second, there's a simple explanation for why you're getting different results from your two different runs. As you know, mxTryHard*() randomly perturbs the start values between fit attempts. set.seed() does NOT ensure that every call to mxTryHard*() will produce the same results. Rather, it merely initializes the sequence of pseudorandom numbers in a way that's reproducible. What your script is currently doing is setting the RNG seed, generating a sequence of pseudorandom numbers during the first call to mxTryHardOrdinal(), and then continuing the sequence of pseudorandom numbers in the second call to mxTryHardOrdinal(). If you want your second call to give the same results as your first call, you need to do set.seed(100) again before your second call.
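As a minimal sketch (reusing the object names from above), making the two runs reproducible amounts to:

set.seed(100) #<--Initialize the RNG immediately before the first call.
fit1 <- mxTryHardOrdinal(ACEmodModel, extraTries=30, intervals=TRUE)
set.seed(100) #<--Re-initialize, so the second call sees the same pseudorandom sequence.
fit2 <- mxTryHardOrdinal(ACEmodModel, extraTries=30, intervals=TRUE)
#fit1 and fit2 should now be based on the same sequence of perturbed start values.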

AdminRobK
one other thing

One other thing: since you're using both Nelder-Mead and NPSOL to fit a model that contains equality MxConstraints, you might want to lower the feasibility tolerance from its on-load default, with mxOption(NULL,"Feasibility tolerance",0.001).
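That call sets the option globally (the NULL first argument means it applies to every model, not just one), so it would go near the top of the script, before the model is run, e.g.:

mxOption(NULL, "Feasibility tolerance", 0.001) #<--Tighten the feasibility tolerance globally.
set.seed(100)
ACEmodFit <- mxTryHardOrdinal(ACEmodModel, extraTries=30, intervals=TRUE)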