Hi all,

I am running a bivariate moderation model where the phenotype of interest is dichotomous and the moderator is continuous (centered and scaled, but highly skewed). The estimates that I am getting are way off and don't match the analysis without moderation; this is particularly true for the correlation estimates. That is, when I compare estimates at the zero level of the moderator with the estimates when there is no moderation in the model, they are very different. For example, rE=-0.27 at M=0 versus rE=0.06 if moderation is not modelled; rP=-0.21 at M=0 versus rP=0.17 with no moderation (0.17 is also the value observed in the data).

We are interested in three moderators, and such discrepancies are observed for two of them.

What could be the reason for that? I'm a bit lost and don't know how to proceed, or whether I can trust the results of the moderation model.

A bit of background: I am running an ADE model based on previous results, but the CIs for D are very wide, and D could be dropped from the model without significant deterioration of fit. We decided to keep it in the model for now because of the reviews we got on our previous results (since the CIs are large).

For one of the moderators, these estimates are off if D is present but become consistent with the previous analysis if D is dropped. For the other two moderators, dropping D did not change the rE and rP values.

Variance estimates for the main phenotype are consistent and in agreement with the previous analysis if D is dropped out of the model.

I tried both Purcell's bivariate moderation model (based on the Cholesky decomposition) and a correlated-factors solution with moderation of the paths, and both consistently give weird estimates for two of the three moderators.

Also, there seems to be no significant moderation of any of the paths.

If anyone has any insight into what is going on here and why the estimates at M=0 don't match the estimates when there is no moderation, I would be very grateful!

Thank you in advance!

Julia

I wouldn't be able to even guess without at least seeing the script you're working with.

`summary()` output might help, too.

Here are the scripts that I use, along with the output from the full moderation models and from the models without any moderation. With lowsupport_s as the moderator, rE and rP are not consistent between the Purcell moderation model and the CF moderation model, and differ from the main-effects model (and from the observed phenotypic correlation). When D is removed, they at least become consistent across the Purcell and CF models, but are still not equal to the no-moderation results (nor to the observed rP). Only when all moderation is removed does the estimated rP agree with the observed rP.

Thank you for looking at it!

In the Purcell model, have you tried removing the lower bounds on the unmoderated terms of the path coefficients? I notice there's an active bound at the solution:

It seems to me that the bounds shouldn't matter, since you mean-centered the moderator and you identify the liability scale by constraining the variance at the moderator's zero point. Still, I'm not 100% sure about that. Notice that the point estimate above is actually negative, meaning that the bound is slightly violated. If the bound weren't there, would the optimizer push that parameter further into the negative region?
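One way to check, without rebuilding the model, is to clear the lower bounds with `omxSetParameters()`. This is only a sketch: `ModModel` and the labels `"a11"`, `"a21"`, `"a22"` are placeholders for whatever your script actually names the moderation model and the bounded, unmoderated path coefficients.

```r
# Placeholder names: 'ModModel' is the moderation MxModel; "a11" etc. stand
# in for the labels of the bounded path coefficients in your script.
# Setting lbound to NA removes the lower bound on those free parameters.
ModModelNoBound <- omxSetParameters(ModModel,
                                    labels = c("a11", "a21", "a22"),
                                    lbound = NA)
ModFitNoBound <- mxRun(ModModelNoBound)
```

If the estimate then moves appreciably below zero, the bound was genuinely active at the solution.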

Note that you don't need to put `var_constraint` into both the MZ and DZ MxModels. It suffices to put it into only one of them.

Have you tried using a different optimizer?

Consider replacing `mxRun()` with `mxTryHardOrdinal()`.
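The swap is a drop-in replacement: `mxTryHardOrdinal()` takes the same MxModel and retries the fit from perturbed start values, which tends to be more robust with ordinal/threshold data. The object name `ModModel` below is a placeholder.

```r
# Before:
# ModFit <- mxRun(ModModel)
# After: same model object, but repeated attempts with jittered start values,
# keeping the best solution found across attempts.
ModFit <- mxTryHardOrdinal(ModModel)
summary(ModFit)
```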

In the correlated-factors model, why is 'Rd' fixed to -0.7?

I tried removing the lower bounds on the path estimates, and the value went only just below zero:

I also tried other optimizers as you suggested: SLSQP produced nearly identical results, whereas CSOLNP just hung for 10 minutes and I had to terminate the session.

As for rD=-0.7 in the CF model, this gave the best model in terms of AIC when trying rD values from -1 to 0 in steps of 0.1 (the previous analysis indicated a negative rD).

Also, thank you for the note about putting the variance constraint into the model only once. I actually think that in some other scripts I had it inside the global model; I don't know why I changed it here.

Did the fit value appreciably improve?

It's encouraging that SLSQP's results agree with NPSOL's. Did you try CSOLNP with the MxConstraint in both the MZ and DZ models? There is a known issue with CSOLNP and redundant equality constraints. In the next OpenMx release, CSOLNP will at least not freeze uninterruptibly when there are redundant equalities.

But why isn't it a free parameter?

Have you tried `mxTryHardOrdinal()`?

Yes, I did. The results are still the same, with no improvement in fit.

The fit value was left unchanged.

Yes, I put the constraint just into the MZ model and tried all the optimizers. Here are the fit indices:

Cholesky moderation (Purcell)

NPSOL: -2LL=12907.98, AIC=-2100.015

SLSQP: -2LL=12907.98, AIC=-2100.016

CSOLNP: -2LL=12907.98, AIC=-2100.016

CF moderation

NPSOL: -2LL=12910.81, AIC=-2105.186

SLSQP: -2LL=12910.84, AIC=-2105.156 (Mx Status Red)

CSOLNP: -2LL=12910.84, AIC=-2105.156

I thought that rA and rD (just like rA and rC) could not be estimated simultaneously, could they?

Well, it looks as though the optimizers really are finding the solution. I agree that the results seem odd, but I guess there's something wrong with our intuition!

I'm not used to thinking of moderation in terms of the correlated-factors parameterization, so I could be mistaken here, but I don't see any reason why they couldn't be estimated simultaneously. After all, you were able to estimate a cross-path for D, 'dC', in the Cholesky-parameterized model, right? It should be possible to get a correlated-factors solution equivalent to the Cholesky solution. But it's possible that the correlated-factors parameterization is harder to optimize.

Yes, what is puzzling me is that the estimated rP is so different from the observed rP! And that the estimates at M=0 are totally different from the estimates with no moderation. What can be the explanation here? Can we trust the moderation results here?

The reason why we try moderation in terms of CF is the paper by Rathouz PJ et al:

Rathouz PJ, Van Hulle CA, Rodgers JL, Waldman ID, Lahey BB. Specification, testing, and interpretation of gene-by-measured-environment interaction models in the presence of gene-environment correlation. Behav Genet. 2008;38(3):301–315. doi:10.1007/s10519-008-9193-4

There they say that the CF model has more power to detect moderation because it has fewer parameters to estimate. Since our power is quite limited due to the low number of twin pairs and the low prevalence of the dichotomous outcome, we thought we'd give CF a try. It doesn't seem to provide any evidence of moderation either, but its fit is better, although the correlation estimates are as weird as in Purcell's model (for two out of the three moderators that we tested).

It's been a few years since I read that Rathouz et al. paper, so there's a good chance I'm mistaken in what I posted about the correlated-factors parameterization.

Hello

I just ran the model posted by Julia (bivChol_Moderation.txt), but something goes wrong when there is no moderation in the model:

Error: The job for model 'MainEffects' exited abnormally with the error message: fit is not finite (Ordinal covariance is not positive definite in data 'DZ.data' row 13703 (loc1))

In addition: Warning message:

In model 'MainEffects' Optimizer returned a non-zero status code 10. Starting values are not feasible. Consider mxTryHard()

However, I don't know the exact reason; could you offer any suggestions?

Thanks!


Evidently, you need better start values for the free parameters. Did you modify the block of syntax that sets the start values? Values that worked for Julia might not work well with your dataset.

Search this website for "start values". You'll find plenty of advice and discussion about the topic, e.g., this thread. Another thing you could try is to replace `mxRun()` with `mxTryHardOrdinal()` in your script, and see if that helps. I can't really offer any more specific advice without more details from you.

I still can’t run my model.

Firstly, I ran the following syntax, which produced some warning and error messages.

Many of the Std.Errors are NA in ACEmodModel.

As for MainEffectsModel (without the moderator),

Secondly, I ran mxGetExpected(). However, I just don’t know how to reset the start values.

I also replaced mxRun() with MainEffectsFit <- mxTryHardOrdinal(MainEffectsModel). Though the NA errors were removed, the model without the moderator still didn’t work.

I read the suggestions you posted, but I failed to use the Nelder-Mead implementation. I might be a little stupid, so I need your help.

Please!

First off, only put `var_constraint` in one of the MZ or DZ MxModels, not both. That won't matter if you're using NPSOL, but it will be a problem if you use CSOLNP (or Nelder-Mead).

It looks like the optimizer is reaching a solution where the Hessian (as calculated) isn't positive definite (status code 5). Your phenotype is a threshold trait, and due to the limited accuracy of the algorithm for the multivariate-normal probability integral, code 5 can sometimes occur even when the optimizer has found a minimum. Therefore, you'll want to find the solution with the smallest fit-function value, even if it has status code 5. Try requesting more attempts from `mxTryHardOrdinal()` via its `extraTries` argument, e.g. `extraTries=30`.

I suggest running the main-effects model before the moderation model. Use `free=FALSE` when creating `modPathA`, `modPathC`, and `modPathE`. Then, the first MxModel you run will be the main-effects model. After that, create the moderation model from the fitted main-effects model, and use `omxSetParameters()` to free the moderation parameters.

What were you trying to do? Use `omxSetParameters()` to change free-parameter values.

If you want to try it, let me suggest some syntax:
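Reconstructing from the snippet quoted later in the thread, the suggestion is to prepend a Nelder-Mead step to the default compute plan, keeping the remaining default steps in place:

```r
# Build the default compute plan, then put a Nelder-Mead step in front of the
# default gradient-descent step, retaining the other default steps.
plan <- omxDefaultComputePlan()
plan$steps <- list(
  NM = mxComputeNelderMead(centerIniSimplex = TRUE),  # Nelder-Mead first
  GD = plan$steps$GD,  # then the usual gradient-descent optimizer
  ND = plan$steps$ND,  # numerical derivatives
  SE = plan$steps$SE,  # standard errors
  RD = plan$steps$RD,  # report derivatives
  RE = plan$steps$RE)  # report expectation
```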

Then, put `plan` into the `mxModel()` statement for `MainEffectsModel`, assuming you are creating and running `MainEffectsModel` first. To clear the custom compute plan from an MxModel and go back to the default compute plan, do `model@compute <- NULL`.

I changed the order of the models and fit the main-effects model first. I also deleted the var_constraint from the MZ MxModel (and, of course, tried this with the DZ MxModel as well). However, there is still an error message:

> MainEffectsFit <- mxTryHardOrdinal(MainEffectsModel)

Solution found! Final fit=-444773.78 (started at 169287.04) (11 attempt(s): 5 valid, 6 errors)

It seems the model didn’t run properly. I wanted to try requesting more attempts from mxTryHardOrdinal() via the argument extraTries, e.g. extraTries=30.

Then, I put `plan` into the `mxModel()` statement for `MainEffectsModel`, and got errors:

```r
plan <- omxDefaultComputePlan()
plan$steps <- list(
  NM = mxComputeNelderMead(centerIniSimplex = TRUE),
  GD = plan$steps$GD, ND = plan$steps$ND, SE = plan$steps$SE,
  RD = plan$steps$RD, RE = plan$steps$RE)  # original had RE=plan$steps$RD, a typo

multi <- mxFitFunctionMultigroup(c("MZ", "DZ"))
## ACEmodModel <- mxModel("ACEmod", modelMZ, modelDZ, funML, multi)
MainEffectsModel <- mxModel("MainEffects", modelMZ, modelDZ, funML, multi, plan)
MainEffectsFit <- mxTryHardOrdinal(MainEffectsModel)
```

All fit attempts resulted in errors - check starting values or model specification

As for requesting more attempts from mxTryHardOrdinal() via the extraTries argument (e.g. extraTries=30, or even 90), the errors remain:

All fit attempts resulted in errors - check starting values or model specification.

In the main-effects model, the start values of the moderator path coefficients are 0, aren't they? Do I need to change the start values of the a, c, and e path coefficients?

```r
pathModVal <- c(0, 0.1, 0.1)
B_AgeVal <- 0.5
B_SexVal <- 0.5

## Matrices a, c, and e to store the a, c, and e path coefficients
pathA <- mxMatrix(name="a", type="Lower", nrow=nv, ncol=nv, free=TRUE,
                  labels=aLabs, values=svPaD, lbound=lbPaD)
pathC <- mxMatrix(name="c", type="Lower", nrow=nv, ncol=nv, free=TRUE,
                  labels=cLabs, values=svPaD, lbound=lbPaD)
pathE <- mxMatrix(name="e", type="Lower", nrow=nv, ncol=nv, free=TRUE,
                  labels=eLabs, values=svPeD, lbound=lbPaD)

modPathA <- mxMatrix("Lower", nrow=nv, ncol=nv, free=c(FALSE,FALSE,FALSE),
                     values=0, labels=aModLabs, name="aMod")
modPathC <- mxMatrix("Lower", nrow=nv, ncol=nv, free=c(FALSE,FALSE,FALSE),
                     values=0, labels=cModLabs, name="cMod")
modPathE <- mxMatrix("Lower", nrow=nv, ncol=nv, free=c(FALSE,FALSE,FALSE),
                     values=0, labels=eModLabs, name="eMod")
```
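Given matrices like these, with the moderation paths fixed at zero, the staged approach suggested earlier in the thread might be sketched as follows. This is only a sketch: the object names come from the script above, and `aModLabs`, `cModLabs`, and `eModLabs` are assumed to hold the labels of the moderation parameters.

```r
# 1) Fit the main-effects model first, with the moderation paths fixed at 0.
MainEffectsFit <- mxTryHardOrdinal(MainEffectsModel, extraTries = 30)

# 2) Build the moderation model from the *fitted* main-effects model, so the
#    shared parameters start at the main-effects solution; then free the
#    moderation parameters and give them small nonzero start values.
ACEmodModel <- mxModel(MainEffectsFit, name = "ACEmod")
ACEmodModel <- omxSetParameters(ACEmodModel,
                                labels = c(aModLabs, cModLabs, eModLabs),
                                free = TRUE, values = 0.1)
ACEmodFit <- mxTryHardOrdinal(ACEmodModel, extraTries = 30)
```

Starting the moderation model from the main-effects solution usually gives the optimizer much more feasible start values than fitting the full moderation model cold.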