Hi, everyone!

When I use `SatFit <- mxTryHardOrdinal(SatModel, intervals=TRUE, extraTries=15, finetuneGradient=FALSE, bestInitsOutput=TRUE)`, the result is:

`Solution found! Final fit=22505.957 (started at 22819.562) (16 attempt(s): 16 valid, 0 errors)`

I don't know why it doesn't report the start values from the best fit.

> mxVersion()

OpenMx version: 2.12.2 [GIT v2.12.2]

R version: R version 3.3.3 (2017-03-06)

Platform: x86_64-w64-mingw32

Default optimizer: SLSQP

NPSOL-enabled?: Yes

OpenMP-enabled?: No

Many thanks!

Sorry, I have one more question. Maybe this question is weird, but I'm not sure which one is right, or are they all right?

1.

2.

3.

4.

Many thanks!

Take a look at the man page for mxTryHard*, and in particular the entry for argument `bestInitsOutput`. But note: the default value for `silent` is `TRUE` if running in an interactive session, and `FALSE` otherwise. So, set `silent=FALSE`.
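For example, the call from the question would become something like the following sketch (`SatModel` is the model from the original post):

```r
library(OpenMx)

# Sketch: re-run with silent=FALSE so the start values from the best fit
# are actually printed to the console.
SatFit <- mxTryHardOrdinal(
  SatModel,
  intervals        = TRUE,
  extraTries       = 15,
  finetuneGradient = FALSE,
  bestInitsOutput  = TRUE,
  silent           = FALSE  # override the interactive-session default
)
```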

Thank you very much, but I have encountered another problem and really need your help.

I am trying to fit a bivariate ACE model and nested ACE models. My script is as follows:

When I fit AceModel, there are no warnings or errors. But when I fit the dropcp2 model with the line `dropcp2 <- omxSetParameters(AceModel, labels=c("c22"), free=FALSE, values=0, name="dropcp2")`, it reports a warning:

`Solution found! Final fit=22505.98 (started at 24091.038) (1 attempt(s): 1 valid, 0 errors)`

`Warning message:`
`In mxTryHard(model = model, greenOK = greenOK, checkHess = checkHess, :`
`The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)`

The same situation emerges when I fit the dropc12 model. I don't know how to fix it. Many thanks!

I'll first refer you to my answer to a recent question similar to yours. So, read that first.

For your case in particular, I have the following suggestions:

One of `mxTryHardOrdinal()`'s defaults is to accept status code 6 (which is what the warning you saw was about). If you don't want status code 6 to be considered acceptable, pass `OKstatuscodes=c(0,1,5)` as an argument to `mxTryHardOrdinal()`.
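For instance, a sketch based on the call from earlier in the thread:

```r
library(OpenMx)

# Sketch: treat only status codes 0, 1, and 5 as acceptable,
# so that a status code 6 result triggers further retries.
SatFit <- mxTryHardOrdinal(
  SatModel,
  extraTries    = 15,
  OKstatuscodes = c(0, 1, 5)
)
```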

Another option is to drop the MxConstraint, fix the E variances to some constant (say, 1.0), and leave the A and C variances freely estimated. To do that without MxConstraints, you'd need to reparameterize your model using the "direct-symmetric" parameterization. For details on that, see this post. Even if you decide to stick with the MxConstraints, there are a number of good reasons to use the direct-symmetric parameterization instead of the Cholesky.

Thanks a lot for your reply!

I am a novice user and don't thoroughly understand the terms and theory of SEM and the Cholesky parameterization.

1. What I can tell from your suggestions is that it is the MxConstraints that induce the warning. So, I deleted the constraint line, and then no such warning was reported.

Actually, I don't understand why the demo code must constrain A+C+E=1, since without this constraint I can also get standardized h2, c2, e2 and rA, rC, rE through these lines:
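Such lines typically look something like the following sketch (the matrix names `A`, `C`, and `E` for the 2x2 variance-component matrices are assumptions, not necessarily the exact code used here):

```r
library(OpenMx)

# Hypothetical sketch of standardization algebras for a bivariate ACE model,
# assuming A, C, and E are the model's 2x2 variance-component matrices:
h2 <- mxAlgebra(diag2vec(A) / diag2vec(A + C + E), name = "h2")  # heritabilities
c2 <- mxAlgebra(diag2vec(C) / diag2vec(A + C + E), name = "c2")
e2 <- mxAlgebra(diag2vec(E) / diag2vec(A + C + E), name = "e2")
rA <- mxAlgebra(cov2cor(A), name = "rA")  # genetic correlation
rC <- mxAlgebra(cov2cor(C), name = "rC")  # shared-environmental correlation
rE <- mxAlgebra(cov2cor(E), name = "rE")  # nonshared-environmental correlation
```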

I am not sure if this is right.

But I get this warning: `Information matrix is not positive definite (not at a candidate optimum). Be suspicious of these results. At minimum, do not trust the standard errors.`

It seems that my model violates the assumptions of the Cholesky parameterization. So I turned to the "direct-symmetric" parameterization with these lines:

Similarly, I deleted the constraint A+C+E=1 from the demo code I found. And then I tried to fit AE-ACE, ACE-AE, and ACE-ACE without C overlap through these lines:

Ultimately, the results reported by the Cholesky parameterization and the direct-symmetric parameterization are similar. So, I decided to use the Cholesky parameterization and just delete the constraint A+C+E=1, as I mentioned above.

I'm not sure if I truly get your point, and I really look forward to your comments and suggestions.

Thanks again.

As I explained in your previous thread, your model won't be identified without some constraint on the variance of the latent "liability". If you delete the MxConstraint from your model, you need to add some other constraint to identify it. If you're using the direct-symmetric parameterization, then you could do (I'm assuming that both traits are threshold variables):
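A sketch of what that could look like, assuming the diagonal elements of the direct-symmetric E covariance matrix carry the labels "e11" and "e22" (the labels are hypothetical; substitute the model's actual ones):

```r
library(OpenMx)

# Sketch: fix the nonshared-environmental variances of both traits to 1.0.
# Labels "e11"/"e22" are assumed names for the E matrix's diagonal elements.
AceModel <- omxSetParameters(
  AceModel,
  labels = c("e11", "e22"),
  free   = FALSE,
  values = 1.0
)
```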

That will fix the nonshared-environmental variance in both traits to 1.0. The additive-genetic and shared-environmental variances will then be interpretable as the factor by which they compare to the nonshared-environmental variance. For example, suppose the A variance is estimated at 1.3 and the C variance is estimated at 0.9. The interpretation would be that the A variance is 30% greater than the E variance, and the C variance is 10% less than the E variance. You already know about the MxAlgebras to get "standardized" estimates.

Putting the MxConstraint back into your model would also suffice to identify it.

That's not so. Let me explain what the warning means. When any of OpenMx's 3 main optimizers thinks it has found a local minimum of the -2logL, it checks whether the gradient (the vector of first partial derivatives of the -2logL with respect to the free parameters), adjusted for MxConstraints if there are any, is sufficiently close to the origin (all zeroes). That is a first-order condition for the solution being a local minimum. If the gradient is not sufficiently close to the origin, OpenMx warns, with the warning you saw (status code 6). With all-continuous data, status code 6 is definitely a sign something is wrong. But with ordinal-threshold data, sometimes status code 6 is reported even when OpenMx has in fact found a local minimum. That's why `mxTryHardOrdinal()`'s defaults are what they are. The reason that status code 6 can be unavoidable is the limited numerical accuracy of the algorithm that calculates multivariate-normal probability integrals (as I explained in one of the posts I linked earlier in this thread). I only brought up the MxConstraint because it is true that MxConstraints make it harder for the optimizer to find a good solution.

The Cholesky parameterization merely ensures that the A, C, and E variance-covariance matrices are positive-definite. It doesn't say anything about the information matrix (which in most OpenMx cases is the matrix of second partial derivatives of the -2logL w/r/t the free parameters at the solution). Incidentally, that matrix being positive-definite is a second-order condition for the solution being a local minimum.

Thanks for your detailed explanation!

But I am still confused about why the demo code (code1, code2) for binary-binary variables doesn't fix the E variances?

Another question (maybe stupid) is that I don't know which package contains the functions `valDiag` and `labLower` used in the demo code, and I can't find any information about them. Thanks!

Oh, I see: I need to fix the E variance when the warning (status code 6) is reported; otherwise E can be freely estimated. Is that right?

And is the way I define the nested ACE models OK?

Thanks!

Putting the MxConstraint back into your model would also suffice to identify it.

Did you mean I can fix the E variance to 1 and at the same time put the constraint into my model? But they seem contradictory. Or should I constrain A+C+E to another value?

I'm so sorry for troubling you so many times!

Thanks, sincerely!

The interpretation of the parameter estimates is easier if the variances are constrained to 1.0, because (1) the A, C, and E variance components are then variance proportions (i.e., standardized), and (2) the off-diagonals of `V` are tetrachoric correlations.

They're from a collection of helper functions maintained by Hermine Maes (but they really ought to be turned into an R package).

As for your other questions: you need to either fix the E variance to 1, or reintroduce the MxConstraint, but not both. Whichever you choose, you need to do it in all of your models with this dataset, because otherwise the models won't be identified. If the way you're doing your nested models is relevant to your hypotheses / research questions, then it looks OK to me.

Thanks to your helpful suggestions, my code works fine now.

Best wishes to you!

Hello again!

I tried again to fit my model with `mxTryHard()` instead of `mxTryHardOrdinal()`, and the result was:

`eCTCToFit <- mxTryHard(eCTCToModel, intervals=F, extraTries=25, exhaustive=F, finetuneGradient=FALSE)`

`Retry limit reached; solution not found. Best fit=5350.5814 (started at 10097.602) (26 attempt(s): 26 valid, 0 errors)`

However, when I use `mxTryHardOrdinal()`, the solution can easily be found with only one attempt, although the estimated parameters are slightly different each time I run the same code. (PS: I run `set.seed(1)` before the call to `mxTryHardOrdinal()`, but the results are still inconsistent and I don't know why.)

My question is: can I trust the result of `mxTryHardOrdinal()`, even though `mxTryHard()` can't find a solution? (´･ᴗ･`)

You should look again at the man page for mxTryHard*. `mxTryHardOrdinal()` is merely a wrapper for `mxTryHard()` with different default argument values, tailored toward analyses of threshold variables. The underlying code is the same; the only things that differ are the defaults. In particular, by default, status codes 5 and 6 (both are status Red) are considered "acceptable" by `mxTryHardOrdinal()`, but not by `mxTryHard()`. I believe that would explain what you report.

Also note that the default for `mxTryHardOrdinal()` is `exhaustive=TRUE`. That's because sometimes it's nearly impossible to avoid status code 5 or 6 with threshold variables. So, since the usual tests for first- and second-order optimality conditions might not be reliable, the sensible thing to do is use up all of the extra attempts and return the solution with the smallest fitfunction value. I don't advise using `exhaustive=FALSE` unless you at least drop 6 from `OKstatuscodes`.

I'll refer you again to this post of mine. Have you tried lowering 'mvnRelEps' or increasing any of the 'mvnMaxPoints*' options? Also, if you're not using any MxConstraints, have you tried CSOLNP?
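A sketch of those adjustments (the factor of 5 and the doubling are arbitrary choices, not recommendations):

```r
library(OpenMx)

# Sketch: tighten the multivariate-normal integration, then switch optimizer.
mxOption(NULL, "mvnRelEps", mxOption(NULL, "mvnRelEps") / 5)          # lower tolerance
mxOption(NULL, "mvnMaxPointsA", 2 * mxOption(NULL, "mvnMaxPointsA"))  # more points
mxOption(NULL, "Default optimizer", "CSOLNP")  # only if the model has no MxConstraints
```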

Are you running `set.seed()` before running mxTryHard* every time?

It's really confusing. (T＿T)

I truly run `set.seed(1)` first whenever running `mxTryHardOrdinal()` and `mxTryHard()`, but the numbers in the fourth/fifth decimal places of the parameters are different; please see the picture. Is such slight inconsistency normal?

After I run this line:

`mxOption(NULL, "mvnRelEps", value=mxOption(NULL, key="mvnRelEps")/5)`

`mxTryHard()` can find solutions, but the results are still unstable. It might find solutions after different numbers of attempts, or sometimes it still shows "solution not found". I don't know if this is normal, too. So I plan to give up `mxTryHardOrdinal()`, since `mxTryHard()` also works and the estimated parameters are relatively stable (unless the parameters should be exactly the same whenever I run the same code).

And I need to set `exhaustive=T`, but I am still unsure about the number of `extraTries`. On the one hand, I don't want to spend a lot of time running models (especially since I need to fit about 10 nested models); on the other hand, I'm afraid that the small number of attempts I set will not be enough to find the best result. If you don't have other suggestions, I will set `extraTries=15`.

At last, maybe the issues I struggled with are really unnecessary. Really, thanks for your time!

Hello!

I wanted to calculate how much of the observed association between two phenotypes is due to A, C, and E, so I added these lines to the models:

Subsequently I found that if rA/rC/rE is negative, then pa/pc/pe might also be negative, but they represent proportions and should be positive and less than 1. I don't know how to explain this.

Thank you very much!

Hello, I have another question, again. (´•༝•`)

I aim to fit nested ACE models by dropping A/C/E step by step and to test whether these nested models result in a significant deterioration of fit (all these models are compared to the full ACE model).

As shown in the uploaded picture, A for smoke, A for overlap, and E for overlap can be dropped from the full ACE model, but the best-fitting model is the one with A for smoke and overlap dropped (model no. 15). If I choose to report the results of model no. 15, then the CI of rE (the E correlation) contains 0.

So, my question is which model I should choose: the best-fitting model (model no. 15), or the model dropping all the variances/covariances that can be dropped (model no. 16).

Thanks!

I'd say they're close enough.

If decreasing 'mvnRelEps' is enabling vanilla `mxTryHard()` to find an acceptable solution, that's a good sign.

If you're comfortable with `extraTries=15`, then that sounds OK to me.

If something isn't bounded to the interval [0,1], then logically it CAN'T be interpreted as a proportion. Where did you get that block of syntax, anyhow? It's not clear to me what the algebras are trying to compute.

If you're only going to base your conclusions on a single "best" model, then model 15 is the best according to AIC. But I don't encourage basing your conclusions on one "best" model unless the best is far and away superior to all the others. Instead, consider "multi-model inference". Take a look at the man pages for `omxAkaikeWeights()` and `mxModelAverage()`, and at a relevant publication of mine.

Hi! Thanks for your answers.

I calculated the proportions of rA/rC/rE contributing to rph following the formula in the picture. I also saw others calculate proportions in this way in their papers, such as Table 5 in this paper.
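For concreteness, assuming the formula in the picture is the standard decomposition rph = sqrt(h2_1)\*rA\*sqrt(h2_2) + sqrt(c2_1)\*rC\*sqrt(c2_2) + sqrt(e2_1)\*rE\*sqrt(e2_2), the issue can be illustrated with made-up numbers:

```r
# Hypothetical illustration: contributions of A, C, and E to rph.
h2 <- c(0.5, 0.4); c2 <- c(0.2, 0.3); e2 <- c(0.3, 0.3)  # made-up standardized variances
rA <- 0.6; rC <- -0.2; rE <- 0.3                         # made-up correlations

contribA <- sqrt(h2[1]) * rA * sqrt(h2[2])
contribC <- sqrt(c2[1]) * rC * sqrt(c2[2])
contribE <- sqrt(e2[1]) * rE * sqrt(e2[2])
rph <- contribA + contribC + contribE

p <- c(contribA, contribC, contribE) / rph  # "proportions" sum to 1 ...
p  # ... but the C element is negative here, because rC < 0
```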

OK, I see what the algebras are doing now.

So if rA/rC/rE is negative, then pa/pc/pe might also be negative, or even more than 1, to guarantee pa+pc+pe=1. In this case, do you know other ways to calculate pa/pc/pe?

Many thanks!

You have the correct formula in your MxAlgebras. But I'm not sure you're getting the point: there is no reason why pa, pc, or pe has to be bounded to the unit interval. I have explained that fact several times before on these forums. See this other thread, and in particular the links I embedded into this post.

Thank you very much! I see that in this case I should report the contributions to the phenotypic correlation rather than proportions.