# Why doesn't mxTryHardOrdinal() report start values?

Joined: 04/17/2019 - 08:58
Why doesn't mxTryHardOrdinal() report start values?

Hi, everyone!
When I run SatFit <- mxTryHardOrdinal(SatModel, intervals=TRUE, extraTries=15, finetuneGradient=FALSE, bestInitsOutput=TRUE), the result is: Solution found! Final fit=22505.957 (started at 22819.562) (16 attempt(s): 16 valid, 0 errors). I don't know why it doesn't report the start values from the best fit.

> mxVersion()
OpenMx version: 2.12.2 [GIT v2.12.2]
R version: R version 3.3.3 (2017-03-06)
Platform: x86_64-w64-mingw32
Default optimizer: SLSQP
NPSOL-enabled?: Yes
OpenMP-enabled?: No

Many thanks!

Joined: 04/17/2019 - 08:58
model or modelfit

Sorry, I have one more question. Maybe this question is weird, but I'm not sure which one is right, or are they all right?
1.

aFit <- mxRun(aModel, intervals=T)
bModel  <- mxModel( aModel, name="bModel" )
bModel  <- omxSetParameters( bModel, labels=c(), newlabels=c() )

2.

aFit <- mxRun(aModel, intervals=T)
bModel  <- mxModel(aFit, name="bModel" )
bModel  <- omxSetParameters( bModel, labels=c(), newlabels=c() )

3.

aFit <- mxRun(aModel, intervals=T)
bModel  <- omxSetParameters( aModel, labels=c(), newlabels=c(), name="bModel" )

4.

aFit <- mxRun(aModel, intervals=T)
bModel  <- omxSetParameters( aFit, labels=c(), newlabels=c(), name="bModel" )

Many thanks!

Joined: 01/24/2014 - 12:15
use silent=FALSE
I don't know why it doesn't report start values from the best fit.

Take a look at the man page for mxTryHard*. The entry for argument bestInitsOutput says:

Logical. If TRUE and if silent=FALSE, mxTryHard() displays the starting values that resulted in the best fit, according to format specified by paste argument. Defaults to TRUE.

But, the default value for silent is TRUE if running in an interactive session, and FALSE otherwise. So, set silent=FALSE.
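Putting that together, a minimal sketch of the call (assuming the SatModel object from the original post is already defined):

```r
library(OpenMx)

# Sketch, assuming 'SatModel' from the original post: with silent=FALSE,
# the start values behind the best fit are printed to the console.
SatFit <- mxTryHardOrdinal(SatModel, intervals = TRUE, extraTries = 15,
                           finetuneGradient = FALSE, bestInitsOutput = TRUE,
                           silent = FALSE)
```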

Joined: 04/17/2019 - 08:58
nested ACE model

Thank you very much, but I have encountered another problem and really need your help.
I am trying to fit a bivariate ACE model and nested ACE models. My script is as follows:

baseACE <- mxModel('Base',
mxMatrix( type="Full", nrow=2, ncol=2, free=T, values=svb,labels=labeta, name="Beta"),
mxMatrix( type="Zero", nrow=1, ncol=2, name="meanG" ),
### Their squared terms
mxAlgebra( a%*%t(a) , name='A'),
mxAlgebra( c%*%t(c) , name='C'),
mxAlgebra( e%*%t(e) , name='E'),
# Matrices generated to hold A, C, and E components and total Variance
mxAlgebra( expression=A+C+E, name="V" ),
# Algebra to compute total variances and standard deviations (diagonal only)
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),
# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2"),
#std <- mxAlgebra(expression=cbind(iSD %*% a,iSD %*% c,iSD %*% e),name="std")
mxAlgebra(expression=iSD %*% a,name="stda"),
mxAlgebra(expression=iSD %*% c,name="stdc"),
mxAlgebra(expression=iSD %*% e,name="stde"),
# Constraint on total variance of Ordinal variables (A+C+E=1)
mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" )
)

modelMZ <- mxModel( "MZ",
mxMatrix( type="Full", nrow=2, ncol=2, free=F, label=c("data.age1","data.age2","data.region_type1","data.region_type2"), name="MZDefVars"),
mxAlgebra( expression= cbind(Base.meanG + MZDefVars[1,] %*% Base.Beta,Base.meanG + MZDefVars[2,] %*% Base.Beta), name="expMeanMZ" ),
mxMatrix( type="Full", nrow=nth, ncol=ntv, free=T, values=svTh, labels=lathmz, name="ThMZ"),
mxAlgebra( expression= rbind( cbind(Base.A+Base.C+Base.E , Base.A+Base.C),
cbind(Base.A+Base.C        , Base.A+Base.C+Base.E)), name="expCovMZ" ),
mxData( observed=mzData, type="raw" ),
mxExpectationNormal( covariance="expCovMZ", means="expMeanMZ", dimnames=selVars, thresholds="ThMZ" ),
mxFitFunctionML()
)

modelDZ <- mxModel( "DZ",
mxMatrix( type="Full", nrow=2, ncol=2, free=F, label=c("data.age1","data.age2","data.region_type1","data.region_type2"), name="DZDefVars"),
mxAlgebra( expression= cbind(Base.meanG + DZDefVars[1,] %*% Base.Beta,Base.meanG + DZDefVars[2,] %*% Base.Beta), name="expMeanDZ" ),
mxMatrix( type="Full", nrow=nth, ncol=ntv, free=T, values=svTh,labels=lathdz, name="ThDZ"),
mxAlgebra( expression= rbind( cbind(Base.A+Base.C+Base.E  , 0.5%x%Base.A+Base.C),
cbind(0.5%x%Base.A+Base.C   , Base.A+Base.C+Base.E)), name="expCovDZ" ),
mxData( observed=dzData, type="raw" ),
mxExpectationNormal( covariance="expCovDZ", means="expMeanDZ", dimnames=selVars, thresholds="ThDZ" ),
mxFitFunctionML()
)

Conf1       <- mxCI (c ('Base.h2','Base.c2','Base.e2'))
Conf2       <- mxCI (c ('Base.Rph[2,1]', 'Base.Ra[2,1]', 'Base.Rc[2,1]', 'Base.Re[2,1]') )
AceModel    <- mxModel( "ACE", baseACE,modelMZ, modelDZ, Conf1, Conf2,
mxFitFunctionMultigroup(c('MZ.fitfunction','DZ.fitfunction')))
# ------------------------------------------------------------------------------
#RUN AceModel
AceFit   <- mxTryHardOrdinal(AceModel, intervals=F,extraTries = 15,exhaustive=F)
(AceSumm  <- summary(AceFit))
round(AceFit@output$estimate,4)
mxCompare(SatFit,AceFit)
# ------------------------------------------------------------------------------------------
# RUN SUBMODELS

# Test Significance of Covariates
dropbace  <- omxSetParameters( AceModel, label=labeta, free=FALSE, values=0,name="dropbace" )
dropbaceFit  <- mxTryHardOrdinal(dropbace, intervals=F,extraTries = 15,exhaustive=F)
(dropbaceSumm  <- summary(dropbaceFit))
mxCompare( AceFit, dropbaceFit )

#C for p1 dropped AE-ACE
dropcp1 <- omxSetParameters(AceModel,labels=c("c11","c12"),free=F,values=0,name="dropcp1")
dropcp1Fit  <- mxTryHardOrdinal(dropcp1, intervals=F,extraTries = 15,exhaustive=F)
(dropcp1Summ <- summary(dropcp1Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit))  #no#

#C for p2 dropped ACE-AE
dropcp2 <- omxSetParameters(AceModel,labels=c("c12","c22"),free=F,values=0,name="dropcp2") #not sure#
dropcp2Fit  <- mxTryHardOrdinal(dropcp2, intervals=F,extraTries = 15,exhaustive=F)
(dropcp2Summ <- summary(dropcp2Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit,dropcp2Fit)) #no#

#C for p1p2 overlap dropped
dropc12 <- omxSetParameters(AceModel,labels="c12",free=F,values=0,name="dropc12")
dropc12Fit  <- mxTryHardOrdinal(dropc12, intervals=F,extraTries = 15,exhaustive=F)
(dropc12Summ <- summary(dropc12Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit,dropcp2Fit,dropc12Fit)) #yes#

When I fit AceModel, there are no warnings or errors. But when I fit the dropcp2 model with the line
dropcp2 <- omxSetParameters(AceModel, labels=c("c22"), free=F, values=0, name="dropcp2"),
it reports a warning:

Solution found! Final fit=22505.98 (started at 24091.038) (1 attempt(s): 1 valid, 0 errors)

Warning message:
In mxTryHard(model = model, greenOK = greenOK, checkHess = checkHess, :
The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)

The same situation emerges when I fit the dropc12 model. I don't know how to fix it.

Many thanks!

Joined: 01/24/2014 - 12:15
I'll first refer you to my

I'll first refer you to my answer to a recent question similar to yours. So, read that first.

For your case in particular, I have the following suggestions:

• The default for mxTryHardOrdinal() is to accept status code 6 (which is what the warning you saw was about). If you don't want status code 6 to be considered acceptable, pass OKstatuscodes=c(0,1,5) as argument to mxTryHardOrdinal().
• You're using an MxConstraint to identify your model. MxConstraints do make optimization more difficult. You can avoid using them in this kind of model by instead fixing the E variances to some constant (say, 1.0), and leaving the A and C variances freely estimated. To do that without MxConstraints, you'd need to reparameterize your model using the "direct-symmetric" parameterization. For details on that, see this post. Even if you decide to stick with the MxConstraints, there are a number of good reasons to use the direct-symmetric parameterization instead of the Cholesky.
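For instance, the first suggestion would look like this sketch (reusing the dropcp2 model defined earlier in the thread):

```r
library(OpenMx)

# Sketch: accept only status codes 0, 1, and 5, so that a status-6 result
# (first-order optimality warning) triggers further retries instead of
# being returned as the final solution.
dropcp2Fit <- mxTryHardOrdinal(dropcp2, intervals = FALSE, extraTries = 15,
                               OKstatuscodes = c(0, 1, 5))
```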
Joined: 04/17/2019 - 08:58
remove the constraint

I am a novice user and don't understand the terms and theories of SEM and Cholesky thoroughly.

1. What I can tell from your suggestions is that it is MxConstraints that induce the warning. So, I deleted the lines:

mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" )

and then no such warnings were reported.

Actually, I don't understand why the demo codes must constrain A+C+E=1, since without this constraint I can also get the standardized h2, c2, e2 and rA, rC, rE through these lines:

 mxAlgebra( expression=A+C+E, name="V" ),
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),
# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2"),
mxAlgebra(expression=iSD %*% a,name="stda"),
mxAlgebra(expression=iSD %*% c,name="stdc"),
mxAlgebra(expression=iSD %*% e,name="stde")

I am not sure if this is right.

2. Another thing I have become aware of is that the Cholesky parameterization constrains the A, C, and E covariance matrices to be positive-definite, while the direct-symmetric parameterization does not. What's more, when I use the Cholesky parameterization, the summary of the model reports:

Information matrix is not positive definite (not at a candidate optimum).
Be suspicious of these results. At minimum, do not trust the standard errors.

It seems that my model violates the assumption of the Cholesky parameterization. So I turned to the "direct-symmetric" parameterization with these lines:

baseACE <- mxModel('Base',
mxMatrix( type="Full", nrow=2, ncol=2, free=T, values=svb,labels=labeta, name="Beta"),
mxMatrix( type="Zero", nrow=1, ncol=2, name="meanG" ), # or type="Zero"

# Create Matrices for Variance Components
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("A",nv), name="A" ),
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("C",nv), name="C" ),
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("E",nv), name="E" ),

# Matrices generated to hold A, C, and E components and total Variance
mxAlgebra( expression=A+C+E, name="V" ),

# Algebra to compute total variances and standard deviations (diagonal only)
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),

# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2")

)

Similarly, I deleted the constraint of A+C+E=1 from the demo codes I found. Then I tried to fit AE-ACE, ACE-AE, and ACE-ACE without C overlap through these lines:

#C for p1 dropped,
dropcp1 <- omxSetParameters(AceModel,labels="Base.C[1,1]",free=F,values=0,name="dropcp1")
#C for p2 dropped,
dropcp2 <- omxSetParameters(AceModel,labels="Base.C[2,2]",free=F,values=0,name="dropcp2")
#C for p1p2 overlap dropped,
dropcp1p2 <- omxSetParameters(AceModel,labels="Base.C[1,2]",free=F,values=0,name="dropcp1p2")

Ultimately, the results reported by the Cholesky parameterization and the direct-symmetric parameterization are similar. So, I decided to use the Cholesky parameterization and just delete the constraint of A+C+E=1, as I mentioned above.

I'm not sure if I truly got your point, and I really look forward to your comments and suggestions.

Thanks again.

Joined: 01/24/2014 - 12:15

As I explained in your previous thread, your model won't be identified without some constraint on the variance of the latent "liability". If you delete the MxConstraint from your model, you need to add some other constraint to identify it. If you're using the direct-symmetric parameterization, then you could do (I'm assuming that both traits are threshold variables):

#Start value for off-diagonal is arbitrary
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=c(F,T,F), values=c(1, 0.1, 1), label=labLower("E",nv), name="E" )

That will fix the nonshared-environmental variance in both traits to 1.0. The additive-genetic and shared-environmental variances will then be interpretable as the factor by which they compare to the nonshared-environmental variance. For example, suppose the A variance is estimated at 1.3 and the C variance is estimated at 0.9. The interpretation would be that the A variance is 30% greater than the E variance, and the C variance is 10% less than the E variance. You already know about the MxAlgebras to get "standardized" estimates.

Putting the MxConstraint back into your model would also suffice to identify it.

1. What I can tell from your suggestions is that it is MxConstraints that induce the warning.

That's not so. Let me explain what the warning means. When any of OpenMx's 3 main optimizers thinks it's found a local minimum of the -2logL, it checks to see if the gradient (the vector of first partial derivatives of the -2logL with respect to the free parameters)--adjusted for MxConstraints if there are any--is sufficiently close to the origin (all zeroes). That is a first-order condition for the solution being a local minimum. If the gradient is not sufficiently close to the origin, OpenMx warns, with the warning you saw (status code 6). With all-continuous data, status code 6 is definitely a sign something is wrong. But with ordinal-threshold data, sometimes status code 6 is reported even when OpenMx has found a local minimum. That's why mxTryHardOrdinal()'s defaults are what they are. The reason that status code 6 can be unavoidable is the limited numerical accuracy of the algorithm that calculates multivariate-normal probability integrals (as I explained in one of the posts I linked earlier in this thread). I only brought up the MxConstraint because it is true that MxConstraints make it harder for the optimizer to find a good solution.

Information matrix is not positive definite (not at a candidate optimum).
Be suspicious of these results. At minimum, do not trust the standard errors.

It seems that my model violate the assumption of Cholesky parameterization. So I turn to use the "direct-symmetric" parameterization with such lines:

The Cholesky parameterization merely ensures that the A, C, and E variance-covariance matrices are positive-definite. It doesn't say anything about the information matrix (which in most OpenMx cases is the matrix of second partial derivatives of the -2logL w/r/t the free parameters at the solution). Incidentally, that matrix being positive-definite is a second-order condition for the solution being a local minimum.

Joined: 04/17/2019 - 08:58

1. But I am still confused about why the demo codes (code1, code2) for binary-binary variables don't fix the E variances?

2. Another question (maybe stupid) is that I don't know which package contains the functions valDiag and labLower used in the demo codes, and I can't find any information about these functions.

Thanks!

Joined: 04/17/2019 - 08:58
oh, I see, I need to fix E

Oh, I see: I need to fix the E variance when the warning (status code 6) is reported; otherwise E can be freely estimated. Is that right?

And is the way I define the nested ACE models OK?

Thanks!

Joined: 04/17/2019 - 08:58
Putting the MxConstraint back

Putting the MxConstraint back into your model would also suffice to identify it.

Did you mean I can fix the E variances to 1, and at the same time put

mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" ),

into my model? But they seem contradictory. Or should I constrain A+C+E to another value?
I'm so sorry for troubling you so many times!

Thanks, sincerely!

Joined: 01/24/2014 - 12:15
model identification again
1. But I am still confused about why the codes (code1, code2) for binary-binary variables don't fix the E variances?

The interpretation of the parameter estimates is easier if the variances are constrained to 1.0, because (1) the A, C, and E variance components are variance proportions (i.e., standardized), and (2) the off-diagonals of V are tetrachoric correlations.

2. Another question (maybe stupid) is that I don't know which package contains the functions **valDiag** and **labLower** used in the demo codes, and I can't find any information about these functions.

They're from a collection of helper functions maintained by Hermine Maes (but they really ought to be turned into an R package).

As for your other questions, you need to either fix the E variance to 1, or reintroduce the MxConstraint, but not both. Whichever you choose, you need to do in all of your models with this dataset, because otherwise, the models won't be identified. If the way you're doing your nested models is relevant to your hypotheses / research questions, then it looks OK to me.

Joined: 04/17/2019 - 08:58
Many many thanks!

Best wishes to you!

Joined: 04/17/2019 - 08:58
Hello again!

Hello again!

I tried again to fit my model with mxTryHard() instead of mxTryHardOrdinal(), and the result was:

Retry limit reached; solution not found. Best fit=5350.5814 (started at 10097.602) (26 attempt(s): 26 valid, 0 errors)

However, when I use mxTryHardOrdinal(), the solution can easily be found with only one attempt, although the estimated parameters are slightly different each time I run the same code. (PS: I use set.seed(1) before the call to mxTryHardOrdinal(), but the results are still inconsistent and I don't know why.)

My question is: can I trust the result of mxTryHardOrdinal(), even though mxTryHard() can't find a solution?

(´･ᴗ･)

Joined: 01/24/2014 - 12:15
different defaults

You should look again at the man page for mxTryHard*. mxTryHardOrdinal() is merely a wrapper to mxTryHard() with different default argument values, tailored toward analyses of threshold variables. The underlying code is the same. The only thing that differs are the defaults. In particular, by default, status codes 5 and 6 (both are status Red) are considered "acceptable" with mxTryHardOrdinal(), but not with mxTryHard(). I believe that would explain what you report.

Also note that the default for mxTryHardOrdinal() is exhaustive=TRUE. That's because sometimes it's nearly impossible to avoid status code 5 or 6 with threshold variables. So, since the usual tests for first- and second-order optimality conditions might not be reliable, the sensible thing to do is use up all of the extra attempts, and return the solution with the smallest fitfunction value. I don't advise using exhaustive=FALSE unless you at least drop 6 from OKstatuscodes.

I'll refer you again to this post of mine. Have you tried lowering 'mvnRelEps' or increasing any of the 'mvnMaxPoints*' options? Also, if you're not using any MxConstraints, have you tried CSOLNP?
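As a sketch, those two suggestions could be tried like this (the mvnRelEps line mirrors the one used later in this thread; switching to CSOLNP is only advisable if no MxConstraints are present):

```r
library(OpenMx)

# Tighten the relative tolerance of the multivariate-normal probability
# integrals, then make CSOLNP the default optimizer.
mxOption(NULL, "mvnRelEps", mxOption(NULL, "mvnRelEps") / 5)
mxOption(NULL, "Default optimizer", "CSOLNP")
```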

(PS: I use set.seed(1) before the call to mxTryHardOrdinal(), but the results are still inconsistent and I don't know why.)

Are you running set.seed() before running mxTryHard* every time?
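That is, something like this sketch, re-seeding immediately before each attempt so the random perturbations of the start values are reproducible (AceModel here stands in for whichever model is being fitted):

```r
library(OpenMx)

# Sketch, assuming 'AceModel' from earlier in the thread:
set.seed(1)
fit1 <- mxTryHardOrdinal(AceModel, extraTries = 15)

set.seed(1)  # re-seed before EVERY mxTryHard* call, not just once per session
fit2 <- mxTryHardOrdinal(AceModel, extraTries = 15)
```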

Joined: 04/17/2019 - 08:58
It's really confusing.(T＿T)

It's really confusing.(T＿T)

1. I truly run set.seed(1) first whenever running mxTryHardOrdinal() and mxTryHard(), but the numbers in the fourth/fifth decimal places of the parameters are different; please see the picture. Is such slight inconsistency normal?

2. After I run this line:
mxOption( NULL, "mvnRelEps", value= mxOption(NULL, key="mvnRelEps")/5)
mxTryHard() can find solutions, but the results are still unstable. It might find a solution after a different number of attempts, or sometimes it still shows "solution not found". I don't know if this is normal, too.

> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)

Retry limit reached; solution not found.  Best fit=5350.5775 (started at 9974.4539)  (11 attempt(s): 11 valid, 0 errors)

> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)

Solution found!  Final fit=5350.5776 (started at 9974.4539)  (5 attempt(s): 5 valid, 0 errors)

> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)

Solution found!  Final fit=5350.5776 (started at 9974.4539)  (1 attempt(s): 1 valid, 0 errors)
3. Now it seems that I can still trust the results of mxTryHardOrdinal(), since mxTryHard() also works and the estimated parameters are relatively stable (unless the parameters should be exactly the same whenever I run the same code).

And I need to set exhaustive=T, but I'm still unsure about the number of extraTries. I don't want to spend a lot of time running models (especially since I need to fit about 10 nested models), but on the other hand I'm afraid that too small a number of attempts will not be enough to find the best result. If you don't have other suggestions, I will set extraTries=15.

Finally, maybe the issues I struggled with are really unnecessary. Really, thanks for your time!

Joined: 04/17/2019 - 08:58
the proportion of ace influences

Hello!

I wanted to calculate how much of the observed association between two phenotypes is due to A, C, and E, and I added these lines to the models:

     # the proportion of ace influences contributing to the overlap
mxAlgebra( expression=sqrt(h2[1,1])*sqrt(h2[2,2])*Ra[1,2]/Rph[1,2],name = "pa"),
mxAlgebra( expression=sqrt(c2[1,1])*sqrt(c2[2,2])*Rc[1,2]/Rph[1,2],name = "pc"),
mxAlgebra( expression=sqrt(e2[1,1])*sqrt(e2[2,2])*Re[1,2]/Rph[1,2],name = "pe") 

Subsequently I found that if Ra/Rc/Re is negative, then pa/pc/pe might also be negative, but they represent proportions and should be positive and less than 1. I don't know how to explain this.

Thank you very much!

Joined: 04/17/2019 - 08:58
I should choose which nested ACE model

Hello, I have another question, again. (´•༝•)

I aim to fit nested ACE models by dropping A/C/E step by step and to test if these nested models result in a significant deterioration of the fit(all these models are compared to full ACE model).

As shown in the uploaded picture, A for smoke, A for the overlap, and E for the overlap can be dropped from the full ACE model, but the best-fitting model is the model with A for smoke and the overlap dropped (model No. 15). If I choose to report the results of model No. 15, then the CI of rE (the E correlation) contains 0.

So, my question is which model I should choose: the best-fitting model (model No. 15), or the model dropping all the variances/covariances that can be dropped (model No. 16)?

Thanks!

Joined: 01/24/2014 - 12:15
I truly run set.seed(1) first whenever running mxTryHardOrdinal() and mxTryHard(), but the numbers in the fourth/fifth decimal places of the parameters are different; please see the picture. Is such slight inconsistency normal?

I'd say they're close enough.

After I run this line:
mxOption( NULL, "mvnRelEps", value= mxOption(NULL, key="mvnRelEps")/5)
mxTryHard() can find solutions, but the results are still unstable. It might find a solution after a different number of attempts, or sometimes it still shows "solution not found". I don't know if this is normal, too.

If decreasing 'mvnRelEps' is enabling vanilla mxTryHard() to find an acceptable solution, that's a good sign.

And I need to set exhaustive=T, but I'm still unsure about the number of extraTries. I don't want to spend a lot of time running models (especially since I need to fit about 10 nested models), but on the other hand I'm afraid that too small a number of attempts will not be enough to find the best result. If you don't have other suggestions, I will set extraTries=15.

If you're comfortable with extraTries=15, then that sounds OK to me.

Subsequently I found that if Ra/Rc/Re is negative, then pa/pc/pe might also be negative, but they represent proportions and should be positive and less than 1. I don't know how to explain this.

If something isn't bounded to the interval [0,1], then logically, it CAN'T be interpreted as a proportion. Where did you get that block of syntax, anyhow? It's not clear to me what the algebras are trying to compute.

So, my question is which model I should choose: the best-fitting model (model No. 15), or the model dropping all the variances/covariances that can be dropped (model No. 16)?

If you're only going to base your conclusions on a single "best" model, then model 15 is the best according to AIC. But, I don't encourage basing your conclusions on one "best" model unless the best is far and away superior to all the others. Instead, consider "multi-model inference". Take a look at the man page for omxAkaikeWeights and mxModelAverage(), and at a relevant publication of mine.
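A minimal sketch of the Akaike-weights approach, reusing the fitted submodels from earlier in this thread:

```r
library(OpenMx)

# Compare all candidate models at once via their Akaike weights,
# rather than committing to a single "best" model.
fits <- list(AceFit, dropbaceFit, dropcp1Fit, dropcp2Fit, dropc12Fit)
omxAkaikeWeights(models = fits)
```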

Joined: 04/17/2019 - 08:58

I calculated the proportions of rA/rC/rE contributing to rph following the formula in the picture. I also saw others calculate proportions in this way in their papers, such as Table 5 in this paper.

Joined: 01/24/2014 - 12:15
OK, I see what the algebras

OK, I see what the algebras are doing now.

Joined: 04/17/2019 - 08:58
do you know other ways to calculate pa/pc/pe?

So if Ra/Rc/Re is negative, then pa/pc/pe might also be negative, or even greater than 1, so that pa+pc+pe=1 still holds. In this case, do you know other ways to calculate pa/pc/pe?

Many thanks!

Joined: 01/24/2014 - 12:15
You have the correct formula

You have the correct formula in your MxAlgebras. But I'm not sure you're getting the point. There is no reason why pa, pc, or pe have to be bounded to the unit interval. I have explained that fact several times before on these forums. See this other thread, and in particular, the links I embedded into this post in particular.

Joined: 04/17/2019 - 08:58
Thank you very much! I see

Thank you very much! I see that in this case I should report the contribution to the phenotypic correlation rather than the proportion.