Why doesn't mxTryHardOrdinal() report start values?
Posted by diana
Hi, everyone!
When I use
SatFit <- mxTryHardOrdinal(SatModel, intervals=TRUE, extraTries=15, finetuneGradient=FALSE, bestInitsOutput=TRUE)
the result is:
Solution found! Final fit=22505.957 (started at 22819.562) (16 attempt(s): 16 valid, 0 errors)
I don't know why it doesn't report the start values from the best fit.
> mxVersion()
OpenMx version: 2.12.2 [GIT v2.12.2]
R version: R version 3.3.3 (2017-03-06)
Platform: x86_64-w64-mingw32
Default optimizer: SLSQP
NPSOL-enabled?: Yes
OpenMP-enabled?: No
Many thanks!
model or modelfit
Also, when creating a new model bModel by modifying parameters, should I start from the model (aModel) or from the fitted model (aFit)? Which of the following is correct?
1.
aFit <- mxRun(aModel, intervals=T)
bModel <- mxModel( aModel, name="bModel" )
bModel <- omxSetParameters( bModel, labels=c(), newlabels=c() )
2.
aFit <- mxRun(aModel, intervals=T)
bModel <- mxModel( aFit, name="bModel" )
bModel <- omxSetParameters( bModel, labels=c(), newlabels=c() )
3.
aFit <- mxRun(aModel, intervals=T)
bModel <- omxSetParameters( aModel, labels=c(), newlabels=c(), name="bModel" )
4.
aFit <- mxRun(aModel, intervals=T)
bModel <- omxSetParameters( aFit, labels=c(), newlabels=c(), name="bModel" )
Many thanks!
use silent=FALSE
Take a look at the man page for mxTryHard*, specifically the entry for argument `bestInitsOutput`: the start values from the best fit are printed to the console, but only if output is not silenced. And the default value for `silent` is `TRUE` if running in an interactive session, and `FALSE` otherwise. So, set `silent=FALSE`.
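For example, a minimal sketch of such a call, simply reusing the model object and settings from the original post with `silent=FALSE` added:
# With silent=FALSE, the start values that produced the best fit are printed to the console
SatFit <- mxTryHardOrdinal(SatModel, intervals=TRUE, extraTries=15,
                           finetuneGradient=FALSE, bestInitsOutput=TRUE, silent=FALSE)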
In reply to use silent=FALSE by AdminRobK
nested ACE model
I am trying to fit a bivariate ACE model and nested ACE models. My script is as follows:
baseACE <- mxModel('Base',
mxMatrix( type="Full", nrow=2, ncol=2, free=T, values=svb,labels=labeta, name="Beta"),
mxMatrix( type="Zero", nrow=1, ncol=2, name="meanG" ),
mxMatrix(type='Lower',ncol=nv,nrow=nv,free=T,values=c(.6,2,2),labels=c('a11','a12','a22'),lbound=lbPaD,name='a'),
mxMatrix(type='Lower',ncol=nv,nrow=nv,free=T,values=c(1,0,2),labels=c('c11','c12','c22'),lbound=lbPaD,name='c'),
mxMatrix(type='Lower',ncol=nv,nrow=nv,free=T,values=c(.6,2,2),labels=c('e11','e12','e22'),lbound=lbPaD,name='e'),
### Their squared terms
mxAlgebra( a%*%t(a) , name='A'),
mxAlgebra( c%*%t(c) , name='C'),
mxAlgebra( e%*%t(e) , name='E'),
# Matrices generated to hold A, C, and E components and total Variance
mxAlgebra( expression=A+C+E, name="V" ),
# Algebra to compute total variances and standard deviations (diagonal only)
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),
# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2"),
#std <- mxAlgebra(expression=cbind(iSD %*% a,iSD %*% c,iSD %*% e),name="std")
mxAlgebra(expression=iSD %*% a,name="stda"),
mxAlgebra(expression=iSD %*% c,name="stdc"),
mxAlgebra(expression=iSD %*% e,name="stde"),
# Constraint on total variance of Ordinal variables (A+C+E=1)
mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" )
)
modelMZ <- mxModel( "MZ",
mxMatrix( type="Full", nrow=2, ncol=2, free=F, label=c("data.age1","data.age2","data.region_type1","data.region_type2"), name="MZDefVars"),
mxAlgebra( expression= cbind(Base.meanG + MZDefVars[1,] %*% Base.Beta,Base.meanG + MZDefVars[2,] %*% Base.Beta), name="expMeanMZ" ),
mxMatrix( type="Full", nrow=nth, ncol=ntv, free=T, values=svTh, labels=lathmz, name="ThMZ"),
mxAlgebra( expression= rbind( cbind(Base.A+Base.C+Base.E , Base.A+Base.C),
cbind(Base.A+Base.C , Base.A+Base.C+Base.E)), name="expCovMZ" ),
mxData( observed=mzData, type="raw" ),
mxExpectationNormal( covariance="expCovMZ", means="expMeanMZ", dimnames=selVars, thresholds="ThMZ" ),
mxFitFunctionML()
)
modelDZ <- mxModel( "DZ",
mxMatrix( type="Full", nrow=2, ncol=2, free=F, label=c("data.age1","data.age2","data.region_type1","data.region_type2"), name="DZDefVars"),
mxAlgebra( expression= cbind(Base.meanG + DZDefVars[1,] %*% Base.Beta,Base.meanG + DZDefVars[2,] %*% Base.Beta), name="expMeanDZ" ),
mxMatrix( type="Full", nrow=nth, ncol=ntv, free=T, values=svTh,labels=lathdz, name="ThDZ"),
mxAlgebra( expression= rbind( cbind(Base.A+Base.C+Base.E , 0.5%x%Base.A+Base.C),
cbind(0.5%x%Base.A+Base.C , Base.A+Base.C+Base.E)), name="expCovDZ" ),
mxData( observed=dzData, type="raw" ),
mxExpectationNormal( covariance="expCovDZ", means="expMeanDZ", dimnames=selVars, thresholds="ThDZ" ),
mxFitFunctionML()
)
Conf1 <- mxCI (c ('Base.h2','Base.c2','Base.e2'))
Conf2 <- mxCI (c ('Base.Rph[2,1]', 'Base.Ra[2,1]', 'Base.Rc[2,1]', 'Base.Re[2,1]') )
AceModel <- mxModel( "ACE", baseACE,modelMZ, modelDZ, Conf1, Conf2,
mxFitFunctionMultigroup(c('MZ.fitfunction','DZ.fitfunction')))
# ------------------------------------------------------------------------------
#RUN AceModel
AceFit <- mxTryHardOrdinal(AceModel, intervals=F,extraTries = 15,exhaustive=F)
(AceSumm <- summary(AceFit))
round(AceFit@output$estimate,4)
mxCompare(SatFit,AceFit)
# ------------------------------------------------------------------------------------------
# RUN SUBMODELS
# Test Significance of Covariates
dropbace <- omxSetParameters( AceModel, label=labeta, free=FALSE, values=0,name="dropbace" )
dropbaceFit <- mxTryHardOrdinal(dropbace, intervals=F,extraTries = 15,exhaustive=F)
(dropbaceSumm <- summary(dropbaceFit))
mxCompare( AceFit, dropbaceFit )
#C for p1 dropped AE-ACE
dropcp1 <- omxSetParameters(AceModel,labels=c("c11","c12"),free=F,values=0,name="dropcp1")
dropcp1Fit <- mxTryHardOrdinal(dropcp1, intervals=F,extraTries = 15,exhaustive=F)
(dropcp1Summ <- summary(dropcp1Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit)) #no#
#C for p2 dropped ACE-AE
dropcp2 <- omxSetParameters(AceModel,labels=c("c12","c22"),free=F,values=0,name="dropcp2") #not sure#
dropcp2Fit <- mxTryHardOrdinal(dropcp2, intervals=F,extraTries = 15,exhaustive=F)
(dropcp2Summ <- summary(dropcp2Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit,dropcp2Fit)) #no#
#C for p1p2 overlap dropped
dropc12 <- omxSetParameters(AceModel,labels="c12",free=F,values=0,name="dropc12")
dropc12Fit <- mxTryHardOrdinal(dropc12, intervals=F,extraTries = 15,exhaustive=F)
(dropc12Summ <- summary(dropc12Fit))
mxCompare(AceFit,c(dropbaceFit,dropcp1Fit,dropcp2Fit,dropc12Fit)) #yes#
When I fit **AceModel**, there are no warnings or errors. But when I fit the **dropcp2** model with the line
dropcp2 <- omxSetParameters(AceModel,labels=c("c22"),free=F,values=0,name="dropcp2")
it reports this warning:
Solution found! Final fit=22505.98 (started at 24091.038) (1 attempt(s): 1 valid, 0 errors)
Warning message:
In mxTryHard(model = model, greenOK = greenOK, checkHess = checkHess, :
The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
The same situation arises when I fit the **dropc12** model. I don't know how to fix it.
Many thanks!
In reply to nested ACE model by diana
I'll first refer you to my earlier post on this topic.
For your case in particular: one of the defaults of mxTryHardOrdinal() is to accept status code 6 (which is what the warning you saw was about). If you don't want status code 6 to be considered acceptable, pass OKstatuscodes=c(0,1,5) as an argument to mxTryHardOrdinal().
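For instance, a minimal sketch of such a call (the other argument values are simply carried over from the earlier posts in this thread):
# Treat only status codes 0, 1, and 5 as acceptable, so code 6 is no longer "OK"
AceFit <- mxTryHardOrdinal(AceModel, intervals=F, extraTries=15, OKstatuscodes=c(0,1,5))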
In reply to I'll first refer you to my by AdminRobK
remove the constraint
I am a novice user and don't understand the terms and theories of SEM and Cholesky thoroughly.
1. What I can tell from your suggestions is that it is **MxConstraints** that induce the warning. So, I deleted these lines:
mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" )
and then no such warning is reported.
Actually, I don't understand why the demo code must constrain A+C+E=1, since without this constraint I can still get the standardized h2, c2, e2 and rA, rC, rE through these lines:
mxAlgebra( expression=A+C+E, name="V" ),
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),
# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2"),
mxAlgebra(expression=iSD %*% a,name="stda"),
mxAlgebra(expression=iSD %*% c,name="stdc"),
mxAlgebra(expression=iSD %*% e,name="stde")
I am not sure if this is right.
2. Another thing I have become aware of is that the Cholesky parameterization constrains the A, C, and E covariance matrices to be positive-definite, and the direct-symmetric parameterization does not. What's more, when I use the Cholesky parameterization, the model summary reports:
**Information matrix is not positive definite (not at a candidate optimum).
Be suspicious of these results. At minimum, do not trust the standard errors.**
It seems that my model violates the assumptions of the Cholesky parameterization, so I turned to the "direct-symmetric" parameterization with these lines:
baseACE <- mxModel('Base',
mxMatrix( type="Full", nrow=2, ncol=2, free=T, values=svb,labels=labeta, name="Beta"),
mxMatrix( type="Zero", nrow=1, ncol=2, name="meanG" ), # or type="Zero"
# Create Matrices for Variance Components
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("A",nv), name="A" ),
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("C",nv), name="C" ),
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("E",nv), name="E" ),
# Matrices generated to hold A, C, and E components and total Variance
mxAlgebra( expression=A+C+E, name="V" ),
# Algebra to compute total variances and standard deviations (diagonal only)
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*V)), name="iSD"),
mxAlgebra( expression=solve(sqrt(I*V))%&%V, name ="Rph" ),
mxAlgebra( expression=solve(sqrt(I*A))%&%A, name ="Ra" ), #cov2cor()
mxAlgebra( expression=solve(sqrt(I*C))%&%C, name ="Rc" ),
mxAlgebra( expression=solve(sqrt(I*E))%&%E, name ="Re" ),
# Algebra to compute standardized variance components
mxAlgebra( expression=A/V, name="h2"),
mxAlgebra( expression=C/V, name="c2"),
mxAlgebra( expression=E/V, name="e2")
)
Similarly, I **deleted the constraint A+C+E=1** from the demo code I found. Then I tried to fit AE-ACE, ACE-AE, and ACE without C overlap through these lines:
#C for p1 dropped,
dropcp1 <- omxSetParameters(AceModel,labels="Base.C[1,1]",free=F,values=0,name="dropcp1")
#C for p2 dropped,
dropcp2 <- omxSetParameters(AceModel,labels="Base.C[2,2]",free=F,values=0,name="dropcp2")
#C for p1p2 overlap dropped,
dropcp1p2 <- omxSetParameters(AceModel,labels="Base.C[1,2]",free=F,values=0,name="dropcp1p2")
Ultimately, the results reported by the Cholesky parameterization and the direct-symmetric parameterization are similar. So, I decided to use the Cholesky parameterization and just delete the constraint A+C+E=1, as I mentioned above.
I'm not sure whether I've truly gotten your point, and I really look forward to your comments and suggestions.
Thanks again.
In reply to remove the constraint by diana
your model is now unidentified
Without the MxConstraint, something else has to pin down the scale of the variance components, for example the E variances:
#Start value for off-diagonal is arbitrary
mxMatrix( type="Symm", nrow=nv, ncol=nv, free=c(F,T,F), values=c(1, 0.1, 1), label=labLower("E",nv), name="E" )
That will fix the nonshared-environmental variance in both traits to 1.0. The additive-genetic and shared-environmental variances will then be interpretable as the factor by which they compare to the nonshared-environmental variance. For example, suppose the _A_ variance is estimated at 1.3 and the _C_ variance is estimated at 0.9. The interpretation would be that the _A_ variance is 30% greater than the _E_ variance, and the _C_ variance is 10% less than the _E_ variance. You already know about the MxAlgebras to get "standardized" estimates.
Putting the MxConstraint back into your model would also suffice to identify it.
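Either way, a quick check is to run mxCheckIdentification() on the model before fitting; a minimal sketch (assuming the full model object is named AceModel, as in the earlier posts, and treating the result as a rough diagnostic rather than a definitive answer):
# Reports whether the model is locally identified, and which parameters are not
idCheck <- mxCheckIdentification(AceModel, details=TRUE)
idCheck$status
idCheck$non_identified_parameters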
As for the idea that it is the MxConstraint that induces the warning: that's not so. Let me explain what the warning means. When any of OpenMx's 3 main optimizers thinks it's found a local minimum of the -2logL, it checks to see if the gradient (the vector of first partial derivatives of the -2logL with respect to the free parameters)--adjusted for MxConstraints if there are any--is sufficiently close to the origin (all zeroes). That is a first-order condition for the solution being a local minimum. If the gradient is not sufficiently close to the origin, OpenMx warns, with the warning you saw (status code 6). With all-continuous data, status code 6 is definitely a sign something is wrong. But with ordinal-threshold data, sometimes status code 6 is reported even when OpenMx _has_ found a local minimum. That's why `mxTryHardOrdinal()`'s defaults are what they are. The reason that status code 6 can be unavoidable is the limited numerical accuracy of the algorithm that calculates multivariate-normal probability integrals (as I explained in one of the posts I linked earlier in this thread). I only brought up the MxConstraint because it is true that MxConstraints make it harder for the optimizer to find a good solution.
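For what it's worth, the gradient being checked can be inspected directly after a run; a small sketch (assuming a fitted model object named AceFit, and that the optimizer reported a gradient):
# Elements far from zero point to parameters violating the first-order condition
round(AceFit$output$gradient, 3)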
The Cholesky parameterization merely ensures that the _A_, _C_, and _E_ variance-covariance matrices are positive-definite. It doesn't say anything about the information matrix (which in most OpenMx cases is the matrix of second partial derivatives of the -2logL w/r/t the free parameters at the solution). Incidentally, that matrix being positive-definite is a _second_-order condition for the solution being a local minimum.
In reply to your model is now unidentified by AdminRobK
Thanks for your detailed
1. But I am still confused about why the demo code ([code1](http://ibg.colorado.edu/cdrom2016/maes/UnivariateAnalysis/twoa/twoACEba.R), [code2](https://static1.squarespace.com/static/58b2481a9f7456906a3b9600/t/5bd10731e79c70a4980526ea/1540425521921/twoACEvb.pdf)) for binary-binary variables doesn't fix the E variances.
2. Another question (maybe a stupid one): I don't know which package contains the functions **valDiag** and **labLower** used in the demo code, and I can't find any information about them.
Thanks!
In reply to your model is now unidentified by AdminRobK
oh, I see, I need to fix E
And is the way I define the nested ACE models OK?
Thanks!
In reply to your model is now unidentified by AdminRobK
Putting the MxConstraint back
Did you mean I can fix E variance to 1, and at the same time put
mxMatrix( type="Unit", nrow=nv, ncol=1, name="Unv" ),
mxConstraint( expression=diag2vec(V)==Unv, name="VarL" ),
into my model? But they seem contradictory. Or should I constrain A+C+E to another value?
I'm so sorry for troubling you so many times!
Thanks, sincerely!
model identification again
The interpretation of the parameter estimates is easier if the variances are constrained to 1.0, because (1) the _A_, _C_, and _E_ variance components are variance proportions (i.e., standardized), and (2) the off-diagonals of `V` are tetrachoric correlations.
They're from a collection of helper functions maintained by Hermine Maes (but they really _ought_ to be turned into an R package).
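For reference, a rough sketch of what such helpers might look like (plausible stand-ins written for this reply, not the actual implementations from that collection):
# Labels for the lower triangle of an nv x nv matrix, e.g. labLower("C", 2) -> "C11" "C21" "C22"
labLower <- function(lab, nv) {
  paste0(lab, outer(1:nv, 1:nv, paste0)[lower.tri(diag(nv), diag=TRUE)])
}
# Start values for the lower triangle: 'val' on the diagonal, zero off the diagonal
valDiag <- function(val, nv) {
  m <- diag(val, nv)
  m[lower.tri(m, diag=TRUE)]
}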
As for your other questions, you need to either fix the _E_ variance to 1, or reintroduce the MxConstraint, but not both. Whichever you choose, you need to do in all of your models with this dataset, because otherwise, the models won't be identified. If the way you're doing your nested models is relevant to your hypotheses / research questions, then it looks OK to me.
In reply to model identification again by AdminRobK
Many many thanks!
Best wishes to you!
In reply to model identification again by AdminRobK
Hello again!
I retried fitting my model with mxTryHard() instead of mxTryHardOrdinal(), and the result was:
**eCTCToFit <- mxTryHard(eCTCToModel, intervals=F, extraTries=25, exhaustive=F, finetuneGradient=FALSE)**
**Retry limit reached; solution not found. Best fit=5350.5814 (started at 10097.602) (26 attempt(s): 26 valid, 0 errors)**
However, when I use mxTryHardOrdinal(), a solution is easily found with only one attempt, although the estimated parameters are slightly different each time I run the same code. (PS: I use set.seed(1) before the call to mxTryHardOrdinal(), but the results are still inconsistent and I don't know why.)
My question is: can I trust the result of mxTryHardOrdinal(), even though mxTryHard() can't find a solution? (´・ᴗ・`)
In reply to Hello again! by diana
different defaults
The two wrappers have different defaults. In particular, `mxTryHardOrdinal()` accepts status codes 5 and 6 as part of an acceptable result, whereas `mxTryHard()` does not, which is likely why one reports a solution and the other does not. Also note that the default for `mxTryHardOrdinal()` is `exhaustive=TRUE`. That's because sometimes it's nearly impossible to avoid status code 5 or 6 with threshold variables. So, since the usual tests for first- and second-order optimality conditions might not be reliable, the sensible thing to do is use up _all_ of the extra attempts, and return the solution with the smallest fitfunction value. I don't advise using `exhaustive=FALSE` unless you at least drop 6 from `OKstatuscodes`.
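In other words, something along these lines keeps the exhaustive search but stops treating code 6 as acceptable (a sketch reusing the model object from your earlier post):
# Use up all extra attempts, and do not accept status code 6
eCTCToFit <- mxTryHardOrdinal(eCTCToModel, extraTries=15, exhaustive=TRUE, OKstatuscodes=c(0,1,5))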
I'll refer you again to this post of mine. Have you tried lowering 'mvnRelEps' or increasing any of the 'mvnMaxPoints*' options? Also, if you're not using any MxConstraints, have you tried CSOLNP?
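Concretely, those two suggestions might look something like this (a sketch; dividing by 5 is just one way of lowering the tolerance):
# Make the multivariate-normal integration more precise (smaller relative error)
mxOption(NULL, "mvnRelEps", mxOption(NULL, "mvnRelEps") / 5)
# If the model contains no MxConstraints, try the CSOLNP optimizer
mxOption(NULL, "Default optimizer", "CSOLNP")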
Are you running `set.seed()` before running mxTryHard* _every_ time?
In reply to different defaults by AdminRobK
It's really confusing.(T_T)
1. I do run set.seed(1) first whenever I run mxTryHardOrdinal() and mxTryHard(), but the numbers in the fourth/fifth decimal place of the parameter estimates are different; please see the picture. Is such slight inconsistency normal?
2. After I run this line:
mxOption( NULL, "mvnRelEps", value= mxOption(NULL, key="mvnRelEps")/5 )
mxTryHard() can find solutions, but the results are still unstable. It might find a solution after a different number of attempts, or sometimes it still reports "solution not found". I don't know if this is normal either. For example:
> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)
Retry limit reached; solution not found. Best fit=5350.5775 (started at 9974.4539) (11 attempt(s): 11 valid, 0 errors)
> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)
Solution found! Final fit=5350.5776 (started at 9974.4539) (5 attempt(s): 5 valid, 0 errors)
> set.seed(190503)
> eCTCToFit<- mxTryHard(eCTCToModel, intervals=F,extraTries = 10,finetuneGradient=FALSE)
Solution found! Final fit=5350.5776 (started at 9974.4539) (1 attempt(s): 1 valid, 0 errors)
3. Now it seems that I can still trust the results of mxTryHardOrdinal(), since mxTryHard() also works and the estimated parameters are relatively stable (unless the parameters should be exactly the same whenever I run the same code). I also need to set exhaustive=T, but I'm still unsure about the number of *extraTries*. On the one hand, I don't want to spend a lot of time running models (especially since I need to fit about 10 nested models); on the other hand, I'm afraid that too small a number of attempts will not be enough to find the best result. If you don't have other suggestions, I would set extraTries=15.
At last, maybe the issues I struggled with are really unnecessary; thanks very much for your time!
In reply to different defaults by AdminRobK
the proportion of ace influences
I wanted to calculate how much of the observed association between two phenotypes is due to A, C, and E, so I added these lines to the models:
# the proportion of ace influences contributing to the overlap
mxAlgebra( expression=sqrt(h2[1,1])*sqrt(h2[2,2])*Ra[1,2]/Rph[1,2],name = "pa"),
mxAlgebra( expression=sqrt(c2[1,1])*sqrt(c2[2,2])*Rc[1,2]/Rph[1,2],name = "pc"),
mxAlgebra( expression=sqrt(e2[1,1])*sqrt(e2[2,2])*Re[1,2]/Rph[1,2],name = "pe")
Subsequently I found that if Ra/Rc/Re is negative, then pa/pc/pe might also be negative, but they are supposed to represent proportions and should be positive and less than 1. I don't know how to explain this.
Thank you very much!
In reply to different defaults by AdminRobK
Which nested ACE model should I choose?
I aim to fit nested ACE models by dropping A/C/E step by step and to test whether these nested models result in a significant deterioration of fit (all of these models are compared to the full ACE model).
As shown in the uploaded picture, A for smoke, overlap, and E for overlap can be dropped from the full ACE model, but the best-fitting model is the one with A for smoke and overlap dropped (model No. 15). If I choose to report the results of the No. 15 model, then the CI of rE (the E correlation) contains 0.
So, my question is which model I should choose: the best-fitting model (No. 15) or the model with all droppable variances/covariances dropped (No. 16).
Thanks!
answers
I'd say they're close enough.
If decreasing 'mvnRelEps' is enabling vanilla `mxTryHard()` to find an acceptable solution, that's a good sign.
If you're comfortable with `extraTries=15`, then that sounds OK to me.
If something isn't bounded to the interval [0,1], then logically, it CAN'T be interpreted as a proportion. Where did you get that block of syntax, anyhow? It's not clear to me what the algebras are trying to compute.
If you're only going to base your conclusions on a single "best" model, then model 15 is the best according to AIC. But, I don't encourage basing your conclusions on one "best" model unless the best is far and away superior to all the others. Instead, consider "multi-model inference". Take a look at the man pages for omxAkaikeWeights() and mxModelAverage(), and at a relevant publication of mine.
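For instance, a minimal sketch using the fitted submodels from earlier in this thread (the reference quantities passed to mxModelAverage() are just examples):
# Akaike weights for each candidate model
fits <- list(AceFit, dropbaceFit, dropcp1Fit, dropcp2Fit, dropc12Fit)
omxAkaikeWeights(models=fits)
# Model-averaged point estimates of selected quantities
mxModelAverage(reference=c("Base.h2", "Base.Rph"), models=fits)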
In reply to answers by AdminRobK
Hi! Thanks for your answers.
I calculated the proportions of rA/rC/rE contributing to rph following the formula in the picture. I also saw others calculate proportions in this way in their papers, such as Table 5 in [this paper](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B619934E6C2A656FA2E601F51E5AE5A2/S1832427416000955a.pdf/associations_between_obesity_indicators_and_blood_pressure_in_chinese_adult_twins.pdf)
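In other words, judging from the algebras posted above, the formula is presumably pa = sqrt(h2_1) * sqrt(h2_2) * rA / rph, with pc and pe defined analogously from c2/rC and e2/rE.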
In reply to Hi! Thanks for your answers. by diana
OK, I see what the algebras
In reply to OK, I see what the algebras by AdminRobK
do you know other ways to calculate pa/pc/pe?
Many thanks!
In reply to do you know other ways to calculate pa/pc/pe? by diana
You have the correct formula
In reply to You have the correct formula by AdminRobK
Thank you very much! I see