Dear all,
I am trying to fit a model (I am not sure if it will be possible) like the one in the attached figure.
I am trying to do it by modifying this script (CP model with one latent factor):
# Fit Common Pathway ACE Model
# ------------------------------------------------------------------------------
nl <- 1
# Matrices al, cl, and el to store a, c, and e path coefficients for latent phenotype(s)
pathAl <- mxMatrix( type="Lower", nrow=nl, ncol=nl, free=TRUE, values=.6, labels=labLower("al",nl), lbound=.00001, name="al" )
pathCl <- mxMatrix( type="Lower", nrow=nl, ncol=nl, free=TRUE, values=.6, labels=labLower("cl",nl), lbound=.00001, name="cl" )
pathEl <- mxMatrix( type="Lower", nrow=nl, ncol=nl, free=TRUE, values=.6, labels=labLower("el",nl), lbound=.00001, name="el" )
# Matrix and Algebra for constraint on variance of latent phenotype
covarLP <- mxAlgebra( expression=al %*% t(al) + cl %*% t(cl) + el %*% t(el), name="CovarLP" )
varLP <- mxAlgebra( expression= diag2vec(CovarLP), name="VarLP" )
unit <- mxMatrix( type="Unit", nrow=nl, ncol=1, name="Unit" )
varLP1 <- mxConstraint( expression=VarLP == Unit, name="varLP1" )
# Matrix f for factor loadings on latent phenotype
pathFl <- mxMatrix( type="Full", nrow=nv, ncol=nl, free=TRUE, values=.2, labels=labFull("fl",nv,nl), name="fl" )
# Matrices A, C, and E compute variance components
covA <- mxAlgebra( expression=fl %&% (al %*% t(al)) + as %*% t(as), name="A" )
covC <- mxAlgebra( expression=fl %&% (cl %*% t(cl)) + cs %*% t(cs), name="C" )
covE <- mxAlgebra( expression=fl %&% (el %*% t(el)) + es %*% t(es), name="E" )
And then fixing the factor loading of the first variable to zero. However, I am struggling to figure out how to create the A, C, and E paths to the factor loading and the first variable (lines in red in the figure).
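For reference, one way to fix the first variable's loading to zero in the pathFl matrix above (a sketch, not the poster's exact script; nv, nl, and labFull() are assumed to be defined as in the script) is to pass matrices to the free= and values= arguments:

```r
# Sketch: make element [1,1] of the loading matrix non-free at 0,
# leaving the other loadings free to be estimated.
freeFl   <- matrix(TRUE, nrow=nv, ncol=nl); freeFl[1,1]   <- FALSE
valuesFl <- matrix(.2,   nrow=nv, ncol=nl); valuesFl[1,1] <- 0
pathFl <- mxMatrix( type="Full", nrow=nv, ncol=nl, free=freeFl, values=valuesFl,
                    labels=labFull("fl",nv,nl), name="fl" )
```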
I would be really grateful if you could help me with this. I have read other threads but I am still confused about how to do it.
Thank you so much in advance.
With all good wishes.
Imagine that there's another latent factor that is indicated only by variable 1. Variable 1's loading onto that new latent factor is fixed to 1.0, and the latent factor's variance is entirely due to the left-hand set of A, C, & E. Then you'd have two latent factors, and their covariance matrix--say, covarLP--would be a function of the Cholesky-style A, C, & E paths. The model-expected covariance matrix for the 3 traits would then be the sum of the common and unique covariance matrices. The common covariance matrix equals the matrix of loadings times covarLP times the transpose of the matrix of loadings. The unique covariance matrix is a function of the "residual" A, C, & E. Note that, for model identification, you will not be able to freely estimate both loadings onto the right-hand common factor.

Alternately, you could always specify your model with MxPaths, in a RAM-type or LISREL-type MxModel. You would still need to make some MxAlgebras to calculate all the quantities of interest, though.
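In OpenMx matrix algebra, the two-factor version of this might look roughly like the following (a sketch under the assumptions above, not a tested script: al, cl, el are now 2x2 lower-triangular Cholesky matrices, and fl is nv x 2 with variable 1's loading on factor 2 fixed to 1):

```r
# 2x2 latent covariance matrix from the Cholesky-style A, C, and E paths
covarLP <- mxAlgebra( expression=al %*% t(al) + cl %*% t(cl) + el %*% t(el),
                      name="CovarLP" )
# Model-expected phenotypic covariance = common part + unique ("residual") part.
# fl %&% CovarLP is OpenMx shorthand for fl %*% CovarLP %*% t(fl).
covP <- mxAlgebra( expression=fl %&% CovarLP +
                              as %*% t(as) + cs %*% t(cs) + es %*% t(es),
                   name="V" )
```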
Hi Robert,
Thank you so much for your response. After thinking a lot about the best model, I have tried to fit this one (see the attached figure). I think this kind of model gives me all the information that I need.
Here you can see the full script:
However, I am not sure of several things:
1- I fixed the nonshared-environmental variances to 1 in the ACE model (you helped me with that in a different post), but I am not sure how to do it in the CP model (common and specific).
I have used this instead to fix the variance to one
matUnv <- mxMatrix( type="Unit", nrow=nvo, ncol=1, name="Unv1" )
var1 <- mxConstraint(expression=rbind(V[2,2],V[3,3]) == Unv1, name="Var1")
But as I am having problems getting the CIs, I think it would be better to change the parameterization.
2- I am not sure about this, but I think I have to fix either the one threshold or the regression intercept for the dichotomous variables. Am I wrong?
I think I know how to fix the one threshold but not how to fix the regression intercept.
I have to do the same in the saturated model, right?
Why do I have to fix the variance and one threshold or the intercept? Is there any paper that I could read to learn more about that?
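One way to fix the intercept rather than the threshold might look like this (a hedged sketch only; freeMean, ordCols, ntv, ntvo, and labFull() are placeholder names modeled on the scripts in this thread, not the poster's exact code):

```r
# Sketch: fix the regression intercepts of the ordinal variables at 0 by
# making their elements of the mean matrix non-free, and instead leave the
# threshold matrix free to be estimated.
freeMean <- matrix(TRUE, nrow=1, ncol=ntv)
freeMean[ , ordCols] <- FALSE   # ordCols: column positions of the ordinal variables
meanG <- mxMatrix( type="Full", nrow=1, ncol=ntv, free=freeMean, values=0,
                   labels=labFull("me",1,ntv), name="meanG" )
thinG <- mxMatrix( type="Full", nrow=1, ncol=ntvo, free=TRUE, values=0,
                   labels=labFull("th",1,ntvo), name="thinG" )
```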
3-When I run the model, I get this warning:
Warning message:
In mxTryHard(model = model, greenOK = greenOK, checkHess = checkHess, :
The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
I know that it is common with ordinal variables. I have tried mxTryHardOrdinal with 101 extra tries and different sets of starting values. The point is that I get different values each time (see the attached file): similar estimates, but not exactly the same.
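For reference, the call I am using looks roughly like this (a sketch; modelCP and fitCP are placeholder names):

```r
# mxTryHardOrdinal() is the mxTryHard() wrapper with defaults suited to
# ordinal data; extraTries controls how many extra starting-value attempts
# are made beyond the first fit.
fitCP <- mxTryHardOrdinal( modelCP, extraTries=101 )
summary(fitCP)
```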
4- I am also not sure about this:
Note that, for model identification, you will not be able to freely estimate both loadings onto the right-hand common factor
Should I change anything?
With all good wishes,
Thank you so much in advance!
Hi,
Looking at your figure, you just want a two-common-factor CP model.
umx implements this (see ?umxCP). You would then delete the paths you want to delete with umxModify(). The figure umx creates, the help, and the parameters() function in umx can help you find/understand what the path labels are.

Thank you so much, all of you, again for your help.
I have used umx (see the attached figure). This is a really good tool!
However, I am not sure how to get the standardized values.
I have used cp2 = xmu_standardize_CP(cp1), but the values do not seem standardized.
I would also like to know how to fix my previous script.
I have modified these lines:
thinG <- mxMatrix( type="Full", nrow=1, ncol=ntvo, free=FALSE, values=0, name="thinG" )
expMZ <- mxExpectationNormal( covariance="expCovMZ", means="expMean", dimnames=selVars, thresholds="thinG", threshnames=ordVars )
expDZ <- mxExpectationNormal( covariance="expCovDZ", means="expMean", dimnames=selVars, thresholds="thinG", threshnames=ordVars )
to fix the thresholds, and the variance is fixed to one with this:
matUnv <- mxMatrix( type="Unit", nrow=nvo, ncol=1, name="Unv1" )
var1 <- mxConstraint(expression=rbind(V[2,2],V[3,3]) == Unv1, name="Var1")
And I think the intercept is free:
expMean <- mxAlgebra( expression= meanG + cbind(t(b1%*%Age),t(b1%*%Age)) + b2*Sex, name="expMean" )
So, I think everything should be OK in the model.
But I still have the Warning message:
In mxTryHard(model = model, greenOK = greenOK, checkHess = checkHess, :
The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)
And I get slightly different values.
I am still confused with this as well:
Note that, for model identification, you will not be able to freely estimate both loadings onto the right-hand common factor
Thank you so much in advance for your help
With all good wishes
Glad you liked umx. The figure you attached is showing standardized values automatically, with leading zeros suppressed. Add strip=FALSE to show the decimal places.
You should mostly never need to call an "xmu" function directly (they're all marked "internal, not for end users"). So this works for any known model:
cp2 = umx_standardize(cp1)
By default, umxSummary standardizes twin models (and doesn't standardize RAM models). Compare what you get with:
umxSummary(cp1, std=FALSE)
Just that, the way you had it drawn, you had two As, Cs, and Es for one latent factor (and too few manifests). Not identified. (Wish images showed up in posts.)
If you install the development version with devtools::install_github("tbates/umx"), I'd be happy to get feedback on umxTwinMaker(), which is designed to make this a whole lot easier: you just sketch one person, plus the rules for co-twins and MZ vs. DZ.

Thank you so much for your prompt response!
If they are standardized, then for my first phenotype:
Ac = 0.77^2 * 0.53 = 0.31
Ec = 0.64^2 * 0.53 = 0.22
As = 0.46^2 = 0.21
Es = 0.72^2 = 0.52
Ptot = 0.31 + 0.22 + 0.21 + 0.52 = 1.26
Atot = (0.31 + 0.21) / 1.26 = 0.41
Etot = (0.22 + 0.52) / 1.26 = 0.59
(Hope I am not doing something wrong.)
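Written out in R (just my own arithmetic check; the numbers are read off the standardized umx figure):

```r
Ac <- 0.77^2 * 0.53   # common A for phenotype 1: roughly 0.31
Ec <- 0.64^2 * 0.53   # common E: roughly 0.22
As <- 0.46^2          # specific A: roughly 0.21
Es <- 0.72^2          # specific E: roughly 0.52
Ptot <- Ac + Ec + As + Es     # roughly 1.26
Atot <- (Ac + As) / Ptot      # roughly 0.41 with the rounded figures above
Etot <- (Ec + Es) / Ptot      # roughly 0.59 with the rounded figures above
```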
These results are slightly different from my univariate analyses (A = 32% and E = 68%).
However, the ones I got with the full script match my univariate analyses well.
Thank you so much for your suggestion I will try with umxTwinMaker()
However, I wonder if there is a way to get exact results with my script. I have tried with 100 extraTries and the results are very reasonable, but I am not 100% confident the script is correct, with values changing slightly from one run to another.
Thank you so much for explaining the model-specification issue to me.
With all good wishes.
The results from a multivariate model will often differ from those of a univariate analysis.
There's more power (the model can see all the phenotypes influenced by a variance component, and use that information in its estimates). Likely, in this case, the univariate model overestimated E because of this.
Thank you so much. I have run my multivariate AE model with and without covariates, and the discrepancy is due to them.
Can I add covariates to the CP model in umx?
I would also like to compare the results with those from my script. Do you think that if I run the model with a lot of extraTries, like 200, the results would be accurate?
Any thoughts about how to avoid the warning message? I have tried it with all optimizers and tons of starting values.
Thank you so much! I was really stuck with this.
You'll like umx 3.5; the pandemic is accelerating its development (home alone).
The plan is that by early April, all twin models will support covariates, including for threshold variables.
Thank you so much for your response.
Looking forward to trying it! Hope it is going to be available soon.
With all good wishes