# Bivariate ACE model with moderator

I am currently working on a project on how school achievement moderates intelligence in young adults. My sample consists of 4,084 German twins aged 17 and 23-24, respectively, with data on both GPA and intelligence. However, the twins' GPAs come from different types of secondary school, so to check whether I need to use school type as a moderator in my GxE analyses, I first want to see what happens if I run separate bivariate variance decomposition analyses for the three school types while constraining their parameters to be equal.

In other words, I would like to set up a three-group program that runs the bivariate variance decomposition analyses for all three school types at once. I have tried to modify the attached script "twoACEvc_practical_2traits.R" by Lucia Colodro-Conde from this year's IBG workshop. My modified script seems to run OK. However, when I look at my results, it is clear that I have messed something up, because some of the standardized variance components are larger than |1|, as are some of the genetic and environmental correlations. Can you help me figure out what the problem is in the attached script "GPA-moderation of intelligence (example).R"?

I use the standardized residuals of my two variables of interest as data input.

All the best,

Emilie

## parameterization

That's not necessarily a sign that something's wrong. Your script parameterizes the _A_, _C_, and _E_ covariance matrices like this:

```r
## Create Matrices for Variance Components
covA <- mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("VA",nv), name="VA" )
covC <- mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("VC",nv), name="VC" )
covE <- mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("VE",nv), name="VE" )
```

Nothing here guarantees that any of these matrices is positive-definite, that is, an actual covariance matrix. That is what allows things like standardized variance components greater than 1.
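To see the arithmetic concretely, here is a minimal base-R sketch (the numbers are invented, not taken from any fit): once a free _C_ variance element drifts negative, which the direct-symmetric parameterization permits, the standardized _A_ component VA/(VA+VC+VE) can exceed 1.

```r
# Hypothetical variance-component estimates from a direct-symmetric fit.
# A "Symm" matrix with free elements does not forbid a negative VC.
VA <- 0.80
VC <- -0.35
VE <- 0.30

VP <- VA + VC + VE   # total phenotypic variance: 0.75
VA / VP              # standardized A component: about 1.07, i.e. greater than 1
```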


In reply to parameterization by AdminRobK

## Boundary conditions?


In reply to Boundary conditions? by EmilieRH

## Cholesky

If I understand you correctly, then the answer to that question is "yes". In fact, even under the parameterization you're currently using, you really should set strictly positive lower bounds on the diagonal elements of the _E_ covariance matrix, e.g.,

```r
covE_s1 <- mxMatrix(
  type="Symm", nrow=nv, ncol=nv, free=TRUE, values=valDiag(svPa,nv), label=labLower("VE_s1",nv),
  # since nv=2...:
  lbound=c(1e-4,NA,1e-4),
  name="VE_s1" )
```

But if you want to ensure that the _A_, _C_, and _E_ covariance matrices are positive-definite, and thus interpretable as covariance matrices, the simplest way is to use the Cholesky parameterization. Contrast the two attached scripts, in particular how the _A_, _C_, and _E_ covariance matrices are defined (see also https://hermine-maes.squarespace.com/#/one/ : the scripts "estimating variance components" use the direct-symmetric parameterization, which your current script uses, and the scripts "estimating path coefficients" use the Cholesky parameterization, which you seem to want to use). In 'oneACEvc.R', the _A_ covariance matrix, for instance, is defined like this:

```r
covA <- mxMatrix( type="Symm", nrow=nv, ncol=nv, free=TRUE, values=svPa, label="VA11", name="VA" )
```

That should look familiar. In 'oneACEc.R', by contrast, the _A_ covariance matrix is defined like this:

```r
pathA <- mxMatrix( type="Lower", nrow=nv, ncol=nv, free=TRUE, values=svPa, labels="a11", lbound=lbPa, name="a" )
covA  <- mxAlgebra( expression=a %*% t(a), name="A" )
```
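A minimal base-R sketch of why this works (the path values below are invented for illustration): any lower-triangular matrix multiplied by its transpose is positive semi-definite by construction, so the implied covariance matrix is always valid, and any correlation computed from it must lie within [-1, 1], wherever the optimizer puts the free elements.

```r
# Arbitrary lower-triangular "path" matrix (invented values)
a <- matrix(c(-1.7, 0.0,
               0.9, 2.4), nrow = 2, byrow = TRUE)

A <- a %*% t(a)                         # implied covariance matrix

min(eigen(A, symmetric = TRUE)$values)  # non-negative: A is positive semi-definite
cov2cor(A)[2, 1]                        # implied correlation, necessarily in [-1, 1]
```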

The Cholesky parameterization imposes what might be called "interpretability constraints". It will make your results more interpretable, but **beware**, it has drawbacks. Primarily, it can prevent your likelihood-ratio test statistics from following their theoretical null distribution (Verhulst et al., 2019). It can also make numerical optimization and confidence-interval calculation more difficult. Over the past few years, there has been much discussion on these forums about the pros and cons of the Cholesky parameterization, so I encourage you to search this website for additional relevant information.


In reply to Cholesky by AdminRobK

## Thank you!


In reply to Cholesky by AdminRobK

## Problems with the Cholesky parameterization

I have now tried to modify Dr. Maes' twoACEc.R script, which uses the Cholesky parameterization to estimate a bivariate ACE model. When I just modify it to include my two variables, GPA and IQ, it produces reasonable estimates (similar to those I obtain with the umxACEv function). However, when I try to take school type into account by building six mxModels (3 school types × MZ & DZ), adding them all to the multigroup fit function, and running the final model, I run into the same kind of problems as before: some of my path coefficients are above |1|, and some of my genetic and environmental correlations are negative. I have attached my script. Can you help me see what I am doing wrong?
