I'd like to test whether the variation in the average responses given by participants (the so-called "random intercept" in multilevel-model terminology) differs across levels of a specific grouping variable. This seems like something I should be able to do in OpenMx, but I can't figure out how. Specifically, I'm stumped on how to combine multi-group SEM with multi-level SEM, even though I know OpenMx can do each of these separately.

Here's some code that creates a dataset for a reproducible example of the problem I'm facing. The code creates a dataset with 40 participants who each give responses across 20 trials, 10 each in two separate conditions. I want to compare the variance in the participants' average responses across the two conditions.

I've figured out how to fit a multi-level model in OpenMx that estimates the variation in the average participant responses, collapsed across conditions -- I'm just adrift on how to fit multiple multilevel models in the same mxModel, then test whether the variance in the random intercepts ("by_participant.intercept" in the example below) is different across levels of the grouping variable ("condition" in the example below).

library(lme4)
library(OpenMx)

fml <- "~ condition + (condition | participant_id)"

d <- expand.grid(participant_id=1:40, trial_num=1:10)
d <- rbind(cbind(d, condition="control"), cbind(d, condition="experimental"))

set.seed(23432)
d <- cbind(d, simulate(formula(fml),
                       newparams=list(beta=c(0, .5),
                                      theta=c(.5, 0, 0),
                                      sigma=1),
                       family=gaussian,
                       newdata=d))

by_participant <- mxModel(
  model="by_participant", type="RAM",
  latentVars="intercept",
  mxData(data.frame(participant_id=unique(d$participant_id)),
         type="raw", primaryKey="participant_id"),
  mxPath(from="intercept", arrows=2, values=1)
)

overall_model <- mxModel(
  model="overall_model", type="RAM", by_participant,
  manifestVars="sim_1",
  mxData(d, type="raw"),
  mxPath(from="one", to="sim_1", arrows=1, free=TRUE),
  mxPath(from="sim_1", arrows=2, values=1),
  mxPath(from="by_participant.intercept", to="sim_1", arrows=1,
         free=FALSE, values=1, joinKey="participant_id")
)

fit <- mxRun(overall_model)
summary(fit)
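As a sanity check on the collapsed model above, the same model can be fit with lme4; the participant-level intercept variance it reports should roughly match the by_participant.intercept variance estimated by OpenMx. (This is just a sketch using the d and sim_1 objects created above.)

```r
library(lme4)

# Sketch: the equivalent collapsed random-intercept model in lme4.
# REML=FALSE so the fit is maximum likelihood, like OpenMx's FIML,
# making the variance estimates directly comparable.
m0 <- lmer(sim_1 ~ 1 + (1 | participant_id), data = d, REML = FALSE)
VarCorr(m0)  # participant_id intercept variance and residual SD
```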

Does this look correct?

It looks like what you did was create two submodels, control and e1, that each link to a participant_id submodel, then joined the models together with mxFitFunctionMultigroup() -- is this correct?

I have two questions:

(1) Is it a problem that condition changes within-subjects (i.e., the two groups are not two independent groups)?

(2) Using the fit object that results from these models, is there an easy way to compare the variance in the participant intercepts in the control and experimental conditions? I believe the parameters are named p1.S[1,1] and p2.S[1,1] in the summary output from summary(fit), but I'm not certain.

> It looks like what you did was create two submodels, control and e1, that each link to a participant_id submodel, then joined the models together with mxFitFunctionMultigroup() -- is this correct?

Yes

> (1) Is it a problem that condition changes within-subjects (i.e., the two groups are not two independent groups)?

Yes, that's a problem. If condition changes within-subjects, then your data are cross-classified: each observation belongs to both a participant and a condition, and participants are crossed with conditions rather than nested within them. If your dataset is small, OpenMx can still handle it, but not very efficiently. If your data are large, OpenMx may be too slow.
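For comparison, lme4 can estimate a separate intercept variance per condition directly, which is one way to see the within-subject structure. A sketch, assuming the simulated response column is named sim_1 as in the code above:

```r
library(lme4)

# Sketch: (0 + condition | participant_id) estimates a separate
# random-intercept variance for each level of condition (plus their
# covariance), rather than an intercept-plus-slope parameterization.
m <- lmer(sim_1 ~ condition + (0 + condition | participant_id),
          data = d, REML = FALSE)
VarCorr(m)  # per-condition participant intercept variances
```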

> (2) Using the fit object that results from these models, is there an easy way to compare the variance in the participant intercepts in the control and experimental conditions? I believe the parameters are named p1.S[1,1] and p2.S[1,1] in the summary output from summary(fit), but I'm not certain

Yes, p1.S[1,1] and p2.S[1,1] are the variances of the participant intercepts in each condition. You can compare them with a likelihood ratio test: fit a comparison model with those two parameters constrained equal, then compare the two fits. See mxCompare.
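A sketch of that comparison, assuming the fitted multigroup model is stored in fit and that the bracket addresses p1.S[1,1] and p2.S[1,1] are the names shown for these free parameters in your summary(fit) output (worth double-checking before running):

```r
library(OpenMx)

# Sketch: give the two intercept-variance parameters a shared label so
# they are constrained equal, refit, and compare via likelihood ratio.
# "participant_var" and "equal_var" are hypothetical names chosen here.
eq_model <- omxSetParameters(fit,
                             labels = c("p1.S[1,1]", "p2.S[1,1]"),
                             newlabels = "participant_var",
                             name = "equal_var")
eq_fit <- mxRun(eq_model)

# LRT with 1 df: a significant difference means the intercept variances
# differ across conditions.
mxCompare(fit, eq_fit)
```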