# Summary Output Interpretation


Hello Forum Members,

I have run into another point of confusion in my inaugural use of the MetaSEM package for a MASEM project. I posted one brief question a few months ago and received an excellent and helpful answer. I'm hoping again to benefit from the forum's deeper experience using the MetaSEM package.

My questions all concern the interpretation of the summary output generated in Stage 1 and Stage 2 of the TSSEM method. I assume that answers to the questions below might make it into a future version of the MetaSEM User Manual, since I don't think they are clearly available in the current user manual and associated literature.

To give some brief context, I am using MetaSEM to perform a TSSEM analysis using a Random Effects model. I am using 10 variables extracted from the correlation tables of about 20 different studies. Both tssem1 and tssem2 run without errors.

Stage 1 Questions:

1) Why do some of the Tau estimates list "NA" in the Standard Error, Lower and Upper Bounds, and z statistic columns?

2) More importantly, how is one to interpret the listing of the Intercept and Tau estimates? For example, I am using 10 variables in my model, which produces 45 Intercept and 45 Tau estimates (since I'm outputting the default correlation matrix). These estimates are labeled Intercepts 1-45 (and the variance components Tau 1-1 through 45-45) in the output. Which estimates correspond to which cells in a pooled correlation matrix?

Stage 2 Questions:

1) When using diag.constraints = TRUE, am I right to assume that the parameters estimated are standardized betas? And does this change when outputting a covariance matrix in stage 1 (and accordingly setting diag.constraints = FALSE)?

2) As an add-on to the previous question: Having run my model both with diag.constraints set to TRUE and to FALSE, the parameters estimated are close but not identical. If one of these runs is producing standardized betas and the other is not (per the answer to the prior question), I assume the reason they are close is that the original input values from the primary studies were all standardized correlation coefficients. Is this a correct assumption?

---

Stage 1 questions:
1. Since no data or output are available, I can only guess. One possible reason is that there is not enough data. Even with no missing data, there are in total 10*9/2 = 45 fixed effects and 45 random effects (if you are using the RE.type="Diag" argument). I am not sure whether 20 studies are sufficient to estimate 90 parameters.

The metaSEM package does not do the analysis itself; it relies on the OpenMx package to do the heavy lifting. If no standard error is reported by OpenMx, the metaSEM package reports NA. You may check the summary in the OpenMx format, e.g.,
library(metaSEM)

random1 <- tssem1(Digman97$data, Digman97$n, method="REM", RE.type="Diag")

## Use OpenMx's summary()

summary(random1$mx.fit)

2. The Intercept and Tau estimates refer to the fixed effects (the pooled correlation elements) and their random effects, respectively. If the pooled correlation matrix is R, then Intercept = vechs(R). vechs() stacks the strictly lower-triangular elements column by column, i.e., in column-major order (http://openmx.psyc.virginia.edu/docs/openmx/latest/_static/rdoc/vechs.html). For example,

R <- diag(5)

## Select the fixed effects only

R[lower.tri(R)] <- coef(random1, select="fixed")
R <- R+t(R)
diag(R) <- 1
R

## Variance component of the random effects

Var <- diag(coef(random1, select="random"))
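To make the Intercept-to-cell mapping explicit, here is a base-R sketch that generates the pair labels in the same column-major order that vechs() uses. The variable names x1 to x5 are hypothetical placeholders; substitute your own 10 variables (which gives 45 labels).

```r
## Hypothetical variable names; replace with your own 10 variables.
vars <- paste0("x", 1:5)

## Matrix of "row_column" pair labels; the strictly lower triangle,
## read column by column, matches the vechs() ordering.
pairs  <- outer(vars, vars, function(i, j) paste(i, j, sep = "_"))
labels <- pairs[lower.tri(pairs)]
labels
## The first label is "x2_x1", so Intercept1 is the pooled correlation
## between x2 and x1, and the first Tau element is its variance component.
```

The same ordering applies to the Tau estimates, since each random-effects variance component is paired with its corresponding fixed-effects correlation.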

Stage 2 questions:
1. Extra care is required when we are analyzing correlation matrices. We have to ensure that the diagonals of the model-implied matrix are always 1; "diag.constraints=TRUE" should only be applied when you are pooling correlation matrices. Since you are analyzing correlation matrices, the coefficients are, by definition, standardized coefficients.
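As a sketch of what a Stage 2 run with the diagonal constraints looks like, here is an example using the Digman97 data shipped with metaSEM and a hypothetical two-factor model (your actual model, factor names, and variables will of course differ):

```r
library(metaSEM)

## Stage 1 on the built-in Digman97 data (five variables: A, C, ES, E, I)
random1 <- tssem1(Digman97$data, Digman97$n, method = "REM", RE.type = "Diag")

## Hypothetical two-factor model in lavaan syntax, converted to RAM matrices
model <- "Alpha =~ A + C + ES
          Beta  =~ E + I
          Alpha ~~ Beta"
RAM <- lavaan2RAM(model, obs.variables = c("A", "C", "ES", "E", "I"))

## diag.constraints = TRUE constrains the diagonals of the model-implied
## correlation matrix to 1, so the estimated coefficients are standardized.
random2 <- tssem2(random1, Amatrix = RAM$A, Smatrix = RAM$S,
                  diag.constraints = TRUE)
summary(random2)
```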

2. The simple answer is yes.