| Attachment | Size |
|---|---|
| R script.R | 2.57 KB |
Dear Mike and all,
First off, thanks for this great forum and for directing me to ask questions here!
I used the metaSEM package to test some simple mediation models (IV -> Med -> DV, with the direct effect included). The script is attached.
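Since the script itself is attached rather than inline, here is a rough sketch of the kind of setup I mean (this is a reconstruction, not the attached script: variable names DC, M1, DV1 come from the pattern.na output below, and my.cor / my.n stand for my lists of correlation matrices and sample sizes):

```r
library(metaSEM)

varnames <- c("DC", "M1", "DV1")

## A matrix (regression paths): rows receive from columns.
## Labels a (DC -> M1), b (M1 -> DV1), c (DC -> DV1) as in the output below.
A1 <- matrix(c(0,       0,       0,
               "0.2*a", 0,       0,
               "0.2*c", "0.2*b", 0),
             nrow = 3, ncol = 3, byrow = TRUE,
             dimnames = list(varnames, varnames))

## S matrix: variance of the IV fixed at 1; error variances of M1 and DV1.
S1 <- Diag(c(1, "0.2*ErrM1", "0.2*ErrDV1"))
dimnames(S1) <- list(varnames, varnames)

## Stage 1: pool the correlation matrices under a random-effects model.
stage1 <- tssem1(my.cor, my.n, method = "REM", RE.type = "Diag")

## Stage 2: fit the mediation model on the pooled matrix,
## with likelihood-based CIs and algebras for the indirect/direct effects.
stage2 <- tssem2(stage1,
                 Amatrix = A1, Smatrix = S1,
                 intervals.type = "LB",
                 mx.algebras = list(
                   Indirect = mxAlgebra(a * b, name = "Indirect"),
                   Direct   = mxAlgebra(c, name = "Direct")))
```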
Results of stage 1 looked fine:
summary(stage1)
##
## Call:
## meta(y = ES, v = acovR, RE.constraints = Diag(paste0(RE.startvalues,
## "*Tau2_", 1:no.es, "_", 1:no.es)), RE.lbound = RE.lbound,
## I2 = I2, model.name = model.name, suppressWarnings = TRUE,
## silent = silent, run = run)
##
## 95% confidence intervals: z statistic approximation (robust=FALSE)
## Coefficients:
## Estimate Std.Error lbound ubound z value Pr(>|z|)
## Intercept1 0.40038970 0.02796976 0.34556997 0.45520943 14.3151 < 2.2e-16 ***
## Intercept2 0.19706054 0.02504600 0.14797128 0.24614980 7.8679 3.553e-15 ***
## Intercept3 0.22359242 0.03632058 0.15240539 0.29477944 6.1561 7.457e-10 ***
## Tau2_1_1 0.02692037 0.00683628 0.01352151 0.04031923 3.9379 8.221e-05 ***
## Tau2_2_2 0.02431705 0.00635049 0.01187031 0.03676379 3.8292 0.0001286 ***
## Tau2_3_3 0.01545611 0.00749157 0.00077291 0.03013932 2.0631 0.0390998 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Q statistic on the homogeneity of effect sizes: 998.5511
## Degrees of freedom of the Q statistic: 99
## P value of the Q statistic: 0
##
## Heterogeneity indices (based on the estimated Tau2):
## Estimate
## Intercept1: I2 (Q statistic) 0.9629
## Intercept2: I2 (Q statistic) 0.9413
## Intercept3: I2 (Q statistic) 0.9174
##
## Number of studies (or clusters): 102
## Number of observed statistics: 102
## Number of estimated parameters: 6
## Degrees of freedom: 96
## -2 log likelihood: -69.77645
## OpenMx status1: 0 ("0" or "1": The optimization is considered fine.
## Other values may indicate problems.)
Yet the results of the stage 2 modeling looked strange, with NA values for the standard errors, z values, and p-values:
summary(stage2)
##
## Call:
## wls(Cov = pooledS, aCov = aCov, n = tssem1.obj$total.n, RAM = RAM,
## Amatrix = Amatrix, Smatrix = Smatrix, Fmatrix = Fmatrix,
## diag.constraints = diag.constraints, cor.analysis = cor.analysis,
## intervals.type = intervals.type, mx.algebras = mx.algebras,
## mxModel.Args = mxModel.Args, subset.variables = subset.variables,
## model.name = model.name, suppressWarnings = suppressWarnings,
## silent = silent, run = run)
##
## 95% confidence intervals: Likelihood-based statistic
## Coefficients:
## Estimate Std.Error lbound ubound z value Pr(>|z|)
## c 0.128067 NA 0.058912 0.195726 NA NA
## b 0.172316 NA 0.083865 0.260299 NA NA
## a 0.400390 NA 0.345569 0.455256 NA NA
##
## mxAlgebras objects (and their 95% likelihood-based CIs):
## lbound Estimate ubound
## Indirect[1,1] 0.03369740 0.06899341 0.1071547
## Direct[1,1] 0.05891188 0.12806712 0.1957264
##
## Goodness-of-fit indices:
## Value
## Sample size 88066.00
## Chi-square of target model 0.00
## DF of target model 0.00
## p value of target model 0.00
## Number of constraints imposed on "Smatrix" 0.00
## DF manually adjusted 0.00
## Chi-square of independence model 304.72
## DF of independence model 3.00
## RMSEA 0.00
## RMSEA lower 95% CI 0.00
## RMSEA upper 95% CI 0.00
## SRMR 0.00
## TLI -Inf
## CFI 1.00
## AIC 0.00
## BIC 0.00
## OpenMx status1: 0 ("0" or "1": The optimization is considered fine.
## Other values indicate problems.)
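For what it's worth, the point estimates themselves do appear internally consistent: multiplying the printed a and b paths reproduces the Indirect algebra, and the c path matches Direct (values copied from the summary above):

```r
a <- 0.400390  # DC -> M1 path from the stage-2 summary
b <- 0.172316  # M1 -> DV1 path
c <- 0.128067  # direct DC -> DV1 path

a * b  # ~0.0690, matching Indirect[1,1] = 0.06899341 up to rounding
c      # matches Direct[1,1] = 0.12806712
```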
My questions are:
(1) How could this happen? The model is simple. Could it result from too many missing values in my correlation matrices? Many of my cases have incomplete correlations, but given the relatively large number of cases (see my sample size info below), I thought it wouldn't be an issue. I also tried imputing the missing values, but the results got worse: stage 1 then showed an optimization problem (OpenMx status1 of 5).
pattern.na(my.cor, show.na = FALSE)
## DC M1 DV1
## DC 102 38 49
## M1 38 102 15
## DV1 49 15 102
(2) Without any estimates of the standard errors, I suppose the reported parameter estimates wouldn't make any sense, right? How could I improve this modeling, if possible?
Any suggestions or advice to fix this? Thank you for your time!
Yingjie