Attachment | Size
---|---
trialrun2.csv | 6.93 KB

Dear Sir,

I am in the midst of running a multivariate meta-analysis with moderators included. The attached dataset consists of 63 observations on 6 effect sizes, together with their variances and covariances. There are a lot of missing data because most studies do not report all 6 effect sizes; in fact, not one study reports all 6. I am able to run the random-effects multivariate meta-analysis by imposing a diagonal structure on T2, but when I tried running an analysis with P.f as one of the moderators (there are no missing data on this variable; all studies report P.f), the following error occurred:

Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
  0 (non-NA) cases

However, when I ran the multivariate meta-analysis with just the first 3 effect sizes and the moderator included, the error did not appear. What exactly is causing the error? Is the main problem the vast amount of missing data in the dataset? Is there any other way to solve this issue aside from dropping some of the effect sizes (which in turn reduces the amount of missing data)?

The following is my syntax:

> m1 <- read.csv("trialrun2.csv")

> RE <- Diag(c("0.01*Tau2_1", "0.01*Tau2_2", "0.01*Tau2_3", "0.01*Tau2_4", "0.01*Tau2_5", "0.01*Tau2_6"))

> m2 <- meta(y=cbind(z_d, z_a, z_s, z_o, z_f, z_w),
            v=cbind(Var_d, cov.d.a, cov.d.s, cov.d.o, cov.d.f, cov.d.w,
                    Var_a, cov.a.s, cov.a.o, cov.a.f, cov.a.w,
                    Var_s, cov.s.o, cov.s.f, cov.s.w,
                    Var_o, cov.o.f, cov.o.w, Var_f, cov.f.w, Var_w),
            x=P.f, data=m1, RE.constraints=RE)

Any help will be appreciated! Thanks so much.

Yours sincerely,

GerardCY

Hi Gerard,

meta() uses lm() to obtain starting values for the regression coefficients. For some reason, lm() fails here, which produces the error you saw.

The following R code works.
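(The code from this reply was not preserved in the thread. A minimal sketch of the likely fix, inferred from the coef.constraints call that appears later in the thread: supply starting values for the regression coefficients yourself, so that meta() does not need lm() to generate them. The Slope_1_1 ... Slope_6_1 labels and the "0*" starting values are assumptions based on metaSEM's labelling convention.)

```r
## Sketch only: give each slope an explicit starting value of 0 so that
## meta() does not rely on lm() on the heavily missing data.
## Assumes the metaSEM package and the variables from the question above.
library(metaSEM)

coef.constraints <- paste0("0*Slope_", 1:6, "_1")

m2 <- meta(y = cbind(z_d, z_a, z_s, z_o, z_f, z_w),
           v = cbind(Var_d, cov.d.a, cov.d.s, cov.d.o, cov.d.f, cov.d.w,
                     Var_a, cov.a.s, cov.a.o, cov.a.f, cov.a.w,
                     Var_s, cov.s.o, cov.s.f, cov.s.w,
                     Var_o, cov.o.f, cov.o.w, Var_f, cov.f.w, Var_w),
           x = P.f, data = m1,
           RE.constraints = RE,
           coef.constraints = coef.constraints)
summary(m2)
```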

Regards,

Mike

Hi Sir,

Thank you for your help; it worked perfectly for all the moderation analyses I have conducted.

However, when I ran another moderator analysis with the final predictor, OpenMx status1 was 5, even after using your syntax for the starting values of the coefficients. I suspected it was due to the data on predictor L, so I recoded L as a categorical variable, with "0" coded as 1 and everything else as 0, but the problem still persists. Is there a problem with the data, or could it be that predictor L is highly unbalanced? When I reran both models (with L in its original form and as the categorical recoding) using the rerun() function, OpenMx status1 was 0, but some of the outputs were NA. I have no problems with any of the other predictors except this one. Is there a way around this?

The following is my syntax:

> RE <- Diag(c("0.01*Tau2_1_1", "0.01*Tau2_2_2", "0.01*Tau2_3_3",
               "0.01*Tau2_4_4", "0.01*Tau2_5_5", "0.01*Tau2_6_6"))
> coef.constraints <- paste0("0*Slope_", 1:6, "_1")
> m2 <- meta(y=cbind(z_d, z_a, z_s, z_o, z_f, z_w),
            v=cbind(Var_d, cov.d.a, cov.d.s, cov.d.o, cov.d.f,
                    cov.d.w, Var_a, cov.a.s, cov.a.o, cov.a.f,
                    cov.a.w, Var_s, cov.s.o, cov.s.f, cov.s.w,
                    Var_o, cov.o.f, cov.o.w, Var_f, cov.f.w, Var_w),
            data=trialrun2, RE.constraints=RE, x=L,
            coef.constraints=coef.constraints)
> summary(m2)
> m2 <- rerun(m2)
> (Lo <- ifelse(trialrun2$L=="0", 1, 0))
> m3 <- meta(y=cbind(z_d, z_a, z_s, z_o, z_f, z_w),
            v=cbind(Var_d, cov.d.a, cov.d.s, cov.d.o, cov.d.f,
                    cov.d.w, Var_a, cov.a.s, cov.a.o, cov.a.f,
                    cov.a.w, Var_s, cov.s.o, cov.s.f, cov.s.w,
                    Var_o, cov.o.f, cov.o.w, Var_f, cov.f.w, Var_w),
            data=trialrun2, RE.constraints=RE, x=Lo,
            coef.constraints=coef.constraints)
> summary(m3)
> m3 <- rerun(m3)

Lastly, just to clarify: since I have constrained T2 to a diagonal structure, can I still interpret the R2 for the mixed-effects models?

Thank you so much!

Regards,

GerardCY

Hi Gerard,

The problem appears to be related to some of the effect sizes. The following two effect sizes return an error code.

When they are combined in the same analysis, the error also occurs.

There may be many possible reasons for the errors. One of them is the small number of studies.

The above errors are associated with the effect sizes with the smallest number of studies (10).
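(The specific effect sizes were identified in an attachment that is not preserved here. One way to reproduce the diagnosis yourself, sketched under the assumption of the same metaSEM setup and data as above, is to fit each outcome on its own and inspect the OpenMx status code; the `Mx.status1` slot of the summary object is an assumption based on metaSEM's summary output.)

```r
## Sketch: fit a univariate mixed-effects model per effect size and
## report the OpenMx status code (0 or 1 indicates a trustworthy solution).
library(metaSEM)

es <- c("z_d", "z_a", "z_s", "z_o", "z_f", "z_w")
vs <- c("Var_d", "Var_a", "Var_s", "Var_o", "Var_f", "Var_w")
for (i in seq_along(es)) {
  fit <- meta(y = trialrun2[[es[i]]], v = trialrun2[[vs[i]]],
              x = trialrun2$L)
  cat(es[i], ": OpenMx status1 =", summary(fit)$Mx.status1, "\n")
}
```

The outcomes that fail on their own are the likely culprits when they are combined in the multivariate model.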

Regards,

Mike

Dear Sir,

I have some questions regarding the interpretation of the outputs that you have attached (trialrun2.pdf).

Besides reporting the slope estimates, it is advised to report the R2. However, the outputs of the analysis show that most of the R2 values for the different outcomes are very high (above .9). Is this correct? Does this have to do with the diagonal constraints imposed, meaning we should not interpret the R2?

Yours sincerely,

Gerard

Dear Gerard,

I do not have a definite answer. There could be many reasons: (1) the sample sizes are small; (2) the definition of R2 in meta-analysis is not well behaved; (3) the values are affected by the diagonal constraints; etc. By conducting a simulation study, we may be able to see how these factors affect the R2.
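(For reference, the R2 reported by metaSEM is the proportional reduction in the heterogeneity variance of each outcome when the moderator is added. A sketch of that computation from two fitted models, assuming hypothetical objects `m_re` for the random-effects model without the moderator and `m_mix` for the mixed-effects model with it:)

```r
## Sketch: R2 per outcome as the proportional reduction in tau^2.
## coef(..., select = "random") extracts the heterogeneity variance estimates.
tau2_re  <- coef(m_re,  select = "random")   # tau^2 without the moderator
tau2_mix <- coef(m_mix, select = "random")   # tau^2 with the moderator
(R2 <- 1 - tau2_mix / tau2_re)
```

With only about 10 studies, the tau^2 in the denominator is estimated very imprecisely, which by itself can push the ratio, and hence R2, toward 1.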

Regards,

Mike