I'm currently conducting a meta-analysis with many dependent effect sizes (multiple correlations I want to use, nested within the same sample), so the metaSEM package seemed like an obvious choice for analyzing these data. However, I also want to take advantage of Stanley & Doucouliagos's (2014) PET-PEESE method of estimating a meta-analytic effect free of publication bias. In a nutshell (for those not familiar), PET-PEESE is a meta-regression model in which effect sizes are regressed onto either their standard errors (PET) or their sampling variances (PEESE), depending on whether the PET estimate is significantly different from zero. The intercept of either model is interpreted as the estimated meta-analytic effect when small-study effects (i.e., either SE or variance), such as publication bias, are zero.
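In case it helps to see that conditional logic concretely, here is a minimal sketch of a fixed-effect PET-PEESE estimator. It is in Python rather than R purely to keep it self-contained; the helper names are mine, and the two-tailed z cut-off is a simplification of Stanley & Doucouliagos's actual decision rule, not their implementation.

```python
import numpy as np

def wls_fit(y, x, w):
    """Weighted least squares of y on [1, x].
    Returns (intercept, slope, SE of intercept) using the conventional
    fixed-effect meta-regression covariance, inv(X'WX)."""
    X = np.column_stack([np.ones_like(x), x])
    XtWX = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (w * y))
    se_intercept = np.sqrt(np.linalg.inv(XtWX)[0, 0])
    return beta[0], beta[1], se_intercept

def pet_peese(r, v, z_crit=1.96):
    """Conditional PET-PEESE on effect sizes r with sampling variances v.
    PET regresses r on the standard errors; if the PET intercept is
    significant, switch to PEESE, which regresses r on the variances.
    (The two-tailed cut-off here is a simplification, an assumption of
    this sketch rather than the published rule.)"""
    w = 1.0 / v                                # fixed-effect (inverse-variance) weights
    b0, _, se0 = wls_fit(r, np.sqrt(v), w)     # PET: regress on SE
    if abs(b0 / se0) > z_crit:
        b0, _, se0 = wls_fit(r, v, w)          # PEESE: regress on variance
        return "PEESE", b0, se0
    return "PET", b0, se0
```

Either way, the returned intercept is the estimate of the effect at SE = 0 (or variance = 0), i.e., the bias-adjusted effect described above.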

Although I have no problem specifying either meta-regression model in metaSEM, Stanley & Doucouliagos (2014) argue strongly against random-effects models. From their article:

"In our simulations, excess unexplained heterogeneity is always included; thus, by conventional practice, REE [random-effects estimators] should be preferred over FEE [fixed-effect estimators]. However, conventional practice is wrong when there is publication selection. With selection for statistical significance, REE is always more biased than FEE...This predictable inferiority is due to the fact that REE is itself a weighted average of the simple mean, which has the largest publication bias, and FEE" (p. 69)

My dilemma is that I am not sure how to appropriately incorporate PET-PEESE estimation when my multilevel approach with metaSEM would appear to demand a random-effects method of estimation.

One creative (?) option that I have been considering is a modified bootstrap for my meta-analysis. Within any given bootstrapped sample (e.g., the 10th), only one effect size per article-based sample (e.g., only the first of three correlations) would be available for resampling. For a subsequent bootstrapped sample (e.g., the 150th), a different effect size (e.g., the third of three correlations) might be the one available for resampling. I would run a fixed-effect PET model on each bootstrapped sample, construct a 95% CI around those fixed-effect estimates of the intercept, and then repeat the process with the PEESE covariate if it were determined that I should be using PEESE instead.
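To make sure I'm describing the scheme clearly, here is a rough Python sketch of the resampling logic. The data are made up, and details such as the redraw guard for degenerate resamples are my own assumptions, not part of any established procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: each inner list holds the (r, v) pairs from one article-based sample.
studies = [
    [(0.21, 0.010), (0.18, 0.012), (0.25, 0.011)],
    [(0.10, 0.020)],
    [(0.32, 0.005), (0.28, 0.006)],
    [(0.05, 0.040), (0.09, 0.035)],
    [(0.15, 0.015)],
]
k = len(studies)

def fe_pet_intercept(r, v):
    """Fixed-effect PET: WLS of r on SE with weights 1/v; returns the intercept."""
    se = np.sqrt(v)
    X = np.column_stack([np.ones_like(se), se])
    w = 1.0 / v
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * r))
    return beta[0]

B = 2000
intercepts = np.empty(B)
for b in range(B):
    # Step 1: within each article-based sample, make exactly one effect
    # size "available" for this bootstrap iteration.
    avail = [s[rng.integers(len(s))] for s in studies]
    # Step 2: resample the k available effects with replacement.
    idx = rng.integers(k, size=k)
    while len(set(idx)) == 1:          # redraw if every draw is the same study,
        idx = rng.integers(k, size=k)  # which would make the PET design singular
    r = np.array([avail[i][0] for i in idx])
    v = np.array([avail[i][1] for i in idx])
    intercepts[b] = fe_pet_intercept(r, v)

# Percentile 95% CI around the fixed-effect PET intercepts.
ci = np.percentile(intercepts, [2.5, 97.5])
```

In an actual analysis the `fe_pet_intercept` step would presumably be replaced by a call to the fixed-effect meta-regression in metaSEM (or metafor), with this loop handling only the selection and resampling.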

Does this approach seem reasonable? Would there be a better way of integrating PET-PEESE with metaSEM? I realize there may not be a simple solution or answer to this inquiry, but I'd appreciate any prods I could get in promising directions.

Thanks!

-John

Dear John,

I am not familiar with the approach proposed by Stanley and Doucouliagos (2014), so my suggestions may not be very useful.

Did Stanley and Doucouliagos (2014) suggest that we should use a fixed-effects model in the presence of selection bias? Since most reviewers may not hold this view, you may need to defend your choice of not using the random-effects model.

Regarding your bootstrapping approach, it does not seem to be a standard one. I do not follow the logic, and many readers may not follow it either. Therefore, you may have a hard time defending it.

Most researchers do not think that we can really estimate the parameters free of selection bias. Would it be easier to treat selection bias as a sensitivity analysis?

Mike