Testing publication bias when using 3-level meta-analysis
I have read through the metaSEM manual and almost all of the articles and tutorials that Prof. Cheung has published on his GitHub, but I can't seem to find the answers to two relatively unrelated questions, and I hope you will be able to provide guidance.
I am using metaSEM (TSSEM and OSMASEM) to construct a Theory of Planned Behavior meta-analytic model for my Ph.D. thesis, as well as to conduct several three-level univariate meta-analyses, keeping in mind that my data include several studies that report data on two independent samples.
**Firstly**, when conducting a three-level meta-analysis I use the Fisher z-transformation and its sampling variance, and I am interested in whether there is an automatic way to back-transform the effect size and confidence interval from the meta3L summary (something similar to metafor's *predict* function for *rma.uni*)?
Additionally, would you recommend instead using the "raw" correlation coefficients and their sampling variances?
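To make the first question concrete, this is roughly the manual back-transformation I do at the moment (a minimal sketch; `dat`, `zi`, `vi`, and `study` are hypothetical names, and I am assuming the default Wald-type intervals):

```r
library(metaSEM)

## hypothetical data: zi = Fisher-z effect sizes, vi = their sampling
## variances, study = cluster ID for the three-level structure
fit <- meta3L(y = zi, v = vi, cluster = study, data = dat)

b  <- unname(coef(fit, select = "fixed")["Intercept"])      # pooled Fisher z
se <- sqrt(vcov(fit, select = "fixed")["Intercept", "Intercept"])
ci <- b + c(-1, 1) * qnorm(0.975) * se                      # Wald CI on the z scale

tanh(c(estimate = b, lower = ci[1], upper = ci[2]))         # back-transformed to r
```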
And **secondly**, is there a way to test for publication bias, and to conduct other diagnostic tests (e.g., outlier diagnostics), when using a three-level meta-analysis?
Are there analytical counterparts to those available for regular univariate meta-analysis, e.g., funnel plots, publication bias diagnostics, PET-PEESE, etc.?
All I can find in the literature (e.g., [Nakagawa et al., 2021](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13724 "Methods for testing publication bias in ecological and evolutionary meta-analyses")) is that (almost) none of these options are available when taking data dependency structures into account. This surprises me, since a large proportion of meta-analyses have dependencies in their data, so one would suspect that only the simplest meta-analytic approaches can make use of all the tools at our disposal.
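The closest thing I have come up with myself is to mimic an Egger/PET-PEESE-type regression inside the three-level model by adding the standard error (or the sampling variance) as a moderator, along the lines suggested by Nakagawa et al. (2021). A minimal sketch with the same hypothetical variable names as above; I am not sure whether this is a defensible substitute for the dedicated two-level tools:

```r
library(metaSEM)

## PET-style: sqrt(vi) as moderator; PEESE-style: vi as moderator
pet   <- meta3L(y = zi, v = vi, cluster = study, x = sqrt(vi), data = dat)
peese <- meta3L(y = zi, v = vi, cluster = study, x = vi,       data = dat)

summary(pet)    # slope ~ small-study effect; intercept ~ bias-adjusted estimate
summary(peese)
```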
Thank you for your time.
Sorry if I expressed myself incorrectly somewhere; I'm still finding my way through these more complex types of meta-analysis, as they are still new to me.
Best,
Miljan
EDIT: For simplicity, I edited my post to remove a part of the question that I figured out an answer to.
Hi Miljan,
Dear Professor Cheung,
Thank you for your time.
I have just two additional related sub-questions.
My sample consists of about 70 studies. Five of the 70 are studies that reported data from two independent samples (hierarchical dependence). Additionally, another two of the 70 studies administered different questionnaires to the same sample within each study, exploring different aspects of the general behavior I am interested in meta-analyzing, namely software piracy, movie piracy, and music piracy (correlational dependence, with the sampling errors of the effect sizes being dependent).
Firstly, do you think that 5 studies that provided independent samples (= 10 individual samples) and 2 studies that provided multiple effect sizes based on the same study (3 + 2 = 5 dependent effect sizes in total), out of 70 studies, is enough data to successfully model the second-level within-study variability via the three-level model? I know this sums up to approximately 60 clusters at the second level, but the vast majority of "clusters" contribute only one effect size per study.
Secondly, do you think that one can still use the three-level model, keeping in mind that the data have multiple types of dependence, albeit in a really small percentage of studies? Or would you advise first dealing with the 2 studies that provide multiple within-study dependent effect sizes, e.g., by averaging or choosing only one effect size per study, and then conducting the three-level meta-analysis when there is only one type of dependence?
Thank you once again for your time.
Best,
Miljan
I don't have a definite answer. Here are two points to keep in mind:
1. If the within-study variance is very small and close to zero, the results will be similar to those of the conventional random-effects model.
2. You can conduct a sensitivity analysis by comparing results with and without the multilevel model.
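One way to carry out such a sensitivity analysis in metaSEM (a sketch with hypothetical variable names, not a prescribed workflow) is to refit the model with the level-2 variance fixed at zero and compare the two fits:

```r
## Sketch: three-level model vs. a model with the level-2 (within-study)
## variance constrained to 0; dat, zi, vi, study are hypothetical names
m3 <- meta3L(y = zi, v = vi, cluster = study, data = dat)
m2 <- meta3L(y = zi, v = vi, cluster = study, data = dat,
             RE2.constraints = 0)     # drop the within-study variance component
anova(m3, m2)                         # likelihood-ratio test of the extra level
summary(m3)
summary(m2)                           # compare pooled estimates and CIs
```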
Thank you very much.
I tried opening a new topic with an unrelated question, but can't seem to do so after the forum revamp, so I am asking here and hope this is OK.
For TSSEM and OSMASEM, can you please clarify the appropriate way to code correlation matrices that have data missing at the correlation level (e.g., studies that use multiple variables in their models but report correlations between only some of them)?
I read the article that Suzanne Jak and you published in 2018 ("Accounting for Missing Correlation Coefficients in Fixed-Effects MASEM"), where you advocate for Stage1.OC(), but I can't seem to find it in the metaSEM package, only as a standalone function. I also saw that in her book (Meta-Analytic SEM, 2015), a few years prior, Dr. Jak stated on page 42 that "for each missing correlation, we have to treat one variable as missing" and provided example code.
I am wondering what the currently accepted approach is in 2024. Do we still need to treat/code one variable as missing?
In multiple newer published papers that used TSSEM/OSMASEM and provided OSF R code, I saw both (a) studies that treated one variable as missing, and (b) studies that simply left those values blank in Excel and did nothing specific to account for the data missing at the correlation level.
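Just to make option (b) concrete, this is how I would code such a matrix (an illustrative sketch with hypothetical TPB variable names; whether coding NA alone is sufficient is exactly what I am asking):

```r
library(metaSEM)

vars <- c("ATT", "SN", "PBC", "INT")

## study that did not report the SN-PBC correlation: coded as NA
R1 <- matrix(c(1.00, 0.45, 0.30, 0.50,
               0.45, 1.00,   NA, 0.40,
               0.30,   NA, 1.00, 0.35,
               0.50, 0.40, 0.35, 1.00),
             nrow = 4, dimnames = list(vars, vars))

## a list of such matrices and the sample sizes would then be passed to,
## e.g., tssem1(Cov = my.cormatrices, n = my.n, method = "REM")
```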
Thank you once again.
I value your answers very much.
Best,
Miljan