Models for examining selective reporting in meta-analysis with dependent effects

Authors

James E. Pustejovsky

Martyna Citkowicz

Megha Joshi

Date

September 21, 2024

Event

Society for Research on Educational Effectiveness Conference

Location

Baltimore, MD

In meta-analyses examining educational interventions, researchers seek to understand the distribution of intervention impacts in order to draw generalizations that can inform theory, practice, and policy-making. Meta-analyses of educational interventions must contend with several methodological complexities that arise from how primary research studies are designed and reported. One challenge is that many primary studies report multiple relevant effect size (ES) estimates, such as more than one measure of an outcome construct, measures at multiple time-points, measures for multiple versions of an intervention, or measures for different groups of participants. This leads to a data structure in which the ES estimates from a given study are correlated, which requires statistical methods appropriate for dependent observations. Meta-analysts now have access to an array of estimation and inference methods that can handle dependent ES estimates, including multilevel meta-analysis models, robust variance estimation, and combinations thereof.

A second challenge is selective reporting of study results, which occurs when the set of study findings available for meta-analysis is not representative of the full set of relevant evidence. Selective reporting is of particular concern when the availability of study findings is influenced by the magnitude or statistical significance of ES estimates, because such selection leads to systematic bias in meta-analysis results. Many statistical tools have been proposed to investigate selective reporting in meta-analytic databases, but few of the existing tools can handle dependent ES estimates.

We develop methods for correcting the distortionary biases created by selective reporting while also accommodating data structures involving dependent ES estimates. Broadly, our strategy is to extend a previously developed class of models, known as p-value selection models, by adding cluster-robust variance estimators that account for dependent ES estimates when quantifying the uncertainty in parameter estimates.
In this presentation, we will describe our estimation strategy, summarize findings from an extensive simulation study evaluating the performance of the estimators, and demonstrate the methods in an empirical application to the Meta-analysis of Randomized Control Trials with Follow-up.
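To make the selection-model idea concrete, the sketch below fits a simplified member of the p-value selection model class: a one-step selection model for independent effect sizes, in the spirit of Vevea and Hedges (1995). Significant one-sided results (p < .025) are always reported, while non-significant results are reported with some relative probability λ, and maximum likelihood recovers the mean effect, heterogeneity, and λ jointly. All names, simulation settings, and starting values are illustrative assumptions; this is not the authors' implementation, which additionally handles dependent effect sizes via cluster-robust variance estimation.

```python
# Hedged sketch of a one-step p-value selection model (not the authors' code).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)

# --- Simulate ES estimates subject to selective reporting ----------------
K = 400                             # candidate studies (assumed)
mu_true, tau_true = 0.20, 0.15      # mean effect and between-study SD
se = rng.uniform(0.05, 0.35, K)     # standard errors
y = rng.normal(mu_true, np.sqrt(tau_true**2 + se**2))
p = 1 - stats.norm.cdf(y / se)      # one-sided p-values
# Significant results always reported; others reported 40% of the time.
keep = (p < 0.025) | (rng.random(K) < 0.4)
y, se = y[keep], se[keep]

# --- Negative log-likelihood of the one-step selection model -------------
def neg_loglik(theta, y, se, alpha=0.025):
    mu, log_tau2, log_lam = theta
    lam = np.exp(log_lam)                     # relative reporting probability
    eta = np.sqrt(np.exp(log_tau2) + se**2)   # marginal SD per estimate
    w = np.where(1 - stats.norm.cdf(y / se) < alpha, 1.0, lam)
    # Pr(significant) under the marginal N(mu, eta^2) distribution
    crit = stats.norm.ppf(1 - alpha) * se
    pr_sig = 1 - stats.norm.cdf((crit - mu) / eta)
    A = pr_sig + lam * (1 - pr_sig)           # per-study normalizing constant
    return -np.sum(np.log(w) + stats.norm.logpdf(y, mu, eta) - np.log(A))

res = optimize.minimize(neg_loglik, x0=[0.0, np.log(0.04), 0.0],
                        args=(y, se), method="Nelder-Mead")
mu_hat, lam_hat = res.x[0], np.exp(res.x[2])
naive = np.average(y, weights=1 / se**2)      # ignores selective reporting
print(f"naive weighted mean: {naive:.3f}")
print(f"selection-adjusted mean: {mu_hat:.3f}, lambda: {lam_hat:.3f}")
```

Because the naive inverse-variance average conditions on what was reported, it is biased upward relative to the selection-adjusted estimate. The methods described in this presentation extend this likelihood to settings with multiple, correlated ES estimates per study, pairing the point estimates with cluster-robust (sandwich) standard errors clustered at the study level.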
