Clustered bootstrapping for selective reporting models in meta-analysis with dependent effects
In many fields, quantitative meta-analyses involve dependent effect sizes, which occur when primary studies included in a synthesis contain more than one relevant estimate of the relation between constructs. When summarizing findings or examining moderators, analysts can now draw on well-established methods for handling dependent effect sizes. However, very few methods are available for assessing publication bias when the data include dependent effect sizes. Furthermore, applying existing publication bias assessment tools without accounting for effect size dependency can produce misleading conclusions (e.g., overly narrow confidence intervals, hypothesis tests with inflated Type I error rates). In this presentation, we explore a potential solution: clustered bootstrapping, a general-purpose technique for quantifying uncertainty in clustered data that can be combined with many existing analytic models. We demonstrate how to implement the clustered bootstrap in combination with existing publication bias assessment techniques such as selection models, PET-PEESE, trim-and-fill, and kinked meta-regression. After a brief introduction to the theory of bootstrapping, we will develop and demonstrate example code using existing R packages, including boot and metafor. Time permitting, we will also share findings from ongoing methodological studies on the performance of clustered bootstrap selection models.
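For illustration, a minimal sketch of a clustered bootstrap of a step-function selection model using boot and metafor might look as follows. It assumes a data frame dat with columns yi (effect size estimates), vi (sampling variances), and study (cluster identifier); these names, and the choice of a one-step selection model, are illustrative placeholders rather than the code to be presented.

library(boot)      # bootstrap resampling machinery
library(metafor)   # rma() and selmodel() for selection models

# Statistic for boot(): resample whole studies (clusters), refit the selection
# model, and return the bias-adjusted average effect size.
boot_stat <- function(studies, indices, dat) {
  sampled <- studies[indices]                       # studies drawn with replacement
  rows <- lapply(sampled, function(s) dat[dat$study == s, ])
  boot_dat <- do.call(rbind, rows)                  # keep all effects from each drawn study
  fit <- try(selmodel(rma(yi, vi, data = boot_dat),
                      type = "stepfun", steps = 0.025),
             silent = TRUE)
  if (inherits(fit, "try-error")) return(NA_real_)  # selection models occasionally fail to converge
  fit$beta[1]                                       # adjusted average effect estimate
}

studies <- unique(dat$study)
set.seed(20240101)
boot_res <- boot(data = studies, statistic = boot_stat, R = 1999, dat = dat)
boot.ci(boot_res, type = "perc")   # percentile CI; non-converged replicates (NAs) should be inspected first

Because resampling is done at the study level, each bootstrap replicate preserves the within-study dependence among effect sizes, which is what the naive (row-level) bootstrap would break.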