Meta-analysis and many-analysts
Roadmap
- Meta-analysis & many analysts
- Reminder
- Exercise 04 (Replication) distributed. Due Thursday, March 30.
Resources for final projects
McManus, K. (2022). Are replication studies infrequent because of negative attitudes?: Insights from a survey of attitudes and practices in second language research. Studies in Second Language Acquisition, 44(5), 1410–1423. https://doi.org/10.1017/S0272263121000838.
- While this is technically also based on Retraction Watch, several reference managers (definitely Zotero and EndNote, possibly also Papers/ReadCube) now automatically flag retracted items in your library. Judging by social-media reactions, that is a pretty effective way to alert students and researchers: certainly more so than anything that requires actively looking up the information. It might be fun to have students, at least those already using one of these tools, test how this looks by importing a retracted study.
Edlund, J. E., Okdie, B. M. & Scherer, C. R. (2022). Best practices for considering retractions. Current Psychology. https://doi.org/10.1007/s12144-022-03764-x
Meta-analysis
- Multiple studies, ideally published and unpublished (why?)
- What is the distribution of effect sizes?
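The pooling step behind most meta-analyses can be sketched in a few lines. Below is a minimal, self-contained implementation of the DerSimonian-Laird random-effects estimator, with made-up effect sizes and variances (the study names and numbers are illustrative, not drawn from any paper discussed here):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects:   per-study effect sizes (e.g., Hedges' g)
    variances: per-study sampling variances
    Returns (pooled_effect, standard_error, tau2).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fe_mean = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fe_mean) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical effect sizes (Hedges' g) and variances from five studies
g = [0.30, 0.45, 0.12, 0.60, 0.25]
v = [0.02, 0.05, 0.01, 0.08, 0.03]
pooled, se, tau2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.2f}, SE = {se:.2f}, tau^2 = {tau2:.3f}")
```

Including unpublished studies matters here because publication bias inflates the pooled estimate: if only significant (larger) effects reach print, the inputs to this calculation are systematically skewed upward.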
van Agteren et al. (2021)
van Agteren, J., Iasiello, M., Lo, L., Bartholomaeus, J., Kopsaftis, Z., Carey, M. & Kyrios, M. (2021). A systematic review and meta-analysis of psychological interventions to improve mental wellbeing. Nature Human Behaviour. https://doi.org/10.1038/s41562-021-01093-w.
Our current understanding of the efficacy of psychological interventions in improving mental states of wellbeing is incomplete. This study aimed to overcome limitations of previous reviews by examining the efficacy of distinct types of psychological interventions, irrespective of their theoretical underpinning, and the impact of various moderators, in a unified systematic review and meta-analysis. Four-hundred-and-nineteen randomized controlled trials from clinical and non-clinical populations (n = 53,288) were identified for inclusion. Mindfulness-based and multi-component positive psychological interventions demonstrated the greatest efficacy in both clinical and non-clinical populations. Meta-analyses also found that singular positive psychological interventions, cognitive and behavioural therapy-based, acceptance and commitment therapy-based, and reminiscence interventions were impactful. Effect sizes were moderate at best, but differed according to target population and moderator, most notably intervention intensity. The evidence quality was generally low to moderate. While the evidence requires further advancement, the review provides insight into how psychological interventions can be designed to improve mental wellbeing.
Silberzahn et al. (2018)
Abstract
Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
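The kind of summary the abstract reports (a median odds ratio, a count of teams whose result was significant) is easy to reproduce for any many-analysts dataset. A minimal sketch with hypothetical team results (these numbers are invented for illustration, not the actual Silberzahn et al. estimates):

```python
import statistics

# Hypothetical (team OR, 95% CI) pairs from several analysis teams
team_results = [
    (1.12, (0.95, 1.32)),
    (1.31, (1.03, 1.66)),
    (1.28, (1.10, 1.49)),
    (2.10, (1.40, 3.15)),
    (0.98, (0.80, 1.20)),
]

ors = [or_ for or_, _ in team_results]
# A CI that excludes 1 corresponds to a significant effect in OR units
significant = [lo > 1.0 or hi < 1.0 for _, (lo, hi) in team_results]

print(f"median OR = {statistics.median(ors):.2f}")
print(f"teams with CI excluding 1: {sum(significant)}/{len(team_results)}")
```

Even this toy set shows the paper's point: defensible analytic choices yield estimates on both sides of the significance threshold from the same data.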
Odds ratios (OR): Szumilas (2010) {-}
- OR < 1: Outcome less likely than comparison
- OR = 1: Outcome and comparison equally likely
- OR > 1: Outcome more likely than comparison
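The three cases above follow directly from how an OR is computed from a 2x2 table: the odds of the outcome in the exposed group divided by the odds in the comparison group. A minimal sketch (the cell counts are made up for illustration):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                 outcome   no outcome
    exposed         a          b
    comparison      c          d

    Odds of the outcome are a/b (exposed) and c/d (comparison).
    """
    return (a / b) / (c / d)

# Hypothetical counts: 20/100 exposed vs. 10/100 comparison show the outcome
print(odds_ratio(20, 80, 10, 90))   # OR > 1: outcome more likely when exposed
print(odds_ratio(10, 90, 10, 90))   # OR = 1: outcome equally likely
```

Note that the Silberzahn et al. estimates above (0.89 to 2.93) are in exactly these units: an OR of 1.31 means the odds of a red card were about 31% higher for dark-skin-toned players under that team's model.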