
Planet Money: Don't hate the replicator, hate the game
Feb 27, 2026 A scientist created an international, crowdsourced contest to re-run social science studies and test whether published results hold up. Teams race to execute original code, probe robustness, and spot missing variables or duplicated data. The story explores publication incentives, p-hacking, and how community scrutiny could change research norms.
Graduate Student Finds Himself P-Hacking
- Abel Brodeur discovered p-hacking while writing a master's paper on smoking bans when his initial analysis showed no effect but a tweaked subset produced a significant result.
- He ultimately rejected the tortured result and reported no effect, an experience that sparked his concern about incentives in academia.
The 5% Bump Signals Systemic P-Hacking
- Abel and colleagues scraped published papers and found a hump of test statistics just past the 5% significance threshold (p-values just under 0.05), suggesting selective reporting.
- That pattern indicates researchers may be trimming analyses to cross the conventional significance cutoff to get published.
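The hump is easy to reproduce in a toy simulation (this is an illustrative sketch, not the paper's actual method or data): if researchers with a null result keep making small specification tweaks and stop as soon as they cross the cutoff, reported z-statistics pile up just above 1.96, whereas honest reporting shows no such excess.

```python
import random

random.seed(1)
CRIT = 1.96  # two-sided 5% critical value for a z-test

def hacked_z(max_tries=20, tweak_sd=0.15):
    """One 'study' with NO true effect. The researcher starts from a base
    z-statistic, makes small specification tweaks, and stops (reporting
    the result) as soon as it clears the significance cutoff."""
    base = random.gauss(0, 1)
    z = abs(base)
    for _ in range(max_tries):
        if z > CRIT:
            return z          # first "significant" spec gets reported
        z = abs(base + random.gauss(0, tweak_sd))
    return z                  # never crossed: report the last spec

honest = [abs(random.gauss(0, 1)) for _ in range(100_000)]
hacked = [hacked_z() for _ in range(100_000)]

def bin_counts(zs):
    below = sum(CRIT - 0.2 < z <= CRIT for z in zs)  # just misses 5%
    above = sum(CRIT < z <= CRIT + 0.2 for z in zs)  # just clears 5%
    return below, above

print("honest:", bin_counts(honest))  # smooth density: more mass just below
print("hacked:", bin_counts(hacked))  # the hump: excess mass just above
```

The detection logic in the study works in reverse: an unexplained excess of results just past the threshold, relative to the smooth distribution expected without manipulation, is the statistical fingerprint of p-hacking.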
Fake Institute Opened Doors To Real Data
- To get authors to share code, Abel built a Potemkin-style website for the Institute for Replication, stocked with famous names and a logo to appear legitimate.
- The facade helped him obtain replication packages and recruit collaborators for joint papers.
