The Flaws of Academic Statistics: The Null Ritual
Mar 13, 2019
Delve into the world of faulty academic statistics and discover the 'Null Ritual,' a problematic practice plaguing research. The hosts dissect p-values and how they are mishandled, showing how a binary reading of them has hardened into cult-like reverence. You'll learn about the conflicting views of statisticians Fisher and Neyman, and how their ideas were blended into a confusing ritual that promotes misleading results. Uncover the consequences of p-hacking and the alarming rate of false findings in published research.
The Binary Trap Of Significance
- "Statistical significance" in practice became a near-religious binary label around arbitrary cutoffs like 0.05.
- Small differences around the cutoff (e.g., .049 vs .051) are treated as qualitatively different despite being essentially continuous.
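To make the cutoff's arbitrariness concrete, here is a minimal sketch (the z-values are illustrative choices, not from the episode): two test statistics that barely straddle the 0.05 line produce opposite verdicts despite carrying nearly identical evidence.

```python
# Two hypothetical z-statistics chosen to straddle the 0.05 cutoff.
from scipy.stats import norm

for z in (1.97, 1.95):
    p = 2 * norm.sf(z)  # two-sided p-value under a standard normal null
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"z = {z:.2f} -> p = {p:.3f} ({verdict})")
# z = 1.97 -> p = 0.049 (significant)
# z = 1.95 -> p = 0.051 (not significant)
```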
Fisher's Goal: Make Induction Deductive
- Ronald Fisher developed p-values as a way to bring inductive results into a deductive-style framework, his answer to the problem of induction.
- For Fisher, the null was a hypothesis to be challenged, and the p-value was a continuous measure of evidence against it, not a rigid decision rule (see the sketch below).
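A minimal sketch of the Fisherian reading, using hypothetical data (not code from the episode): the p-value is reported as-is, as a graded measure of evidence against the null, with no accept/reject cutoff.

```python
# Report the raw p-value as continuous evidence, Fisher-style.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=30)  # hypothetical measurements

stat, p = ttest_1samp(sample, popmean=0.0)  # null: population mean is 0
print(f"t = {stat:.2f}, p = {p:.3f}")
# Fisher-style report: the data would be this improbable under the null;
# smaller p means stronger evidence against H0, read on a continuous scale.
```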
Neyman–Pearson: Decisions, Not Proofs
- Neyman and Pearson reframed testing as a decision procedure governed by long-run error rates and the costs of Type I and Type II errors.
- Their framework requires pre-specifying alpha and beta and planning the sample size before any data are collected, in service of behavioral decisions rather than epistemic proof (see the sketch below).
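A minimal sketch of that pre-specification, using the standard normal approximation and assumed values for alpha, power, and effect size (none of these numbers come from the episode):

```python
# Neyman-Pearson-style planning: fix the error rates in advance, then solve
# for the per-group sample size of a two-sample comparison at effect size d.
from scipy.stats import norm

alpha, beta = 0.05, 0.20        # Type I rate; Type II rate (power = 0.80)
d = 0.5                         # hypothetical standardized effect size

z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
z_beta = norm.ppf(1 - beta)
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(f"n per group ~ {n_per_group:.0f}")   # ~63 under these assumptions
```

The point of the exercise is that alpha, beta, and n are all locked in before the data arrive; running the test then yields a decision at the planned error rates, not a graded measure of evidence.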


