
The Macroscience Podcast Metascience 101 - EP5: "How and Why to Run an Experiment"
Oct 9, 2024
Join Professors Heidi Williams and Paul Niehaus, along with Emily Oehlsen from Open Philanthropy and Jim Savage from Schmidt Futures, as they unravel the art of experimentation in metascience. Discover how careful evaluation can inform policy-making and improve research quality. They tackle the complexities of impact evaluations, share insights from the RISE program on identifying talent through innovative methods, and discuss the importance of evidence-based practices in philanthropy for real-world change.
Define Objectives, Pick Metrics, Then Randomize
- Paul Niehaus advised clearly defining your objective and choosing good metrics before running an impact evaluation.
- Then use counterfactual reasoning (randomization or a credible quasi-experiment) to infer true impact.
Spend On Outcomes, Not Randomization
- Randomization itself is cheap; outcome measurement usually drives cost and time.
- Design experiments large enough to give decision-makers the confidence they need, and avoid underpowered trials.
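The sizing point can be made concrete with a standard power calculation. This is a sketch using the normal approximation for a two-arm trial; the alpha, power, and effect-size values are illustrative defaults, not figures from the episode:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per arm for a two-sided, two-sample test
    of means. `effect_size` is the standardized difference (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "small" standardized effect (d = 0.2) already needs ~393 subjects
# per arm at 80% power -- underpowered trials are easy to run by accident.
print(n_per_arm(0.2))  # -> 393
```

The formula shows why randomization itself is cheap but measurement is not: the required sample grows with the inverse square of the detectable effect, so halving the effect size quadruples the number of outcomes you must collect.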
Experiments Clarify Where Results Apply
- External validity is often portrayed as a problem specific to experiments, but non-experimental methods can have harder-to-diagnose representativeness issues.
- Experiments make clear which population results apply to and where extrapolation is needed.
