
Everything Hertz 188: Double-blind peer review vs. scientific integrity
Jan 30, 2025
A lively debate about whether double-blind peer review helps or harms scientific integrity. Quintana and Heathers weigh how masking authors can fail in practice and when it might still reduce prestige bias. The conversation covers data availability, editor responsibility, paper-mill risks, language bias, and practical fixes like ORCID, blinded invites, and publishing review reports.
Reviewers Can't Routinely Audit Raw Data
- Requiring reviewers to inspect raw data during review is unrealistic in most fields: reviewers lack the time, the incentives, and standardized data formats.
- Quintana and Heathers argue post-publication data availability plus reproducible environments are more practical for forensic checks.
Share Data Via Anonymous Links
- Services like OSF can share data during review via anonymized view-only links, preserving blinding.
- Quintana notes this enables data access while maintaining author anonymity if set up correctly.
Automated Checks Can Catch Paper Mills
- Paper mills, fake affiliations, and citation manipulation are growing problems better caught by automated tools than by individual reviewers.
- Heathers argues editorial-level automation, akin to plagiarism checks, can flag suspicious submissions before they reach review.
