Scott Berry dives into adaptive clinical trial design, tackling the confusion around "spending alpha." He dispels the myth that interim analyses jeopardize type I error control, emphasizing that alpha can be allocated deliberately without sacrificing validity. The discussion covers why a one-sided 2.5% alpha is the operational norm and examines real trials such as SEPSIS-ACT, which successfully used multiple interim analyses. With an emphasis on transparency and prospective planning, Berry offers a clear lens on improving trial efficiency while maintaining statistical rigor.
INSIGHT
Adaptive Designs Improve Power
Adaptive designs can increase power and reduce average sample size.
Distributing alpha across interims enables earlier success detection and patient benefit.
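A small Monte Carlo sketch can make this concrete. The design below is illustrative only (the sample sizes, effect size, and two-look O'Brien-Fleming boundaries of 2.797 and 1.977 are textbook values, not anything from the episode): one interim efficacy look at half the data plus a final look, with the overall one-sided alpha held near 0.025, so that early stopping shrinks the average sample size under a real effect.

```python
# Illustrative sketch of a two-look group sequential design; boundaries and
# sample sizes are textbook values chosen for the example, not from the episode.
import random
from statistics import NormalDist

random.seed(1)

N1, N2 = 50, 100        # interim look at n=50, final analysis at n=100
Z1, Z2 = 2.797, 1.977   # O'Brien-Fleming-style boundaries, overall alpha ~0.025

def run_trial(effect, sims=20000):
    """Simulate one-arm z-tests; return (rejection rate, average sample size)."""
    rejections, total_n = 0, 0
    for _ in range(sims):
        x = [random.gauss(effect, 1) for _ in range(N2)]
        z1 = sum(x[:N1]) / N1 * N1**0.5   # z-statistic at the interim
        if z1 > Z1:                        # early efficacy stop
            rejections += 1
            total_n += N1
            continue
        z2 = sum(x) / N2 * N2**0.5         # z-statistic at the final analysis
        if z2 > Z2:
            rejections += 1
        total_n += N2
    return rejections / sims, total_n / sims

null_alpha, null_asn = run_trial(0.0)    # under the null hypothesis
alt_power, alt_asn = run_trial(0.35)     # under an assumed effect of 0.35 SD
print(f"type I error ~ {null_alpha:.4f} (target <= 0.025)")
print(f"power ~ {alt_power:.3f}, average n ~ {alt_asn:.0f} (fixed design: 100)")
```

Under the alternative, a meaningful fraction of trials stop at n=50, so the average sample size falls well below the fixed design's 100 while overall alpha stays controlled.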
ADVICE
Plan Interim Actions Carefully
Plan and clearly document all interim actions prospectively.
Adjust alpha only when interim actions affect type I error, not for simply looking at data.
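The distinction can be demonstrated by simulation. In this hedged sketch (look schedule and boundaries are invented for illustration), testing for efficacy at an unadjusted nominal z > 1.96 at every look inflates type I error, while a single final test at the same boundary holds it at 2.5%; it is the repeated chance to declare success, not the act of looking, that affects the error rate.

```python
# Illustrative sketch: repeated unadjusted efficacy tests inflate type I error;
# the look schedule and boundary are example values, not from the episode.
import random

random.seed(2)

LOOKS = [25, 50, 75, 100]   # analysis points in a trial of n=100
Z_NOMINAL = 1.960           # unadjusted one-sided 2.5% critical value

def type1(naive_efficacy, sims=20000):
    """Type I error under the null, with or without naive interim efficacy tests."""
    rej = 0
    for _ in range(sims):
        s, stopped = 0.0, False
        for i in range(1, 101):
            s += random.gauss(0, 1)
            # naive interim efficacy tests at the unadjusted boundary
            if naive_efficacy and i in LOOKS[:-1] and s / i**0.5 > Z_NOMINAL:
                rej += 1
                stopped = True
                break
        if not stopped and s / 10 > Z_NOMINAL:   # final test at n=100
            rej += 1
    return rej / sims

alpha_clean = type1(False)   # one final test only: close to 0.025
alpha_naive = type1(True)    # four unadjusted tests: inflated above 0.025
print(f"final look only:        {alpha_clean:.4f}")
print(f"naive unadjusted looks: {alpha_naive:.4f}")
```

Allocating alpha across the looks (as in the group sequential boundaries discussed in the episode) is what restores the overall 0.025.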
ANECDOTE
SEPSIS-ACT’s Extensive Interims
The SEPSIS-ACT trial had over 20 interim analyses and kept final alpha at 0.025.
It shows that many looks at the data are possible without inflating type I error, provided the interim actions taken cannot themselves produce a false-positive efficacy claim.
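A toy simulation illustrates the principle. This is emphatically not SEPSIS-ACT's actual algorithm (the look schedule, futility rule, and sample size below are invented): twenty futility-only interims, where stopping can only accept the null, leave the final-analysis type I error at or below 0.025.

```python
# Hedged sketch: 20 futility-only interim looks under the null hypothesis.
# The design parameters are invented for illustration, not SEPSIS-ACT's.
import random
from statistics import NormalDist

random.seed(3)

Z_FINAL = NormalDist().inv_cdf(0.975)   # ~1.96: one-sided 2.5% final boundary

def type1_with_futility(sims=20000, n=210, looks=20):
    """Type I error when 20 interim looks can only stop the trial for futility."""
    rej = 0
    step = n // (looks + 1)   # futility look every `step` observations
    for _ in range(sims):
        s, stopped = 0.0, False
        for i in range(1, n + 1):
            s += random.gauss(0, 1)
            # futility stop (accepting H0) if the running z trends the wrong way
            if i % step == 0 and i < n and s / i**0.5 < -0.5:
                stopped = True
                break
        if not stopped and s / n**0.5 > Z_FINAL:
            rej += 1
    return rej / sims

alpha_futility = type1_with_futility()
print(f"type I error with 20 futility-only looks: {alpha_futility:.4f}")
```

Because a futility stop can never convert into a success claim, the looks make the design conservative if anything; no alpha adjustment is required for them.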
In this solo episode of "In the Interim...", Scott Berry, President and Senior Statistical Scientist at Berry Consultants, addresses deep-rooted confusion in the field of adaptive clinical trial design surrounding the concept of “spending alpha.” Drawing on practical experience and rigorous statistical foundations, he confronts the prevailing language and myths that conflate interim analysis with loss of type I error control. He clarifies that, with planned and transparent allocation of alpha, interim analyses enable greater power, more efficient designs, and more robust clinical trials, without sacrificing statistical validity. This is a precise and fact-driven examination for those demanding technical clarity, not marketing gloss.
Key Highlights
Explains the basics of hypothesis testing in superiority trials, highlighting why a one-sided 2.5% alpha is the operational standard despite persistent use of two-sided 5% language in clinical protocols.
Refutes the widespread belief that merely reviewing interim data costs available alpha, making clear that type I error is not “penalized” but deliberately allocated, with potential efficiencies in average sample size and, in thoughtfully extended designs, gains in operating characteristics such as power.
Describes real-world examples, including the SEPSIS-ACT (selepressin) trial sponsored by Ferring Pharmaceuticals, which incorporated more than 20 interim analyses while maintaining a pre-specified final alpha of 0.025; underscores the necessity of transparent, prospective design and explicit documentation for regulatory acceptance.
Distinguishes interim actions that require no alpha adjustment, such as futility analyses and response-adaptive randomization, from early efficacy analyses, which must be precisely modeled to preserve type I error.
Challenges terminology like “penalty” and “spending alpha,” asserting that imprecise language fosters misunderstanding and leads to missed opportunities in adaptive trial efficiency.
Emphasizes the crucial role of prospective, simulation-based planning and clear protocol definition at every interim, anchoring statistical practice in measured evidence, not historical convention.
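The point about the one-sided 2.5% convention can be verified directly: a two-sided 5% test and a one-sided 2.5% test share the same upper critical value, so the two ways of stating the standard coincide for a superiority claim in the favorable direction.

```python
# Check that the two-sided 5% and one-sided 2.5% conventions share z ~ 1.96.
from statistics import NormalDist

norm = NormalDist()
z_two_sided_5pct = norm.inv_cdf(1 - 0.05 / 2)   # upper tail of two-sided 5%
z_one_sided_25 = norm.inv_cdf(1 - 0.025)        # one-sided 2.5% critical value
print(z_two_sided_5pct, z_one_sided_25)          # both ~ 1.95996
```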