

In the Interim...
Berry
A podcast on statistical science and clinical trials.
Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.
Episodes

Sep 1, 2025 • 42min
The Mystery of Clinical Trial Simulation
Dr. Scott Berry hosts this episode of "In the Interim…", opening with statistical analysis of elite athletes before focusing on the misunderstood role of clinical trial simulation. He distinguishes simulation as a predictive tool from its use as an in-silico process that enables trial design exploration, iteration, and optimization. Clinical trial simulation provides a mechanism for iterative comparison of multiple designs, driven by ongoing team feedback and evolving trial objectives. Scott stresses that rigid simulation plans are “not productive,” since the most effective designs typically emerge when stakeholders view real trial examples and suggest new design options in real time. The ICECAP trial serves as a key illustration: its final design was shaped by simulation-informed team input across multiple iterations, growing from three tested durations to ten with response-adaptive randomization. Scott also discusses the creation of the FACTS software, highlighting its ability to test alternative designs rapidly, present side-by-side comparisons, and conduct counterfactual analyses, revealing what different trial configurations would have produced using the same simulated datasets.
Key Highlights
- Simulation contrasted as a predictive tool versus an engine for iterative design evaluation.
- The design process is team-driven and iterative, not prescriptive.
- Concrete example trials enhance communication across multidisciplinary teams.
- FACTS software enables design flexibility, in-silico iteration, and comparative scenario analysis.
- The ICECAP trial as an instance of simulation-informed design adaptation.
For more visit: https://www.berryconsultants.com/
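The counterfactual idea described here, running rival designs against the very same simulated datasets, can be sketched in a few lines. This is not the FACTS software; it is a minimal illustration with an invented two-arm trial, assumed effect sizes, and a single hypothetical halfway futility interim:

```python
import numpy as np

rng = np.random.default_rng(2025)

def simulate_trial(n_max, effect, sd=1.0):
    """Generate one complete dataset of control/treatment outcomes (per arm)."""
    control = rng.normal(0.0, sd, n_max)
    treatment = rng.normal(effect, sd, n_max)
    return control, treatment

def z_stat(c, t):
    n = len(c)
    return (t.mean() - c.mean()) / np.sqrt(c.var(ddof=1) / n + t.var(ddof=1) / n)

def fixed_design(c, t):
    """Design A: analyze once at full enrollment."""
    return z_stat(c, t) > 1.96, len(c)

def interim_design(c, t, frac=0.5, futility_z=0.0):
    """Design B: stop for futility at the halfway interim."""
    n_half = len(c) // 2
    if z_stat(c[:n_half], t[:n_half]) < futility_z:
        return False, n_half          # stopped early for futility
    return z_stat(c, t) > 1.96, len(c)

def compare(effect, n_max=200, n_sims=2000):
    """Counterfactual comparison: both designs see identical datasets."""
    wins = {"fixed": 0, "interim": 0}
    n_used = {"fixed": 0, "interim": 0}
    for _ in range(n_sims):
        c, t = simulate_trial(n_max, effect)   # one dataset, two designs
        for name, design in [("fixed", fixed_design), ("interim", interim_design)]:
            success, n = design(c, t)
            wins[name] += success
            n_used[name] += n
    # (power, mean per-arm sample size) for each design
    return {k: (wins[k] / n_sims, n_used[k] / n_sims) for k in wins}

print("null:       ", compare(effect=0.0))
print("alternative:", compare(effect=0.3))
```

Because both designs are evaluated on identical simulated datasets, differences in power and mean sample size reflect the designs themselves rather than simulation noise.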

Aug 25, 2025 • 38min
Discussions on the ICH E20 Draft Guidance
Kert Viele, a partner at Berry Consultants and expert in adaptive clinical trial designs, dives into the ICH E20 draft guidance’s implications. He emphasizes the need for clear justification in adaptive trials and explores the distinctions between Bayesian and frequentist methods. The discussion highlights real-world applications, like the Sepsis ACT trial, and critiques areas of the guidance that may pose challenges. With practical insights on regulatory dialogue, operational biases, and future trends, Kert offers valuable advice for researchers navigating this evolving landscape.

Aug 18, 2025 • 45min
A Discussion with Michael Proschan on Response-Adaptive Randomization
In this episode of "In the Interim…", Dr. Scott Berry and NIH’s Dr. Michael Proschan conduct a detailed discussion from opposing viewpoints on response-adaptive randomization (RAR) in clinical trials. The discussion covers where they agree on the positives and negatives of RAR, and where they disagree on its scientific use.
Key Highlights
- Potential issues of using RAR: temporal trends, unblinding, and reduced statistical efficiency in two-arm trials.
- Potential benefits: improved statistical efficiency in multi-arm trials, depending on the goals (e.g., dose-finding trials).
- Potential unblinding of results in non-blinded trials and the need for operational excellence.
- Ethical and Bayesian perspectives are considered, but the emphasis remains empirical.
For more visit: https://www.berryconsultants.com/
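One common flavor of RAR, allocating patients in proportion to the posterior probability that each arm is best, can be sketched with Beta-Binomial posteriors. The response rates, arm count, fixed control allocation, and patient horizon below are all hypothetical, and this illustrates the general idea rather than any specific trial's algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical true response rates: control (arm 0) plus three experimental arms
true_rates = np.array([0.30, 0.30, 0.40, 0.50])
n_arms = len(true_rates)

successes = np.zeros(n_arms)
failures = np.zeros(n_arms)

def allocation_probs(n_draws=4000):
    """Estimate P(arm is best) from independent Beta(1+s, 1+f) posteriors."""
    draws = rng.beta(1 + successes, 1 + failures, size=(n_draws, n_arms))
    p_best = np.bincount(draws.argmax(axis=1), minlength=n_arms) / n_draws
    # Protect a fixed 25% allocation to control; RAR among experimental arms
    probs = np.empty(n_arms)
    probs[0] = 0.25
    rest = p_best[1:]
    probs[1:] = 0.75 * rest / rest.sum() if rest.sum() > 0 else 0.75 / (n_arms - 1)
    return probs

for patient in range(800):
    probs = allocation_probs()
    arm = rng.choice(n_arms, p=probs)
    outcome = rng.random() < true_rates[arm]   # Bernoulli response
    successes[arm] += outcome
    failures[arm] += 1 - outcome

n_per_arm = successes + failures
print("patients per arm:", n_per_arm.astype(int))
```

Over the simulated enrollment, allocation drifts toward the better-performing arms, which is the multi-arm efficiency argument, while the fixed control fraction and blinded outcome handling speak to the operational concerns raised in the episode.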

Aug 11, 2025 • 34min
STEP Statistical Modeling
In this episode of "In the Interim…", Dr. Scott Berry, Dr. Elizabeth Lorenzi, and Dr. Amy Crawford discuss the STEP platform trial’s statistical methodology for evaluating which acute stroke patients benefit from endovascular therapy (EVT) and which do not. The discussion critiques the inadequacy of traditional clinical trials powered for a single population, since the goal of the trial is to identify who benefits, not whether the entire population has a net benefit. The team walks through the development and simulation of a Bayesian change point model addressing heterogeneous treatment responses across the NIH Stroke Scale. The adaptive platform design leverages scheduled interim analyses to draw timely, data-driven conclusions about patient subgroups, improving trial efficiency and relevance. The episode also previews scaling to two-dimensional modeling, incorporating both stroke severity and time since last known well, and emphasizes ongoing clinical trial simulation and close integration between clinicians and statisticians throughout trial design and execution.
Key Highlights
- STEP platform master protocol and the NIH StrokeNet collaborative infrastructure.
- Clinical rationale for Bayesian change point modeling of the effect of EVT across patients.
- Shift from single to dual change point models to reflect regions of equivalence.
- Development of custom C code and MCMC samplers due to the limits of standard tools.
- Interim analyses direct adaptive enrollment and define actionable conclusions.
- Future extensions to multidimensional change point curve modeling.
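The single-change-point idea can be illustrated with a toy grid posterior. Everything here is hypothetical (the severity scale, effect size, sample size, and noise model), and profiling the treatment effect by its MLE at each candidate cut is a crude stand-in for the full Bayesian computation the team describes, which required custom C code and MCMC samplers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: treatment effect "switches on" above a severity threshold
severity = rng.integers(0, 25, size=400)        # NIHSS-like score per patient
true_cut = 10
treated = rng.random(400) < 0.5                 # 1:1 randomization
effect = 0.8 * (severity >= true_cut)           # benefit only above the cut
y = effect * treated + rng.normal(0, 1, 400)    # observed outcome

# Grid posterior over the change point c under a single-change-point model:
# y ~ Normal(delta * treated * 1[severity >= c], 1), flat prior over c,
# with delta profiled at its MLE for each c (not a true marginal likelihood)
log_lik = []
for c in range(1, 25):
    x = (treated & (severity >= c)).astype(float)
    delta_hat = (x @ y) / max(x.sum(), 1.0)
    resid = y - delta_hat * x
    log_lik.append(-0.5 * np.sum(resid ** 2))
log_lik = np.array(log_lik)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()
print("posterior mode for change point:", int(np.argmax(post)) + 1)
```

With a strong simulated effect the grid posterior concentrates near the true threshold, which is the mechanism the trial uses, at far larger scale, to decide which severity subgroups continue enrolling.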

Aug 4, 2025 • 44min
Bayesian Approach in Clinical Trials
This episode of "In the Interim…" features Dr. Scott Berry, Dr. Kert Viele, and Dr. Melanie Quintana of Berry Consultants dissecting the technical and operational landscape of Bayesian statistics in clinical trial design. The episode explains what Bayesian statistics is, examines the impact of informative and non-informative priors, and clarifies when and why Bayesian approaches surpass frequentist analyses, especially in adaptive, platform, and rare disease trial settings. The discussion directly challenges the misconception that Bayesian methods “lower the bar,” presenting evidence that they often require broader data synthesis and can raise evidentiary standards.
Key regulatory developments at FDA and EMA are reviewed, with attention to updated guidance and increased adoption. Case studies illustrate Bayesian methods in practice, including the prospectively combined phase 2 and 3 analysis for REBYOTA approval; hierarchical modeling in GNE myopathy; shared controls and endpoint integration in the HEALEY ALS Platform Trial; and robust subgroup borrowing in the ROAR basket trial. The team also addresses technical challenges such as multiplicity, subgroup analysis, complexity in endpoint modeling, and appropriate strategies for blending Bayesian and frequentist approaches for maximum regulatory and scientific clarity.
Key Highlights
- Clear explanation and real-world examples of Bayesian analysis in clinical trials.
- Theoretical and practical distinctions from frequentist methods.
- Practical breakdown of control sharing, endpoint integration, and subgroup borrowing.
- Regulatory position and the increasing acceptance of Bayesian trial designs and analyses.
- Case examples: REBYOTA, GNE myopathy, HEALEY ALS Platform Trial, ROAR basket trial.
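The informative-versus-non-informative prior discussion can be made concrete with a conjugate Beta-Binomial example. The response data and both priors below are invented, chosen only to show how an informative prior shifts the posterior and the probability statements a Bayesian analysis reports:

```python
import numpy as np

# Hypothetical single-arm data: 14 responders out of 40 patients
s, n = 14, 40

# Non-informative Beta(1, 1) prior vs an informative Beta(8, 12) prior
# (the informative prior encodes roughly a 40% rate from hypothetical earlier data)
for label, (a, b) in {"non-informative": (1, 1), "informative": (8, 12)}.items():
    post_a, post_b = a + s, b + n - s            # conjugate Beta update
    mean = post_a / (post_a + post_b)
    # P(response rate > 0.30) via Monte Carlo draws from the Beta posterior
    draws = np.random.default_rng(0).beta(post_a, post_b, 100_000)
    print(f"{label}: posterior mean {mean:.3f}, "
          f"P(rate > 0.30) = {(draws > 0.30).mean():.3f}")
```

The output is a graded probability of benefit rather than a binary significance verdict, which is the contrast the panel draws with frequentist reporting.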

Jul 28, 2025 • 39min
The Time Machine
Dr. Scott Berry and Dr. Kert Viele discuss the origins and implementation of the “time machine” modeling approach, beginning with sports analytics and progressing to adaptive platform clinical trials. The episode focuses on how techniques for comparing athletes across eras translate into methodology for platform trials.
Key Highlights
- Sports analytics as foundation: early work modeling athlete comparisons across eras using bridging methodologies.
- Platform trial application: the time machine model in I-SPY 2 enabled efficient control allocation through overlapping arms over extended trial periods.
- Core modeling principles: additive treatment effect assumptions and the necessity of sufficient temporal overlap for reliable era comparisons.
- Statistical implementation: approaches include categorical era adjustment and Bayesian smoothing splines for modeling change over time.
- Limitations and disease specificity: in conditions with rapid clinical or epidemiologic change, such as COVID-19, non-concurrent controls are avoided due to the high risk of era-by-treatment interaction.
- Regulatory and methodological distinction: the model leverages within-trial overlapping data collected under a unified protocol, contrasting sharply with external or historical controls.
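The categorical era adjustment can be sketched as a linear model with era dummies alongside treatment dummies, so arms that overlap in time bridge the comparison while the secular trend is absorbed. The era effects, treatment effects, and sample sizes below are invented, and ordinary least squares stands in for the Bayesian model (which might instead use smoothing splines or hierarchical priors over eras):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a platform trial: arm A runs in eras 0-1, arm B in eras 1-2,
# control runs throughout, and outcomes drift upward across eras.
era_effects = np.array([0.0, 0.4, 0.8])          # secular trend by era
treat_effects = {"control": 0.0, "A": 0.5, "B": 0.3}
arms_by_era = {0: ["control", "A"], 1: ["control", "A", "B"], 2: ["control", "B"]}

rows = []
for era, active_arms in arms_by_era.items():
    for arm in active_arms:
        for _ in range(100):                     # 100 patients per arm per era
            y_i = era_effects[era] + treat_effects[arm] + rng.normal(0, 1)
            rows.append((era, arm, y_i))

eras = np.array([r[0] for r in rows])
arms = np.array([r[1] for r in rows])
y = np.array([r[2] for r in rows])

# Design matrix: intercept + era dummies (era 0 reference) + treatment dummies
X = np.column_stack([
    np.ones(len(y)),
    (eras == 1).astype(float),
    (eras == 2).astype(float),
    (arms == "A").astype(float),
    (arms == "B").astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect of A: {beta[3]:.2f} (true 0.5)")
print(f"estimated effect of B: {beta[4]:.2f} (true 0.3)")
```

Because every era contains concurrent controls, the era dummies soak up the drift and the treatment estimates stay close to truth, which is exactly what fails when temporal overlap is too thin or an era-by-treatment interaction appears.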

Jul 21, 2025 • 26min
The Legend of I-SPY 2 - Part B
In this episode, Dr. Don Berry and Dr. Scott Berry provide an in-depth account of I-SPY 2, focusing on the trial’s use of the “time machine” methodology, a Bayesian solution allowing bridging across arms to inform ongoing analyses. The discussion details how predictive probabilities and adaptive randomization shaped pivotal decisions, including the handling of Pertuzumab’s approval and Neratinib’s subtype-specific performance. This episode also documents the technical and operational contributions of Laura Esserman, Anna Barker, Janet Woodcock, Meredith Buxton, and Ashish Sanil, clarifying the roles that enabled the platform’s success and broader impact on subsequent adaptive trials.
Key Highlights
- Introduction of the “time machine” concept, enabling valid comparison between experimental and control arms even when enrollment periods differ: a pragmatic solution, originally used in sports examples, for evolving platform trials as treatment and control arms change.
- Ongoing trial conduct driven by a Bayesian adaptive algorithm, developed and maintained by Berry Consultants statisticians, which computes predictive probabilities to guide arm graduation, futility, and real-time adjustment of randomization probabilities.
- Neratinib serves as a case study in subtype-specific adaptive randomization: the platform set randomization probability to zero in subtypes without signal, while effective subtypes saw increased randomization and advanced to graduation.
- I-SPY 2’s methodologies shaped subsequent adaptive platform trials (GBM AGILE, Precision Promise, COVID-19 ACTIV networks), with regulatory acceptance reflected in FDA guidance and Janet Woodcock’s public recognition of adaptive randomization as “adequate and well controlled” for registration studies.
- Specific recognition: Laura Esserman (trial leadership), Anna Barker (funding and strategic input), Janet Woodcock (FDA guidance and adaptive methods support), Meredith Buxton (logistics; GCAR leadership), and Ashish Sanil (Berry Consultants; ongoing algorithm implementation).

Jul 14, 2025 • 40min
The Legend of I-SPY 2 - Part A
In Episode 20 of Berry’s "In the Interim..." Podcast, The Legend of I-SPY 2 - Part A, Dr. Don Berry and Dr. Scott Berry discuss the origins and design of the I-SPY trials. Their conversation explains the inefficiency of traditional adjuvant breast cancer trials and details the shift to the neoadjuvant approach, where tumor response can be observed prior to surgery. I-SPY 1 served as a proof-of-concept using MRI for probabilistic prediction of pathologic complete response (pCR). I-SPY 2 represents a major advancement in clinical trial science, introducing a multi-arm bandit methodology, integration of biomarker-driven subtypes and signatures, and a structured funding model that transitioned from philanthropy to “pay to play” industry support.

Jul 7, 2025 • 41min
The STEP Platform with Dr. Eva Mistry and Dr. Jordan Elm
This episode of "In the Interim..." features an in-depth discussion of the StrokeNet Thrombectomy Endovascular Platform (STEP), a multi-domain, multi-factorial, adaptive platform trial for acute stroke, anchored in the NIH StrokeNet network. Guests Dr. Eva Mistry (University of Cincinnati) and Dr. Jordan Elm (Medical University of South Carolina) join us to explain how STEP enables simultaneous investigation of multiple treatment strategies in patients with acute ischemic stroke. The conversation details the use of a master protocol, the integration of industry partners through the NIH Other Transaction Authority (OTA) mechanism, and innovative statistical designs to efficiently identify improved treatment strategies.
Key Highlights
- STEP utilizes a master protocol within NIH StrokeNet, unifying eligibility, procedures, and data collection across all study domains.
- The platform supports multiple research questions.
- In an initial domain, STEP applies a statistical change-point model to empirically estimate the thresholds where EVT is effective, neutral, or potentially deleterious, based on medium vessel occlusions and baseline clinical status.
- Protocols may be adapted in response to new external data, including pausing and revising enrollment in specific subpopulations when emerging science warrants.
- Shared control groups are used wherever applicable, improving trial efficiency by reducing the number of patients allocated to control arms and allowing eligible patients to contribute to multiple domains when protocol and scientific rationale permit.

Jun 30, 2025 • 39min
A Statistician reads JAMA
Dr. Scott Berry gives a statistician’s reading of a trial result published in JAMA, the FAIR-HF2 clinical trial. He interrogates the frequentist paradigm and its focus on the binary outcome of the primary hypothesis test. He scrutinizes the Hochberg multiplicity adjustment, challenges the prevailing disregard for accumulated scientific evidence, and contrasts the limitations of a black-and-white reading of a trial with over 1,000 patients and six years of enrollment. He then sketches what a Bayesian approach, grounded in practical trial interpretation and evidence integration, would look like. The episode argues that current norms in clinical trial analysis, created by dogmatic statistical views, can obscure or even mislead about meaningful findings and limit the utility of costly, complex studies.
Key Highlights
- FAIR-HF2 randomized 1,105 patients with heart failure and iron deficiency to intravenous ferric carboxymaltose or placebo across 70 sites, with three pre-specified co-primary analyses.
- The study relied on the Hochberg procedure to control family-wise error across the analyses: (1) time to first cardiovascular death or heart failure hospitalization; (2) total heart failure hospitalizations; (3) time to first event in a highly iron-deficient subgroup.
- Results showed a favorable hazard ratio (0.79) and a p-value below 0.05 for the first primary composite, but statistical significance was nullified under the Hochberg multiplicity criteria because the other endpoints failed their thresholds.
- Berry challenges the reduction of trial outcomes to discrete “significant” or “not significant” designations, critiquing a scientific and statistical culture that ignores gradient evidence in favor of black-and-white outcomes.
- He presents the likelihood principle and Bayesian analysis as superior frameworks, quantifying a 98% posterior probability of benefit, and contextualizes the findings with prior evidence from the HEART-FID, IRONMAN, and AFFIRM-AHF trials and published meta-analyses, arguing that isolated negative conclusions defy the cumulative data.
- The discussion extends to the inefficiency of fixed trial designs, the missed value of adaptive methodologies, and the wastefulness of requiring full-scale repeat trials analyzed in isolation when evidence already points strongly to a beneficial effect.
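The two analyses at the center of the episode can both be sketched briefly: the Hochberg step-up procedure that nullified the headline result, and a flat-prior normal approximation on the log hazard ratio that recovers a posterior probability of benefit near the 98% figure cited. The three p-values and the standard error below are illustrative assumptions; only the 0.79 hazard ratio comes from the episode:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hochberg(p_values, alpha=0.05):
    """Hochberg step-up: find the largest k (ascending order) with
    p_(k) <= alpha / (m - k + 1) and reject hypotheses 1..k."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for k in range(m, 0, -1):
        if p_values[order[k - 1]] <= alpha / (m - k + 1):
            for i in order[:k]:
                rejected[i] = True
            break
    return rejected

# Hypothetical p-values for the three co-primary analyses (illustrative only):
# the first clears 0.05 on its own, but none clears its Hochberg threshold
p = [0.042, 0.11, 0.07]
print("Hochberg rejections at alpha=0.05:", hochberg(p))  # all False

# Flat-prior Bayesian view of the first endpoint: the posterior for log(HR)
# is approximately Normal(log(0.79), se^2); the se is assumed for illustration
log_hr, se = math.log(0.79), 0.115
p_benefit = norm_cdf(-log_hr / se)   # P(HR < 1 | data), ~0.98
print(f"posterior probability of benefit: {p_benefit:.3f}")
```

The contrast is the episode's point in miniature: the same data that fail a family-wise threshold still carry a high graded probability of benefit under a Bayesian reading.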


