<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/in-the-interim" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>In the Interim...</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/in-the-interim</itunes:new-feed-url>
    <description>A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.</description>
    <copyright>© 2025 Berry Consultants</copyright>
    <podcast:guid>6de26b3a-759f-5404-bd2d-f0ebbd0747be</podcast:guid>
    <podcast:locked owner="info@berryconsultants.net">no</podcast:locked>
    <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
    <language>en</language>
    <pubDate>Mon, 04 May 2026 06:00:21 -0500</pubDate>
    <lastBuildDate>Mon, 04 May 2026 06:02:40 -0500</lastBuildDate>
    <link>https://berryconsultants.com</link>
    <image>
      <url>https://img.transistorcdn.com/4kEBoJ9pC_kLNSr6AX2uD_qdxx8BJ7dMddtvubROpQ0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jM2M0/ZDE2YzA1N2FhNjkx/NDk1NDczNjYzM2E5/NjlmYS5wbmc.jpg</url>
      <title>In the Interim...</title>
      <link>https://berryconsultants.com</link>
    </image>
    <itunes:category text="Science">
      <itunes:category text="Mathematics"/>
    </itunes:category>
    <itunes:category text="Health &amp; Fitness">
      <itunes:category text="Medicine"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Berry</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/4kEBoJ9pC_kLNSr6AX2uD_qdxx8BJ7dMddtvubROpQ0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jM2M0/ZDE2YzA1N2FhNjkx/NDk1NDczNjYzM2E5/NjlmYS5wbmc.jpg"/>
    <itunes:summary>A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.</itunes:summary>
    <itunes:subtitle>A podcast on statistical science and clinical trials.</itunes:subtitle>
    <itunes:keywords>statistical science, clinical trials</itunes:keywords>
    <itunes:owner>
      <itunes:name>Berry</itunes:name>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>AI @ Berry</title>
      <itunes:episode>60</itunes:episode>
      <podcast:episode>60</podcast:episode>
      <itunes:title>AI @ Berry</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">997f8a7d-12a2-42a0-98b2-21a6d288c0b1</guid>
      <link>https://share.transistor.fm/s/7d13bb2d</link>
      <description>
<![CDATA[<p>In the 60th episode of “In the Interim…”, Dr. Scott Berry, Dr. Nick Berry, and Dr. Joe Marion discuss how Berry Consultants uses AI in clinical trial design and software development. The conversation addresses current applications, limitations, implications for productivity, and the ongoing need for human expertise in clinical trial design. The team examines both promising use cases and the risks associated with security, compliance, and AI-generated statistical work.</p><p><strong>Key Highlights</strong></p><ul><li>AI is used to develop user interfaces and code modules, notably expediting tasks like R Shiny app development and software prototyping.</li><li>Statistical coding for complex modeling and simulation—such as numerical integration and predictive probability calculations—remains unreliable when delegated to AI and still requires direct oversight and manual review.</li><li>Attention to security and confidentiality is central; Berry prohibits the use of client-sensitive or patient data within AI tools.</li><li>Generative AI assists with drafting and editing documents, but the output tends to be generic and sometimes imprecise, requiring expert editorial input before use.</li><li>While the team embraces AI to improve efficiency, the discussion is critical of current AI hype, especially around black-box modeling, and pushes back against the perception that current AI can replace domain-specific statistical design or strategic judgment.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>In the 60th episode of “In the Interim…”, Dr. Scott Berry, Dr. Nick Berry, and Dr. Joe Marion discuss how Berry Consultants uses AI in clinical trial design and software development. The conversation addresses current applications, limitations, implications for productivity, and the ongoing need for human expertise in clinical trial design. The team examines both promising use cases and the risks associated with security, compliance, and AI-generated statistical work.</p><p><strong>Key Highlights</strong></p><ul><li>AI is used to develop user interfaces and code modules, notably expediting tasks like R Shiny app development and software prototyping.</li><li>Statistical coding for complex modeling and simulation—such as numerical integration and predictive probability calculations—remains unreliable when delegated to AI and still requires direct oversight and manual review.</li><li>Attention to security and confidentiality is central; Berry prohibits the use of client-sensitive or patient data within AI tools.</li><li>Generative AI assists with drafting and editing documents, but the output tends to be generic and sometimes imprecise, requiring expert editorial input before use.</li><li>While the team embraces AI to improve efficiency, the discussion is critical of current AI hype, especially around black-box modeling, and pushes back against the perception that current AI can replace domain-specific statistical design or strategic judgment.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 04 May 2026 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/7d13bb2d/04818d00.mp3" length="49090095" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3066</itunes:duration>
      <itunes:summary>
<![CDATA[<p>In the 60th episode of “In the Interim…”, Dr. Scott Berry, Dr. Nick Berry, and Dr. Joe Marion discuss how Berry Consultants uses AI in clinical trial design and software development. The conversation addresses current applications, limitations, implications for productivity, and the ongoing need for human expertise in clinical trial design. The team examines both promising use cases and the risks associated with security, compliance, and AI-generated statistical work.</p><p><strong>Key Highlights</strong></p><ul><li>AI is used to develop user interfaces and code modules, notably expediting tasks like R Shiny app development and software prototyping.</li><li>Statistical coding for complex modeling and simulation—such as numerical integration and predictive probability calculations—remains unreliable when delegated to AI and still requires direct oversight and manual review.</li><li>Attention to security and confidentiality is central; Berry prohibits the use of client-sensitive or patient data within AI tools.</li><li>Generative AI assists with drafting and editing documents, but the output tends to be generic and sometimes imprecise, requiring expert editorial input before use.</li><li>While the team embraces AI to improve efficiency, the discussion is critical of current AI hype, especially around black-box modeling, and pushes back against the perception that current AI can replace domain-specific statistical design or strategic judgment.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7d13bb2d/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/7d13bb2d/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Drug Development and Sports: The 10-Run Rule and Futility</title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>Drug Development and Sports: The 10-Run Rule and Futility</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c22276cd-722e-4621-84a8-8fcbda77df1d</guid>
      <link>https://share.transistor.fm/s/2bbc6faf</link>
      <description>
<![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Nick Berry investigate how futility in clinical trials and stopping rules in sports illuminate very similar decision problems, albeit with very different consequences. Drawing on baseball’s 10-run rule and tournament cuts in golf, the discussion confronts traditional and Bayesian strategies for interim decisions. The episode explains why simulation, not historical trial review, provides the empirical backbone for futility boundaries in clinical trials, and details the mechanics and consequences of aggressive stopping criteria. Using the Biogen aducanumab Alzheimer’s trials, the conversation exposes how a futility rule based on 20% predictive probability halted trials even when meaningful probability of success remained. Scott and Nick address the influence of ethical considerations, cost, regulatory priorities, and statistical rigor, and contrast the strengths of Bayesian predictive probability with those of conditional power.</p><p><strong>Key Highlights</strong></p><ul><li>Dissects sports futility rules (10-run rule, golf cuts, Bill James heuristic) and their application to clinical trial design</li><li>Argues for prospective simulation to define adaptive futility thresholds</li><li>Explains how Bayesian predictive probability provides a more robust framework than conditional power for interim adaptive decisions</li><li>Details how aggressive futility criteria may prematurely stop trials and risk missing beneficial treatments, as in the aducanumab case</li><li>Explores the intersection of ethics, patient safety, operational efficiency, regulatory standards, and trial cost</li></ul>]]>
      </description>
      <content:encoded>
<![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Nick Berry investigate how futility in clinical trials and stopping rules in sports illuminate very similar decision problems, albeit with very different consequences. Drawing on baseball’s 10-run rule and tournament cuts in golf, the discussion confronts traditional and Bayesian strategies for interim decisions. The episode explains why simulation, not historical trial review, provides the empirical backbone for futility boundaries in clinical trials, and details the mechanics and consequences of aggressive stopping criteria. Using the Biogen aducanumab Alzheimer’s trials, the conversation exposes how a futility rule based on 20% predictive probability halted trials even when meaningful probability of success remained. Scott and Nick address the influence of ethical considerations, cost, regulatory priorities, and statistical rigor, and contrast the strengths of Bayesian predictive probability with those of conditional power.</p><p><strong>Key Highlights</strong></p><ul><li>Dissects sports futility rules (10-run rule, golf cuts, Bill James heuristic) and their application to clinical trial design</li><li>Argues for prospective simulation to define adaptive futility thresholds</li><li>Explains how Bayesian predictive probability provides a more robust framework than conditional power for interim adaptive decisions</li><li>Details how aggressive futility criteria may prematurely stop trials and risk missing beneficial treatments, as in the aducanumab case</li><li>Explores the intersection of ethics, patient safety, operational efficiency, regulatory standards, and trial cost</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 27 Apr 2026 06:02:22 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/2bbc6faf/f7790ece.mp3" length="49944033" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3119</itunes:duration>
      <itunes:summary>
<![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Nick Berry investigate how futility in clinical trials and stopping rules in sports illuminate very similar decision problems, albeit with very different consequences. Drawing on baseball’s 10-run rule and tournament cuts in golf, the discussion confronts traditional and Bayesian strategies for interim decisions. The episode explains why simulation, not historical trial review, provides the empirical backbone for futility boundaries in clinical trials, and details the mechanics and consequences of aggressive stopping criteria. Using the Biogen aducanumab Alzheimer’s trials, the conversation exposes how a futility rule based on 20% predictive probability halted trials even when meaningful probability of success remained. Scott and Nick address the influence of ethical considerations, cost, regulatory priorities, and statistical rigor, and contrast the strengths of Bayesian predictive probability with those of conditional power.</p><p><strong>Key Highlights</strong></p><ul><li>Dissects sports futility rules (10-run rule, golf cuts, Bill James heuristic) and their application to clinical trial design</li><li>Argues for prospective simulation to define adaptive futility thresholds</li><li>Explains how Bayesian predictive probability provides a more robust framework than conditional power for interim adaptive decisions</li><li>Details how aggressive futility criteria may prematurely stop trials and risk missing beneficial treatments, as in the aducanumab case</li><li>Explores the intersection of ethics, patient safety, operational efficiency, regulatory standards, and trial cost</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/2bbc6faf/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/2bbc6faf/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>ICH-E20, Regulators, and False Choices</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>ICH-E20, Regulators, and False Choices</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5c304423-7bf2-4f49-9e10-01459935d738</guid>
      <link>https://share.transistor.fm/s/c3a05329</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry undertakes a detailed, methodical critique of ICH-E20 draft guidance language as applied to adaptive clinical trial design. Focusing on an innocuous but corruptible paragraph in Section 3.1, Scott scrutinizes the logic behind regulatory reluctance to appreciate multiple or complex adaptations in confirmatory trials. Drawing on extensive experience, he highlights how such restrictive interpretations do not reflect practical development realities, instead setting up “false choices” where alternative designs desired by regulators are infeasible. Through operational scenarios—including the SEPSIS-ACT trial, an enrichment design, and sample size re-estimation examples—Scott illustrates the empirical benefits of seamless and multi-adaptive trials for sponsors, patients, and regulators. Technical discussion addresses misconceptions about complexity and bias and stresses the value of presenting realistic alternatives when engaging with regulatory authorities. The episode ultimately encourages a more nuanced dialogue to advance efficient and scientifically robust clinical trials.</p><p><strong>Key Highlights</strong></p><ul><li>Discussion of ICH-E20 section 3.1 guidance and its operational impact on adaptive designs.</li><li>Dissection of “false choice” dilemmas in regulatory interactions, referencing real adaptive trial submissions.</li><li>Case-based examples: SEPSIS-ACT, enrichment, and sample size adaptation trials.</li><li>Highlighting myths regarding bias and operational burden from multiple interim analyses.</li><li>Emphasis on practical strategies for more effective regulatory communication about adaptive trials and realistic alternatives.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry undertakes a detailed, methodical critique of ICH-E20 draft guidance language as applied to adaptive clinical trial design. Focusing on an innocuous but corruptible paragraph in Section 3.1, Scott scrutinizes the logic behind regulatory reluctance to appreciate multiple or complex adaptations in confirmatory trials. Drawing on extensive experience, he highlights how such restrictive interpretations do not reflect practical development realities, instead setting up “false choices” where alternative designs desired by regulators are infeasible. Through operational scenarios—including the SEPSIS-ACT trial, an enrichment design, and sample size re-estimation examples—Scott illustrates the empirical benefits of seamless and multi-adaptive trials for sponsors, patients, and regulators. Technical discussion addresses misconceptions about complexity and bias and stresses the value of presenting realistic alternatives when engaging with regulatory authorities. The episode ultimately encourages a more nuanced dialogue to advance efficient and scientifically robust clinical trials.</p><p><strong>Key Highlights</strong></p><ul><li>Discussion of ICH-E20 section 3.1 guidance and its operational impact on adaptive designs.</li><li>Dissection of “false choice” dilemmas in regulatory interactions, referencing real adaptive trial submissions.</li><li>Case-based examples: SEPSIS-ACT, enrichment, and sample size adaptation trials.</li><li>Highlighting myths regarding bias and operational burden from multiple interim analyses.</li><li>Emphasis on practical strategies for more effective regulatory communication about adaptive trials and realistic alternatives.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 20 Apr 2026 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/c3a05329/cfbdc6d1.mp3" length="39436102" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2462</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry undertakes a detailed, methodical critique of ICH-E20 draft guidance language as applied to adaptive clinical trial design. Focusing on an innocuous but corruptible paragraph in Section 3.1, Scott scrutinizes the logic behind regulatory reluctance to appreciate multiple or complex adaptations in confirmatory trials. Drawing on extensive experience, he highlights how such restrictive interpretations do not reflect practical development realities, instead setting up “false choices” where alternative designs desired by regulators are infeasible. Through operational scenarios—including the SEPSIS-ACT trial, an enrichment design, and sample size re-estimation examples—Scott illustrates the empirical benefits of seamless and multi-adaptive trials for sponsors, patients, and regulators. Technical discussion addresses misconceptions about complexity and bias and stresses the value of presenting realistic alternatives when engaging with regulatory authorities. The episode ultimately encourages a more nuanced dialogue to advance efficient and scientifically robust clinical trials.</p><p><strong>Key Highlights</strong></p><ul><li>Discussion of ICH-E20 section 3.1 guidance and its operational impact on adaptive designs.</li><li>Dissection of “false choice” dilemmas in regulatory interactions, referencing real adaptive trial submissions.</li><li>Case-based examples: SEPSIS-ACT, enrichment, and sample size adaptation trials.</li><li>Highlighting myths regarding bias and operational burden from multiple interim analyses.</li><li>Emphasis on practical strategies for more effective regulatory communication about adaptive trials and realistic alternatives.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c3a05329/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/c3a05329/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>PANTHER: A Phase 2 International Platform Trial in ARDS</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>PANTHER: A Phase 2 International Platform Trial in ARDS</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">613bfada-9a44-421e-85f3-3c2b66e4585e</guid>
      <link>https://share.transistor.fm/s/1d8fc549</link>
      <description>
<![CDATA[<p>In this episode of "In the Interim…" Dr. Scott Berry is joined by Professors Victoria Cornelius, Danny McAuley, and Anthony Gordon for a technical review of the PANTHER trial—an international, Phase 2 adaptive platform evaluating pharmacologic interventions for ARDS. The trial is open-label, as discussed in the episode. The primary endpoint is 28-day organ support-free days (death as -1, survivors 0–28 days), analyzed with a Bayesian proportional odds model. PANTHER uses stratification by hyper- and hypoinflammatory subphenotypes, with fixed, equal randomization within each stratum. Analyses for treatments are separated by stratum, reflecting the potential for differential treatment effects. Quarterly interim analyses allow early stopping by stratum for efficacy or futility. Content includes explicit discussion of infrastructure: rapid device deployment, centralized data for trial and future biological discovery, and governance challenges in multinational collaboration. Funding is provided by NIHR (UK), US Department of Defense, CIHR (Canada), NHMRC and MRFF (Australia), HRB (Ireland), and additional support from Germany and Japan.
PANTHER is positioned to streamline Phase 2 critical care drug testing and facilitate graduation to larger platforms such as REMAP-CAP, with potential to expedite pharmaceutical evaluation and accelerate ARDS therapeutic development.</p><p><strong>Key Highlights</strong></p><ul><li>Real-time phenotyping (Randox device) to stratify ARDS patients.</li><li>Separate Bayesian analyses by phenotype stratum.</li><li>Open-label, fixed randomization within stratum.</li><li>28-day organ support-free days as a composite endpoint.</li><li>Quarterly interim analyses enable early dropping or graduation of arms by strata.</li><li>Central data resource and biosample collection for future research.</li><li>Operational, funding, and device logistics for global trial deployment.</li><li>Transition of Phase 2 results to established Phase 3 platforms (e.g., REMAP-CAP).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>In this episode of "In the Interim…" Dr. Scott Berry is joined by Professors Victoria Cornelius, Danny McAuley, and Anthony Gordon for a technical review of the PANTHER trial—an international, Phase 2 adaptive platform evaluating pharmacologic interventions for ARDS. The trial is open-label, as discussed in the episode. The primary endpoint is 28-day organ support-free days (death as -1, survivors 0–28 days), analyzed with a Bayesian proportional odds model. PANTHER uses stratification by hyper- and hypoinflammatory subphenotypes, with fixed, equal randomization within each stratum. Analyses for treatments are separated by stratum, reflecting the potential for differential treatment effects. Quarterly interim analyses allow early stopping by stratum for efficacy or futility. Content includes explicit discussion of infrastructure: rapid device deployment, centralized data for trial and future biological discovery, and governance challenges in multinational collaboration. Funding is provided by NIHR (UK), US Department of Defense, CIHR (Canada), NHMRC and MRFF (Australia), HRB (Ireland), and additional support from Germany and Japan.
PANTHER is positioned to streamline Phase 2 critical care drug testing and facilitate graduation to larger platforms such as REMAP-CAP, with potential to expedite pharmaceutical evaluation and accelerate ARDS therapeutic development.</p><p><strong>Key Highlights</strong></p><ul><li>Real-time phenotyping (Randox device) to stratify ARDS patients.</li><li>Separate Bayesian analyses by phenotype stratum.</li><li>Open-label, fixed randomization within stratum.</li><li>28-day organ support-free days as a composite endpoint.</li><li>Quarterly interim analyses enable early dropping or graduation of arms by strata.</li><li>Central data resource and biosample collection for future research.</li><li>Operational, funding, and device logistics for global trial deployment.</li><li>Transition of Phase 2 results to established Phase 3 platforms (e.g., REMAP-CAP).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 13 Apr 2026 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/1d8fc549/844f6925.mp3" length="50627394" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3162</itunes:duration>
      <itunes:summary>
<![CDATA[<p>In this episode of "In the Interim…" Dr. Scott Berry is joined by Professors Victoria Cornelius, Danny McAuley, and Anthony Gordon for a technical review of the PANTHER trial—an international, Phase 2 adaptive platform evaluating pharmacologic interventions for ARDS. The trial is open-label, as discussed in the episode. The primary endpoint is 28-day organ support-free days (death as -1, survivors 0–28 days), analyzed with a Bayesian proportional odds model. PANTHER uses stratification by hyper- and hypoinflammatory subphenotypes, with fixed, equal randomization within each stratum. Analyses for treatments are separated by stratum, reflecting the potential for differential treatment effects. Quarterly interim analyses allow early stopping by stratum for efficacy or futility. Content includes explicit discussion of infrastructure: rapid device deployment, centralized data for trial and future biological discovery, and governance challenges in multinational collaboration. Funding is provided by NIHR (UK), US Department of Defense, CIHR (Canada), NHMRC and MRFF (Australia), HRB (Ireland), and additional support from Germany and Japan.
PANTHER is positioned to streamline Phase 2 critical care drug testing and facilitate graduation to larger platforms such as REMAP-CAP, with potential to expedite pharmaceutical evaluation and accelerate ARDS therapeutic development.</p><p><strong>Key Highlights</strong></p><ul><li>Real-time phenotyping (Randox device) to stratify ARDS patients.</li><li>Separate Bayesian analyses by phenotype stratum.</li><li>Open-label, fixed randomization within stratum.</li><li>28-day organ support-free days as a composite endpoint.</li><li>Quarterly interim analyses enable early dropping or graduation of arms by strata.</li><li>Central data resource and biosample collection for future research.</li><li>Operational, funding, and device logistics for global trial deployment.</li><li>Transition of Phase 2 results to established Phase 3 platforms (e.g., REMAP-CAP).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1d8fc549/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/1d8fc549/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Byron Gajewski: KUMC, Innovative Trial Designs, the HOBIT Trial</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>A Visit with Byron Gajewski: KUMC, Innovative Trial Designs, the HOBIT Trial</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">74ebf6e2-f922-4725-9ca5-a0a9b220711a</guid>
      <link>https://share.transistor.fm/s/e0d93ef4</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry connects with Dr. Byron Gajewski, professor of biostatistics and data science at the University of Kansas Medical Center (KUMC), for a detailed discussion on the design, simulation, and operational realities of Bayesian adaptive clinical trials in academic environments. Gajewski discusses his academic background, his training at Texas A&amp;M, and his progressive adoption of Bayesian principles, grounded in their practical advantages in complex data settings. The conversation highlights KUMC’s Fixed and Adaptive Clinical Trial Simulator Working Group, which uses FACTS for faculty, staff, and student collaboration, enabling practical simulation, trial protocol development, and in-house applied statistical training. The PAIN-CONTRoLS Trial serves as a practical example of multi-arm Bayesian adaptive design, using response-adaptive randomization for comparative effectiveness in neuropathy research. The NIH-funded HOBIT trial is examined in detail: multi-arm structure, adaptive allocation among investigational arms, fixed control randomization, group-sequential interim analyses, and sliding dichotomy methodology for the Glasgow Outcome Scale Extended.
The discussion stresses a shift to probabilistic, evidence-driven interpretation and reporting, shaping operational choices and academic culture for both investigators and trainees.</p><p><strong>Key Highlights</strong></p><ul><li>Gajewski describes how practical challenges in real-world problems catalyzed his transition to Bayesian modeling.</li><li>KUMC’s working group integrates FACTS software in collaborative simulation and operational trial planning.</li><li>The PAIN-CONTRoLS Trial: multi-arm Bayesian adaptive design, response-adaptive randomization, real-time analysis, and endpoint-driven allocation.</li><li>HOBIT trial: adaptive allocation, fixed control arm proportion, group-sequential interims, ordinal endpoint modeling.</li><li>Emphasis on probabilistic, quantitative reporting over binary outcomes in trial analysis and interpretation.</li><li>Cultural shift observed among academic collaborators and trainees embracing Bayesian adaptive strategies.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry connects with Dr. Byron Gajewski, professor of biostatistics and data science at the University of Kansas Medical Center (KUMC), for a detailed discussion on the design, simulation, and operational realities of Bayesian adaptive clinical trials in academic environments. Gajewski discusses his academic background, his training at Texas A&amp;M, and his progressive adoption of Bayesian principles, grounded in their practical advantages in complex data settings. The conversation highlights KUMC’s Fixed and Adaptive Clinical Trial Simulator Working Group, which uses FACTS for faculty, staff, and student collaboration, enabling practical simulation, trial protocol development, and in-house applied statistical training. The PAIN-CONTRoLS Trial serves as a practical example of multi-arm Bayesian adaptive design, using response-adaptive randomization for comparative effectiveness in neuropathy research. The NIH-funded HOBIT trial is examined in detail: multi-arm structure, adaptive allocation among investigational arms, fixed control randomization, group-sequential interim analyses, and sliding dichotomy methodology for the Glasgow Outcome Scale Extended.
The discussion stresses a shift to probabilistic, evidence-driven interpretation and reporting, shaping operational choices and academic culture for both investigators and trainees.</p><p><strong>Key Highlights</strong></p><ul><li>Gajewski describes how practical challenges in real-world problems catalyzed his transition to Bayesian modeling.</li><li>KUMC’s working group integrates FACTS software in collaborative simulation and operational trial planning.</li><li>The PAIN-CONTRoLS Trial: multi-arm Bayesian adaptive design, response-adaptive randomization, real-time analysis, and endpoint-driven allocation.</li><li>HOBIT trial: adaptive allocation, fixed control arm proportion, group-sequential interims, ordinal endpoint modeling.</li><li>Emphasis on probabilistic, quantitative reporting over binary outcomes in trial analysis and interpretation.</li><li>Cultural shift observed among academic collaborators and trainees embracing Bayesian adaptive strategies.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 06 Apr 2026 06:24:37 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/e0d93ef4/98ef7075.mp3" length="38811291" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2423</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry connects with Dr. Byron Gajewski, professor of biostatistics and data science at the University of Kansas Medical Center (KUMC), for a detailed discussion on the design, simulation, and operational realities of Bayesian adaptive clinical trials in academic environments. Gajewski discusses his academic background, his training at Texas A&amp;M, and his progressive adoption of Bayesian principles, grounded in their practical advantages in complex data settings. The conversation highlights KUMC’s Fixed and Adaptive Clinical Trial Simulator Working Group, which uses FACTS for faculty, staff, and student collaboration, enabling practical simulation, trial protocol development, and in-house applied statistical training. The PAIN-CONTRoLS Trial serves as a practical example of multi-arm Bayesian adaptive design, using response-adaptive randomization for comparative effectiveness in neuropathy research. The NIH-funded HOBIT trial is examined in detail: multi-arm structure, adaptive allocation among investigational arms, fixed control randomization, group-sequential interim analyses, and sliding dichotomy methodology for the Glasgow Outcome Scale Extended.
The discussion stresses a shift to probabilistic, evidence-driven interpretation and reporting, shaping operational choices and academic culture for both investigators and trainees.</p><p><strong>Key Highlights</strong></p><ul><li>Gajewski describes how practical challenges in real-world problems catalyzed his transition to Bayesian modeling.</li><li>KUMC’s working group integrates FACTS software in collaborative simulation and operational trial planning.</li><li>The PAIN-CONTRoLS Trial: multi-arm Bayesian adaptive design, response-adaptive randomization, real-time analysis, and endpoint-driven allocation.</li><li>HOBIT trial: adaptive allocation, fixed control arm proportion, group-sequential interims, ordinal endpoint modeling.</li><li>Emphasis on probabilistic, quantitative reporting over binary outcomes in trial analysis and interpretation.</li><li>Cultural shift observed among academic collaborators and trainees embracing Bayesian adaptive strategies.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e0d93ef4/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e0d93ef4/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Stephen Senn: Time, Concurrent Controls, and the Bayesian Guidance</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>A Visit with Stephen Senn: Time, Concurrent Controls, and the Bayesian Guidance</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">59a3f8e7-5c64-4ceb-8134-3143aab52085</guid>
      <link>https://share.transistor.fm/s/99fe3d92</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry hosts Dr. Stephen Senn, award-winning statistician and author, for a discussion on advanced challenges in adaptive and platform trial methodology. Senn draws on experience in academic, pharmaceutical, and regulatory settings to address the FDA’s recent draft guidance on Bayesian statistics and multiple controversies in clinical trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Emphasizes understanding data origin and regression to the mean as essential for trial interpretation, above adherence to Bayesian or frequentist frameworks.</li><li>Details methodological considerations for time adjustments and model complexity, highlighting that model specification and parameter handling are critical regardless of statistical school.</li><li>Identifies the limitations of non-concurrent controls in platform trials, focusing on evolving background therapy, site participation, and protocol changes that reduce the validity of historical or pooled control data.</li><li>Analyzes blinding difficulties in trials with multiple treatments and administration modes, using “veiled” blinding as a case study and noting the implications for placebo response comparability.</li><li>Clarifies that operational efficiencies are the principal advantage of adaptive and platform trials, while purported statistical efficiencies can be exaggerated.</li><li>Stresses the importance of presenting interim analyses transparently to DSMBs when using complex models for time or covariate adjustment, to ensure oversight and interpretation remain rigorous.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry hosts Dr. Stephen Senn, award-winning statistician and author, for a discussion on advanced challenges in adaptive and platform trial methodology. Senn draws on experience in academic, pharmaceutical, and regulatory settings to address the FDA’s recent draft guidance on Bayesian statistics and multiple controversies in clinical trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Emphasizes understanding data origin and regression to the mean as essential for trial interpretation, above adherence to Bayesian or frequentist frameworks.</li><li>Details methodological considerations for time adjustments and model complexity, highlighting that model specification and parameter handling are critical regardless of statistical school.</li><li>Identifies the limitations of non-concurrent controls in platform trials, focusing on evolving background therapy, site participation, and protocol changes that reduce the validity of historical or pooled control data.</li><li>Analyzes blinding difficulties in trials with multiple treatments and administration modes, using “veiled” blinding as a case study and noting the implications for placebo response comparability.</li><li>Clarifies that operational efficiencies are the principal advantage of adaptive and platform trials, while purported statistical efficiencies can be exaggerated.</li><li>Stresses the importance of presenting interim analyses transparently to DSMBs when using complex models for time or covariate adjustment, to ensure oversight and interpretation remain rigorous.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 30 Mar 2026 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/99fe3d92/fd82a5ca.mp3" length="45538846" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2844</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry hosts Dr. Stephen Senn, award-winning statistician and author, for a discussion on advanced challenges in adaptive and platform trial methodology. Senn draws on experience in academic, pharmaceutical, and regulatory settings to address the FDA’s recent draft guidance on Bayesian statistics and multiple controversies in clinical trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Emphasizes understanding data origin and regression to the mean as essential for trial interpretation, above adherence to Bayesian or frequentist frameworks.</li><li>Details methodological considerations for time adjustments and model complexity, highlighting that model specification and parameter handling are critical regardless of statistical school.</li><li>Identifies the limitations of non-concurrent controls in platform trials, focusing on evolving background therapy, site participation, and protocol changes that reduce the validity of historical or pooled control data.</li><li>Analyzes blinding difficulties in trials with multiple treatments and administration modes, using “veiled” blinding as a case study and noting the implications for placebo response comparability.</li><li>Clarifies that operational efficiencies are the principal advantage of adaptive and platform trials, while purported statistical efficiencies can be exaggerated.</li><li>Stresses the importance of presenting interim analyses transparently to DSMBs when using complex models for time or covariate adjustment, to ensure oversight and interpretation remain rigorous.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/99fe3d92/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/99fe3d92/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Making Sense of Hierarchical Composites</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Making Sense of Hierarchical Composites</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ee271764-d987-412d-949e-b100493eec61</guid>
      <link>https://share.transistor.fm/s/dcbcb685</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by statisticians Dr. Amy Crawford, Dr. Cora Allen-Savietta, and Dr. Jessica Overbey for a technical deep dive into hierarchical composite endpoints and the win ratio in clinical trial design. The group addresses clinical and statistical justifications for layered endpoint structures, demonstrates the mechanics of pairwise win ratio analysis, and explores operational and interpretive consequences in both conventional and adaptive trials. The panel scrutinizes analytic limitations, regulatory concerns, and emerging modeling strategies, all grounded in real-world trial examples.</p><p><strong>Key Highlights</strong></p><ul><li>Precise definition and use cases for hierarchical composite endpoints in cardiovascular and related trials.</li><li>Stepwise breakdown of win ratio mechanics, tie-handling, and the distinction between effect estimation (win ratio) and hypothesis testing (FS-test).</li><li>Discussion of endpoint prevalence and dominance, the risk of clinical interpretation being tied to lower-order outcomes, the role of patient exposure, and methods to parse component contributions.</li><li>Overview of statistical power, the role of simulation, and comparative advantages over other composite approaches.</li><li>Identification of core limitations: interpretive complexity, opaque weighting, and the mutable meaning of wins as data mature.</li><li>Review of predictive probability for adaptive interim analysis and modeling using ordinal regression.</li><li>Overview of US and European regulatory perspectives, including support, reservations, and expectations for transparency through graphics and complementary analyses.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by statisticians Dr. Amy Crawford, Dr. Cora Allen-Savietta, and Dr. Jessica Overbey for a technical deep dive into hierarchical composite endpoints and the win ratio in clinical trial design. The group addresses clinical and statistical justifications for layered endpoint structures, demonstrates the mechanics of pairwise win ratio analysis, and explores operational and interpretive consequences in both conventional and adaptive trials. The panel scrutinizes analytic limitations, regulatory concerns, and emerging modeling strategies, all grounded in real-world trial examples.</p><p><strong>Key Highlights</strong></p><ul><li>Precise definition and use cases for hierarchical composite endpoints in cardiovascular and related trials.</li><li>Stepwise breakdown of win ratio mechanics, tie-handling, and the distinction between effect estimation (win ratio) and hypothesis testing (FS-test).</li><li>Discussion of endpoint prevalence and dominance, the risk of clinical interpretation being tied to lower-order outcomes, the role of patient exposure, and methods to parse component contributions.</li><li>Overview of statistical power, the role of simulation, and comparative advantages over other composite approaches.</li><li>Identification of core limitations: interpretive complexity, opaque weighting, and the mutable meaning of wins as data mature.</li><li>Review of predictive probability for adaptive interim analysis and modeling using ordinal regression.</li><li>Overview of US and European regulatory perspectives, including support, reservations, and expectations for transparency through graphics and complementary analyses.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 23 Mar 2026 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/dcbcb685/a11a5a5f.mp3" length="51435711" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3212</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by statisticians Dr. Amy Crawford, Dr. Cora Allen-Savietta, and Dr. Jessica Overbey for a technical deep dive into hierarchical composite endpoints and the win ratio in clinical trial design. The group addresses clinical and statistical justifications for layered endpoint structures, demonstrates the mechanics of pairwise win ratio analysis, and explores operational and interpretive consequences in both conventional and adaptive trials. The panel scrutinizes analytic limitations, regulatory concerns, and emerging modeling strategies, all grounded in real-world trial examples.</p><p><strong>Key Highlights</strong></p><ul><li>Precise definition and use cases for hierarchical composite endpoints in cardiovascular and related trials.</li><li>Stepwise breakdown of win ratio mechanics, tie-handling, and the distinction between effect estimation (win ratio) and hypothesis testing (FS-test).</li><li>Discussion of endpoint prevalence and dominance, the risk of clinical interpretation being tied to lower-order outcomes, the role of patient exposure, and methods to parse component contributions.</li><li>Overview of statistical power, the role of simulation, and comparative advantages over other composite approaches.</li><li>Identification of core limitations: interpretive complexity, opaque weighting, and the mutable meaning of wins as data mature.</li><li>Review of predictive probability for adaptive interim analysis and modeling using ordinal regression.</li><li>Overview of US and European regulatory perspectives, including support, reservations, and expectations for transparency through graphics and complementary analyses.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/dcbcb685/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/dcbcb685/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The SNAP Trial with Professors Tong and Davis</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>The SNAP Trial with Professors Tong and Davis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0e2412b4-fc3d-40d1-b920-9bbb19e71ed1</guid>
      <link>https://share.transistor.fm/s/3fb026e2</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Professors Steven Tong and Josh Davis about the SNAP platform trial for Staphylococcus aureus bacteremia. The discussion covers SNAP’s rationale, large-scale adaptive design, methodology, and operational execution at approximately 150 hospitals in 13 countries. Key statistical questions, domain results, pediatric-adult analysis, and global implementation strategy are explored in depth. Listeners will find clear examples of how adaptive platform trials can efficiently address clinically relevant questions in infectious disease, with attention to the nuances of trial design, statistical thresholds, and network collaboration.</p><p><strong>Key Highlights</strong></p><ul><li>High and unchanging mortality for Staphylococcus aureus bacteremia, with over one million deaths annually.</li><li>SNAP leverages a silo-based structure (MSSA, MRSA, PSSA) and factorial domains for simultaneous, efficient investigation of treatments.</li><li>Cefazolin shown non-inferior to flucloxacillin for MSSA, with a lower rate of associated acute kidney injury.</li><li>In PSSA, penicillin demonstrated significantly less toxicity and a favorable mortality signal relative to flucloxacillin; the mortality difference did not meet the statistical superiority threshold.</li><li>Futility reached in the adjunctive clindamycin domain for its effect on 90-day mortality.</li><li>Both adults and children enrolled, with pediatric results using statistical borrowing from adults in line with FDA Bayesian guidance.</li><li>Ongoing platform expansion includes bacteriophage therapy, antiplatelet domains, and evaluation of diagnostic strategies.</li><li>Statistical leadership: Dr. Anna McGlothlin (Berry Consultants), Dr. Julie Marsh (statistics lead).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Professors Steven Tong and Josh Davis about the SNAP platform trial for Staphylococcus aureus bacteremia. The discussion covers SNAP’s rationale, large-scale adaptive design, methodology, and operational execution at approximately 150 hospitals in 13 countries. Key statistical questions, domain results, pediatric-adult analysis, and global implementation strategy are explored in depth. Listeners will find clear examples of how adaptive platform trials can efficiently address clinically relevant questions in infectious disease, with attention to the nuances of trial design, statistical thresholds, and network collaboration.</p><p><strong>Key Highlights</strong></p><ul><li>High and unchanging mortality for Staphylococcus aureus bacteremia, with over one million deaths annually.</li><li>SNAP leverages a silo-based structure (MSSA, MRSA, PSSA) and factorial domains for simultaneous, efficient investigation of treatments.</li><li>Cefazolin shown non-inferior to flucloxacillin for MSSA, with a lower rate of associated acute kidney injury.</li><li>In PSSA, penicillin demonstrated significantly less toxicity and a favorable mortality signal relative to flucloxacillin; the mortality difference did not meet the statistical superiority threshold.</li><li>Futility reached in the adjunctive clindamycin domain for its effect on 90-day mortality.</li><li>Both adults and children enrolled, with pediatric results using statistical borrowing from adults in line with FDA Bayesian guidance.</li><li>Ongoing platform expansion includes bacteriophage therapy, antiplatelet domains, and evaluation of diagnostic strategies.</li><li>Statistical leadership: Dr. Anna McGlothlin (Berry Consultants), Dr. Julie Marsh (statistics lead).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 16 Mar 2026 06:30:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/3fb026e2/2ac648d1.mp3" length="51772592" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3233</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Professors Steven Tong and Josh Davis about the SNAP platform trial for Staphylococcus aureus bacteremia. The discussion covers SNAP’s rationale, large-scale adaptive design, methodology, and operational execution at approximately 150 hospitals in 13 countries. Key statistical questions, domain results, pediatric-adult analysis, and global implementation strategy are explored in depth. Listeners will find clear examples of how adaptive platform trials can efficiently address clinically relevant questions in infectious disease, with attention to the nuances of trial design, statistical thresholds, and network collaboration.</p><p><strong>Key Highlights</strong></p><ul><li>High and unchanging mortality for Staphylococcus aureus bacteremia, with over one million deaths annually.</li><li>SNAP leverages a silo-based structure (MSSA, MRSA, PSSA) and factorial domains for simultaneous, efficient investigation of treatments.</li><li>Cefazolin shown non-inferior to flucloxacillin for MSSA, with a lower rate of associated acute kidney injury.</li><li>In PSSA, penicillin demonstrated significantly less toxicity and a favorable mortality signal relative to flucloxacillin; the mortality difference did not meet the statistical superiority threshold.</li><li>Futility reached in the adjunctive clindamycin domain for its effect on 90-day mortality.</li><li>Both adults and children enrolled, with pediatric results using statistical borrowing from adults in line with FDA Bayesian guidance.</li><li>Ongoing platform expansion includes bacteriophage therapy, antiplatelet domains, and evaluation of diagnostic strategies.</li><li>Statistical leadership: Dr. Anna McGlothlin (Berry Consultants), Dr. Julie Marsh (statistics lead).</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3fb026e2/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3fb026e2/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Bayesian Borrowing in Phase 3 Trials</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>Bayesian Borrowing in Phase 3 Trials</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7cf057ba-e908-42f7-886d-665c9ab2553b</guid>
      <link>https://share.transistor.fm/s/d36c226a</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele examine Bayesian borrowing in Phase 3 clinical trials, focusing on the statistical handling of prior information and real-world FDA interactions. The episode opens with an analogy comparing prior probability in Bayesian analysis to interpreting a home pregnancy test, succinctly demonstrating the effect of prior knowledge on trial interpretation. The discussion addresses technical challenges: how borrowing can inflate the Type I error rate, and why this is treated differently when evaluating Bayesian operating characteristics. Concrete examples include dynamic versus static borrowing approaches and the formal integration of prior evidence in regulatory submissions. Case studies center on the WATCHMAN device (PROTECT AF and PREVAIL trials) and REBYOTA, illustrating FDA engagement, relevant trial design tactics, and published outcomes. The episode also critiques common pitfalls such as selective data use and improper prior construction, emphasizing the FDA’s focus on comprehensive and unbiased historical sources.</p><p><strong>Key Highlights</strong></p><ul><li>Pregnancy test analogy used to clarify prior probability in trial interpretation.</li><li>Bayesian borrowing’s effects on Type I error and statistical thresholds.</li><li>Case studies: WATCHMAN device (PROTECT AF, PREVAIL) and REBYOTA approvals.</li><li>Dynamic versus static borrowing strategies in regulatory settings.</li><li>Risks of cherry-picking and the importance of unbiased, relevant prior data.</li><li>FDA guidance and review procedures for Bayesian trials.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele examine Bayesian borrowing in Phase 3 clinical trials, focusing on the statistical handling of prior information and real-world FDA interactions. The episode opens with an analogy comparing prior probability in Bayesian analysis to interpreting a home pregnancy test, succinctly demonstrating the effect of prior knowledge on trial interpretation. The discussion addresses technical challenges: how borrowing can inflate the Type I error rate, and why this is treated differently when evaluating Bayesian operating characteristics. Concrete examples include dynamic versus static borrowing approaches and the formal integration of prior evidence in regulatory submissions. Case studies center on the WATCHMAN device (PROTECT AF and PREVAIL trials) and REBYOTA, illustrating FDA engagement, relevant trial design tactics, and published outcomes. The episode also critiques common pitfalls such as selective data use and improper prior construction, emphasizing the FDA’s focus on comprehensive and unbiased historical sources.</p><p><strong>Key Highlights</strong></p><ul><li>Pregnancy test analogy used to clarify prior probability in trial interpretation.</li><li>Bayesian borrowing’s effects on Type I error and statistical thresholds.</li><li>Case studies: WATCHMAN device (PROTECT AF, PREVAIL) and REBYOTA approvals.</li><li>Dynamic versus static borrowing strategies in regulatory settings.</li><li>Risks of cherry-picking and the importance of unbiased, relevant prior data.</li><li>FDA guidance and review procedures for Bayesian trials.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 09 Mar 2026 05:15:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/d36c226a/dd83580d.mp3" length="44798098" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2798</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele examine Bayesian borrowing in Phase 3 clinical trials, focusing on statistical handling of prior information and real-world FDA interactions. The episode opens with an analogy, comparing prior probability in Bayesian analysis to interpreting a home pregnancy test, succinctly demonstrating the effect of prior knowledge on trial interpretation. The discussion covers technical challenges—how borrowing inflates the Type I error rate and why this is handled differently under Bayesian operating characteristics. Concrete examples include dynamic versus static borrowing approaches, and formal integration of prior evidence in regulatory submissions. Case studies center on the WATCHMAN device (PROTECT AF and PREVAIL trials) and REBYOTA, illustrating FDA engagement, relevant trial design tactics, and published outcomes. The episode also critiques common pitfalls such as selective data use and improper prior construction, emphasizing the FDA’s focus on comprehensive and unbiased historical sources.</p><p><strong>Key Highlights</strong></p><ul><li>Pregnancy test analogy used to clarify prior probability in trial interpretation.</li><li>Bayesian borrowing’s effects on Type I error and statistical thresholds.</li><li>Case studies: WATCHMAN device (PROTECT AF, PREVAIL) and REBYOTA approvals.</li><li>Dynamic borrowing versus static borrowing strategies in regulatory settings.</li><li>Risks of cherry-picking and importance of unbiased, relevant prior data.</li><li>FDA guidance and review procedures for Bayesian trials.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d36c226a/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/d36c226a/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Art of Storytelling with Shaun Cassidy</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>The Art of Storytelling with Shaun Cassidy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7d6682f9-766d-4e85-8a64-96463f1cab02</guid>
      <link>https://share.transistor.fm/s/57abc83d</link>
      <description>
        <![CDATA[<p>In Episode 51 of "In the Interim…", Dr. Scott Berry interviews writer, producer, and performer Shaun Cassidy to examine the practical elements of storytelling that matter in scientific and statistical communication. Cassidy draws on his experience in television, music, and live performance—including his role as writer and Executive Producer of New Amsterdam—to present clear parallels between audience engagement in show business and in clinical research. The conversation prioritizes improving narrative precision, emotional resonance, and authenticity when conveying complex topics to varied audiences.</p><p><strong>Key Highlights</strong></p><ul><li>Cassidy demonstrates that audiences retain emotional impact over factual content, asserting that “people don’t remember what you say, but how you made them feel.”</li><li>Emphasis on narrative specificity: personal, concrete details foster stronger audience connection than generalized statements, countering assumptions about broad relatability.</li><li>Effective communication relies on reactive delivery—improvised response to audience cues—rather than rigid, memorized scripts; Cassidy notes this principle applies across disciplines.</li><li>Role of authenticity and vulnerability: openly stating discomfort or introversion facilitates greater audience trust and personal connection, especially in technical or scientific fields.</li><li>Anecdotes from Cassidy’s work in television, music, and teaching illustrate the central role of storytelling structure and audience feedback, with parallels drawn to professional scientific presentations.</li><li>Alan Alda’s illustration of improv for scientists is discussed as an example of bridging technical expertise with adaptive communication skills.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In Episode 51 of "In the Interim…", Dr. Scott Berry interviews writer, producer, and performer Shaun Cassidy to examine the practical elements of storytelling that matter in scientific and statistical communication. Cassidy draws on his experience in television, music, and live performance—including his role as writer and Executive Producer of New Amsterdam—to present clear parallels between audience engagement in show business and in clinical research. The conversation prioritizes improving narrative precision, emotional resonance, and authenticity when conveying complex topics to varied audiences.</p><p><strong>Key Highlights</strong></p><ul><li>Cassidy demonstrates that audiences retain emotional impact over factual content, asserting that “people don’t remember what you say, but how you made them feel.”</li><li>Emphasis on narrative specificity: personal, concrete details foster stronger audience connection than generalized statements, countering assumptions about broad relatability.</li><li>Effective communication relies on reactive delivery—improvised response to audience cues—rather than rigid, memorized scripts; Cassidy notes this principle applies across disciplines.</li><li>Role of authenticity and vulnerability: openly stating discomfort or introversion facilitates greater audience trust and personal connection, especially in technical or scientific fields.</li><li>Anecdotes from Cassidy’s work in television, music, and teaching illustrate the central role of storytelling structure and audience feedback, with parallels drawn to professional scientific presentations.</li><li>Alan Alda’s illustration of improv for scientists is discussed as an example of bridging technical expertise with adaptive communication skills.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 02 Mar 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/57abc83d/d6360650.mp3" length="50308478" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3142</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In Episode 51 of "In the Interim…", Dr. Scott Berry interviews writer, producer, and performer Shaun Cassidy to examine the practical elements of storytelling that matter in scientific and statistical communication. Cassidy draws on his experience in television, music, and live performance—including his role as writer and Executive Producer of New Amsterdam—to present clear parallels between audience engagement in show business and in clinical research. The conversation prioritizes improving narrative precision, emotional resonance, and authenticity when conveying complex topics to varied audiences.</p><p><strong>Key Highlights</strong></p><ul><li>Cassidy demonstrates that audiences retain emotional impact over factual content, asserting that “people don’t remember what you say, but how you made them feel.”</li><li>Emphasis on narrative specificity: personal, concrete details foster stronger audience connection than generalized statements, countering assumptions about broad relatability.</li><li>Effective communication relies on reactive delivery—improvised response to audience cues—rather than rigid, memorized scripts; Cassidy notes this principle applies across disciplines.</li><li>Role of authenticity and vulnerability: openly stating discomfort or introversion facilitates greater audience trust and personal connection, especially in technical or scientific fields.</li><li>Anecdotes from Cassidy’s work in television, music, and teaching illustrate the central role of storytelling structure and audience feedback, with parallels drawn to professional scientific presentations.</li><li>Alan Alda’s illustration of improv for scientists is discussed as an example of bridging technical expertise with adaptive communication skills.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/57abc83d/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/57abc83d/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Fallacy of Ordinal Endpoints</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>The Fallacy of Ordinal Endpoints</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3105d9aa-c113-4c57-92ea-e2d4cc4e69f0</guid>
      <link>https://share.transistor.fm/s/ca8d72c1</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Lindsay Berry investigate the statistical foundations and clinical implications of analyzing ordinal endpoints, drawing on experience from major stroke and COVID-19 trials. Discussion centers on the Modified Rankin Scale, DAWN, MR CLEAN, and REMAP-CAP, demonstrating that methods such as proportional odds, dichotomization, and utility weighting all impose explicit or implicit clinical weights on the outcome categories. The episode presents direct mathematical derivations, exposes the equivalence between proportional odds models and value-weighted analysis, and uses real trial data to explore how statistical and clinical perspectives on endpoint weighting may diverge. Emphasis remains on transparency and the need for clinically relevant weight assignment in trial endpoints.</p><p><strong>Key Highlights</strong></p><ul><li>Structural overview and clinical significance of the Modified Rankin Scale scores.</li><li>Illustration that proportional odds models and dichotomized analyses apply hidden, prevalence-driven or threshold-based weights.</li><li>Utility weighting in DAWN, formulated from EQ-5D patient utilities and economic studies, with observed alignment.</li><li>MR CLEAN investigators' critique of utility weighting, with empirical data showing relative consistency and challenging the claim that statistical approaches resolve variation across patients.</li><li>REMAP-CAP platform trial: Organ Support Free Days endpoint analyzed with proportional odds, which imposed weights on the scale from death to freedom from organ support.</li><li>Extension of these arguments to win ratio/rank-based approaches, with caution that all methods encode clinical assumptions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Lindsay Berry investigate the statistical foundations and clinical implications of analyzing ordinal endpoints, drawing on experience from major stroke and COVID-19 trials. Discussion centers on the Modified Rankin Scale, DAWN, MR CLEAN, and REMAP-CAP, demonstrating that methods such as proportional odds, dichotomization, and utility weighting all impose explicit or implicit clinical weights on the outcome categories. The episode presents direct mathematical derivations, exposes the equivalence between proportional odds models and value-weighted analysis, and uses real trial data to explore how statistical and clinical perspectives on endpoint weighting may diverge. Emphasis remains on transparency and the need for clinically relevant weight assignment in trial endpoints.</p><p><strong>Key Highlights</strong></p><ul><li>Structural overview and clinical significance of the Modified Rankin Scale scores.</li><li>Illustration that proportional odds models and dichotomized analyses apply hidden, prevalence-driven or threshold-based weights.</li><li>Utility weighting in DAWN, formulated from EQ-5D patient utilities and economic studies, with observed alignment.</li><li>MR CLEAN investigators' critique of utility weighting, with empirical data showing relative consistency and challenging the claim that statistical approaches resolve variation across patients.</li><li>REMAP-CAP platform trial: Organ Support Free Days endpoint analyzed with proportional odds, which imposed weights on the scale from death to freedom from organ support.</li><li>Extension of these arguments to win ratio/rank-based approaches, with caution that all methods encode clinical assumptions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 23 Feb 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/ca8d72c1/b8582979.mp3" length="42185431" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2634</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Lindsay Berry investigate the statistical foundations and clinical implications of analyzing ordinal endpoints, drawing on experience from major stroke and COVID-19 trials. Discussion centers on the Modified Rankin Scale, DAWN, MR CLEAN, and REMAP-CAP, demonstrating that methods such as proportional odds, dichotomization, and utility weighting all impose explicit or implicit clinical weights on the outcome categories. The episode presents direct mathematical derivations, exposes the equivalence between proportional odds models and value-weighted analysis, and uses real trial data to explore how statistical and clinical perspectives on endpoint weighting may diverge. Emphasis remains on transparency and the need for clinically relevant weight assignment in trial endpoints.</p><p><strong>Key Highlights</strong></p><ul><li>Structural overview and clinical significance of the Modified Rankin Scale scores.</li><li>Illustration that proportional odds models and dichotomized analyses apply hidden, prevalence-driven or threshold-based weights.</li><li>Utility weighting in DAWN, formulated from EQ-5D patient utilities and economic studies, with observed alignment.</li><li>MR CLEAN investigators' critique of utility weighting, with empirical data showing relative consistency and challenging the claim that statistical approaches resolve variation across patients.</li><li>REMAP-CAP platform trial: Organ Support Free Days endpoint analyzed with proportional odds, which imposed weights on the scale from death to freedom from organ support.</li><li>Extension of these arguments to win ratio/rank-based approaches, with caution that all methods encode clinical assumptions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ca8d72c1/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ca8d72c1/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Mr. Berry Goes to Washington</title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>Mr. Berry Goes to Washington</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">52ff7f5c-3662-4b65-965f-a684a99ae3bf</guid>
      <link>https://share.transistor.fm/s/cded2afd</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry marks the podcast’s one-year anniversary, sharing listener metrics, watch data, and regional engagement. He then delivers a step-by-step analysis of the FDA meeting process, detailing the progression from initial sponsor meeting requests and question submission to briefing book preparation, feedback cycles, and in-person logistics for a Type C meeting at the White Oak facility. Drawing from more than 25 years of trial design and regulatory experience, Scott offers precise guidance on technical preparation, sponsor responsibilities, and common errors in sponsor-FDA dialog, emphasizing what works and what wastes time inside the one-hour meeting constraint. His practical approach focuses on clarity, respect for process, and actionable advice.</p><p><strong>Key Highlights</strong></p><ul><li>Slightly over 30,000 people tuned in during the first year across 45 episodes; about 10,000 via audio and 20,000 via video, with global reach.</li><li>FDA meeting workflow: request, submit four to eight questions, draft briefing book, receive written feedback, and a strict one-hour in-person discussion controlled by the sponsor.</li><li>Advice on briefing book content, avoiding new materials at the meeting, and even what not to bring into the White Oak facility.</li><li>Sponsor pitfalls: disingenuous patient advocacy, asking impossible questions, and taking an adversarial stance in statistical discussion.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry marks the podcast’s one-year anniversary, sharing listener metrics, watch data, and regional engagement. He then delivers a step-by-step analysis of the FDA meeting process, detailing the progression from initial sponsor meeting requests and question submission to briefing book preparation, feedback cycles, and in-person logistics for a Type C meeting at the White Oak facility. Drawing from more than 25 years of trial design and regulatory experience, Scott offers precise guidance on technical preparation, sponsor responsibilities, and common errors in sponsor-FDA dialog, emphasizing what works and what wastes time inside the one-hour meeting constraint. His practical approach focuses on clarity, respect for process, and actionable advice.</p><p><strong>Key Highlights</strong></p><ul><li>Slightly over 30,000 people tuned in during the first year across 45 episodes; about 10,000 via audio and 20,000 via video, with global reach.</li><li>FDA meeting workflow: request, submit four to eight questions, draft briefing book, receive written feedback, and a strict one-hour in-person discussion controlled by the sponsor.</li><li>Advice on briefing book content, avoiding new materials at the meeting, and even what not to bring into the White Oak facility.</li><li>Sponsor pitfalls: disingenuous patient advocacy, asking impossible questions, and taking an adversarial stance in statistical discussion.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 16 Feb 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/cded2afd/72015a79.mp3" length="45384487" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2834</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry marks the podcast’s one-year anniversary, sharing listener metrics, watch data, and regional engagement. He then delivers a step-by-step analysis of the FDA meeting process, detailing the progression from initial sponsor meeting requests and question submission to briefing book preparation, feedback cycles, and in-person logistics for a Type C meeting at the White Oak facility. Drawing from more than 25 years of trial design and regulatory experience, Scott offers precise guidance on technical preparation, sponsor responsibilities, and common errors in sponsor-FDA dialog, emphasizing what works and what wastes time inside the one-hour meeting constraint. His practical approach focuses on clarity, respect for process, and actionable advice.</p><p><strong>Key Highlights</strong></p><ul><li>Slightly over 30,000 people tuned in during the first year across 45 episodes; about 10,000 via audio and 20,000 via video, with global reach.</li><li>FDA meeting workflow: request, submit four to eight questions, draft briefing book, receive written feedback, and a strict one-hour in-person discussion controlled by the sponsor.</li><li>Advice on briefing book content, avoiding new materials at the meeting, and even what not to bring into the White Oak facility.</li><li>Sponsor pitfalls: disingenuous patient advocacy, asking impossible questions, and taking an adversarial stance in statistical discussion.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/cded2afd/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cded2afd/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Platform Trial in Orthopaedic Surgery</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Platform Trial in Orthopaedic Surgery</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bacc0b04-dc58-4541-b0bb-87a4847c375e</guid>
      <link>https://share.transistor.fm/s/bc017223</link>
      <description>
        <![CDATA[<p>Dr. Nathan O’Hara (University of Maryland), Dr. Gerard Slobogean (UC Irvine), and Dr. Sheila Sprague (McMaster University) describe the launch and design of the Musculoskeletal Adaptive Platform Trial (MAPT)—the first major adaptive platform trial in orthopaedic surgery. The discussion covers MAPT’s master protocol structure, patient-centered endpoint framework, and operational strategies for multinational implementation. Focus areas include the FASTER-HIP domain’s use of Bayesian modeling with a hierarchical clinical endpoint and the standards established for adaptation, data coordination, and future scalability. Listeners gain insight into a trial infrastructure designed to lower barriers and facilitate ongoing evidence generation in musculoskeletal trauma care.</p><p><strong>Key Highlights</strong></p><ul><li>MAPT as a scalable, master protocol for orthopaedic intervention evaluation</li><li>Hierarchical, patient-centered endpoint (survival, 4-level ambulation, days alive/out of hospital), analyzed with a Bayesian-modeled, non-parametric win ratio</li><li>Domain-specific adaptation thresholds based on clinical differentiation</li><li>Interim analyses after 100 patients, then every 50, informing early adaptation</li><li>40 sites across the US, Canada, and Europe, with centralized data management at McMaster</li><li>A unified DSMB structure with capacity for domain-specific expertise as needed</li><li>Tiered protocol access: open sharing, collaboration, direct integration</li><li>Infrastructure enables rapid domain addition and multi-investigator participation</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Nathan O’Hara (University of Maryland), Dr. Gerard Slobogean (UC Irvine), and Dr. Sheila Sprague (McMaster University) describe the launch and design of the Musculoskeletal Adaptive Platform Trial (MAPT)—the first major adaptive platform trial in orthopaedic surgery. The discussion covers MAPT’s master protocol structure, patient-centered endpoint framework, and operational strategies for multinational implementation. Focus areas include the FASTER-HIP domain’s use of Bayesian modeling with a hierarchical clinical endpoint and the standards established for adaptation, data coordination, and future scalability. Listeners gain insight into a trial infrastructure designed to lower barriers and facilitate ongoing evidence generation in musculoskeletal trauma care.</p><p><strong>Key Highlights</strong></p><ul><li>MAPT as a scalable, master protocol for orthopaedic intervention evaluation</li><li>Hierarchical, patient-centered endpoint (survival, 4-level ambulation, days alive/out of hospital), analyzed with a Bayesian-modeled, non-parametric win ratio</li><li>Domain-specific adaptation thresholds based on clinical differentiation</li><li>Interim analyses after 100 patients, then every 50, informing early adaptation</li><li>40 sites across the US, Canada, and Europe, with centralized data management at McMaster</li><li>A unified DSMB structure with capacity for domain-specific expertise as needed</li><li>Tiered protocol access: open sharing, collaboration, direct integration</li><li>Infrastructure enables rapid domain addition and multi-investigator participation</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 09 Feb 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/bc017223/55465374.mp3" length="39335791" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2456</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Dr. Nathan O’Hara (University of Maryland), Dr. Gerard Slobogean (UC Irvine), and Dr. Sheila Sprague (McMaster University) describe the launch and design of the Musculoskeletal Adaptive Platform Trial (MAPT)—the first major adaptive platform trial in orthopaedic surgery. The discussion covers MAPT’s master protocol structure, patient-centered endpoint framework, and operational strategies for multinational implementation. Focus areas include the FASTER-HIP domain’s use of Bayesian modeling with a hierarchical clinical endpoint and the standards established for adaptation, data coordination, and future scalability. Listeners gain insight into a trial infrastructure designed to lower barriers and facilitate ongoing evidence generation in musculoskeletal trauma care.</p><p><strong>Key Highlights</strong></p><ul><li>MAPT as a scalable, master protocol for orthopaedic intervention evaluation</li><li>Hierarchical, patient-centered endpoint (survival, 4-level ambulation, days alive/out of hospital), analyzed with a Bayesian-modeled, non-parametric win ratio</li><li>Domain-specific adaptation thresholds based on clinical differentiation</li><li>Interim analyses after 100 patients, then every 50, informing early adaptation</li><li>40 sites across the US, Canada, and Europe, with centralized data management at McMaster</li><li>A unified DSMB structure with capacity for domain-specific expertise as needed</li><li>Tiered protocol access: open sharing, collaboration, direct integration</li><li>Infrastructure enables rapid domain addition and multi-investigator participation</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/bc017223/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/bc017223/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Michael Harhay</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>A Visit with Michael Harhay</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">469d24dc-fb82-4968-9abd-c2dfa9f59698</guid>
      <link>https://share.transistor.fm/s/6489f299</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry speaks with Dr. Michael Harhay, Associate Professor at the University of Pennsylvania and Director of the Center for Clinical Trials Innovation. The conversation explores Dr. Harhay’s progression through neuroscience, philosophy, epidemiology, and statistics, examining how this academic path shapes his work in clinical trial methodology. They discuss the Center’s role in addressing unresolved methodological questions arising from pragmatic, health system-based trials, including challenges with cluster and factorial randomized designs. The episode focuses on statistical and conceptual issues in endpoint selection for critical care, such as the analysis of informatively truncated outcomes, composite endpoints including organ support-free days, and the application of the win ratio. The increasing use of Bayesian methods in trial design is addressed.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Harhay’s academic background and transition into clinical trial methodology at Penn.</li><li>The mission of the Center for Clinical Trials Innovation to support methodologic research and training, particularly among statisticians participating in multi-center health system trials.</li><li>Discussion of hospital-level and provider-level randomization strategies in cluster and factorial designs within health systems.</li><li>Ongoing challenges in analysis of composite and informatively truncated endpoints, especially in critical care, exemplified by ventilator-free and organ support-free days.</li><li>Evaluation of analytic strategies including survival average causal effect, composite endpoints, and the win ratio, with emphasis on the need for clinical rather than purely statistical weighting of outcomes.</li><li>Consideration of the conceptual strengths of Bayesian methods and their integration into modern trial design and decision analysis.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry speaks with Dr. Michael Harhay, Associate Professor at the University of Pennsylvania and Director of the Center for Clinical Trials Innovation. The conversation explores Dr. Harhay’s progression through neuroscience, philosophy, epidemiology, and statistics, examining how this academic path shapes his work in clinical trial methodology. They discuss the Center’s role in addressing unresolved methodological questions arising from pragmatic, health system-based trials, including challenges with cluster and factorial randomized designs. The episode focuses on statistical and conceptual issues in endpoint selection for critical care, such as the analysis of informatively truncated outcomes, composite endpoints including organ support-free days, and the application of the win ratio. They also address the increasing use of Bayesian methods in trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Harhay’s academic background and transition into clinical trial methodology at Penn.</li><li>The mission of the Center for Clinical Trials Innovation to support methodologic research and training, particularly among statisticians participating in multi-center health system trials.</li><li>Discussion of hospital-level and provider-level randomization strategies in cluster and factorial designs within health systems.</li><li>Ongoing challenges in the analysis of composite and informatively truncated endpoints, especially in critical care, exemplified by ventilator-free and organ support-free days.</li><li>Evaluation of analytic strategies including survival average causal effect, composite endpoints, and the win ratio, with emphasis on the need for clinical rather than purely statistical weighting of outcomes.</li><li>Consideration of the conceptual strengths of Bayesian methods and their integration into modern trial design and decision analysis.</li></ul><p>For more, visit us at <a
href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 02 Feb 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/6489f299/130c901b.mp3" length="37605429" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2348</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry speaks with Dr. Michael Harhay, Associate Professor at the University of Pennsylvania and Director of the Center for Clinical Trials Innovation. The conversation explores Dr. Harhay’s progression through neuroscience, philosophy, epidemiology, and statistics, examining how this academic path shapes his work in clinical trial methodology. They discuss the Center’s role in addressing unresolved methodological questions arising from pragmatic, health system-based trials, including challenges with cluster and factorial randomized designs. The episode focuses on statistical and conceptual issues in endpoint selection for critical care, such as the analysis of informatively truncated outcomes, composite endpoints including organ support-free days, and the application of the win ratio. They also address the increasing use of Bayesian methods in trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Harhay’s academic background and transition into clinical trial methodology at Penn.</li><li>The mission of the Center for Clinical Trials Innovation to support methodologic research and training, particularly among statisticians participating in multi-center health system trials.</li><li>Discussion of hospital-level and provider-level randomization strategies in cluster and factorial designs within health systems.</li><li>Ongoing challenges in the analysis of composite and informatively truncated endpoints, especially in critical care, exemplified by ventilator-free and organ support-free days.</li><li>Evaluation of analytic strategies including survival average causal effect, composite endpoints, and the win ratio, with emphasis on the need for clinical rather than purely statistical weighting of outcomes.</li><li>Consideration of the conceptual strengths of Bayesian methods and their integration into modern trial design and decision analysis.</li></ul><p>For more, visit us at <a
href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6489f299/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6489f299/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The FDA Bayesian Guidance</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>The FDA Bayesian Guidance</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">87079f19-c489-4ef8-8bdb-503ff52ba826</guid>
      <link>https://share.transistor.fm/s/d4747470</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele deliver a quick reaction to the FDA’s draft guidance on Bayesian statistics for clinical trials of drugs and biologics. Their assessment addresses the structure, content, and impact of the document, emphasizing evidence-based requirements and guidance scope. The episode breaks down regulatory language, technical expectations, and workflow implications for clinical trial sponsors and statisticians.</p><p><strong>Key Highlights</strong></p><ul><li>Clear distinction between trials justified by Type I error control and trials justified by agreement on Bayesian priors and decision rules.</li><li>Explanation of how informative priors can be created based on external or historical data.</li><li>Technical explanation of dynamic discounting/borrowing, especially in Bayesian hierarchical models for rare populations, pediatric-adult extrapolation, related disease subgroups, and platform and basket trials (e.g., ROAR).</li><li>In-depth look at the necessity of sensitivity and robustness checks for different priors, and the FDA’s design prior and analysis prior terminology.</li><li>FDA’s requirements for accepting external data sources: data provenance, patient-level comparability, recency, and appropriate covariate adjustments.</li><li>Comparison with ICH E20 on adaptive designs, providing context for ongoing regulatory harmonization and possible influence on international regulatory directions.</li><li>Direct warning against attempts to misuse Bayesian methodology as a substitute for scientific rigor; legitimate uses must meet FDA standards and not simply serve to lower evidentiary bars.</li></ul><p><strong>Resource:</strong> FDA News Release: <a
href="https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials">https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials</a></p><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele deliver a quick reaction to the FDA’s draft guidance on Bayesian statistics for clinical trials of drugs and biologics. Their assessment addresses the structure, content, and impact of the document, emphasizing evidence-based requirements and guidance scope. The episode breaks down regulatory language, technical expectations, and workflow implications for clinical trial sponsors and statisticians.</p><p><strong>Key Highlights</strong></p><ul><li>Clear distinction between trials justified by Type I error control and trials justified by agreement on Bayesian priors and decision rules.</li><li>Explanation of how informative priors can be created based on external or historical data.</li><li>Technical explanation of dynamic discounting/borrowing, especially in Bayesian hierarchical models for rare populations, pediatric-adult extrapolation, related disease subgroups, and platform and basket trials (e.g., ROAR).</li><li>In-depth look at the necessity of sensitivity and robustness checks for different priors, and the FDA’s design prior and analysis prior terminology.</li><li>FDA’s requirements for accepting external data sources: data provenance, patient-level comparability, recency, and appropriate covariate adjustments.</li><li>Comparison with ICH E20 on adaptive designs, providing context for ongoing regulatory harmonization and possible influence on international regulatory directions.</li><li>Direct warning against attempts to misuse Bayesian methodology as a substitute for scientific rigor; legitimate uses must meet FDA standards and not simply serve to lower evidentiary bars.</li></ul><p><strong>Resource:</strong> FDA News Release: <a
href="https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials">https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials</a></p><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 26 Jan 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/d4747470/f48e6bc2.mp3" length="41634136" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2600</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele deliver a quick reaction to the FDA’s draft guidance on Bayesian statistics for clinical trials of drugs and biologics. Their assessment addresses the structure, content, and impact of the document, emphasizing evidence-based requirements and guidance scope. The episode breaks down regulatory language, technical expectations, and workflow implications for clinical trial sponsors and statisticians.</p><p><strong>Key Highlights</strong></p><ul><li>Clear distinction between trials justified by Type I error control and trials justified by agreement on Bayesian priors and decision rules.</li><li>Explanation of how informative priors can be created based on external or historical data.</li><li>Technical explanation of dynamic discounting/borrowing, especially in Bayesian hierarchical models for rare populations, pediatric-adult extrapolation, related disease subgroups, and platform and basket trials (e.g., ROAR).</li><li>In-depth look at the necessity of sensitivity and robustness checks for different priors, and the FDA’s design prior and analysis prior terminology.</li><li>FDA’s requirements for accepting external data sources: data provenance, patient-level comparability, recency, and appropriate covariate adjustments.</li><li>Comparison with ICH E20 on adaptive designs, providing context for ongoing regulatory harmonization and possible influence on international regulatory directions.</li><li>Direct warning against attempts to misuse Bayesian methodology as a substitute for scientific rigor; legitimate uses must meet FDA standards and not simply serve to lower evidentiary bars.</li></ul><p><strong>Resource:</strong> FDA News Release: <a
href="https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials">https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials</a></p><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d4747470/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/d4747470/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Path 2 Parkinson's Prevention with Drs. Simuni and Wendelberger</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Path 2 Parkinson's Prevention with Drs. Simuni and Wendelberger</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">069f3601-f84c-4d6d-a3aa-a28e80641ec3</guid>
      <link>https://share.transistor.fm/s/ba4249c3</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by Dr. Tanya Simuni, Arthur C. Nielsen Jr. Professor of Neurology and Director of the Parkinson’s Disease and Movement Disorders Center at Northwestern University, and Dr. Barbara Wendelberger, Senior Statistical Scientist at Berry Consultants. The conversation focuses on the Path to Prevention (P2P) platform trial—an international, multi-arm prevention study in Parkinson’s disease targeting participants defined by biological markers, specifically alpha-synuclein pathology, prior to clinical diagnosis. The discussion covers the PPMI cohort, trial operational and statistical structure, the rationale behind biomarker-driven inclusion, and the use of Bayesian platform trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Parkinson’s disease pathobiology and risk: genotype-phenotype variability, multi-system involvement, and the central roles of age, environment, and genetics.</li><li>Michael J. Fox Foundation’s PPMI cohort: 4,000+ participants, prospective longitudinal biomarker and clinical data, high participant retention, enabling study of early Parkinson’s.</li><li>P2P platform structure: multi-arm design, two-stage randomization with shared placebo group, integration of non-randomized PPMI cohort in Bayesian analysis for improved inference.</li><li>Inclusion criteria: prodromal population biologically defined by CSF alpha-synuclein seed amplification and dopaminergic imaging (DAT-SPECT), highlighting regulatory nuances.</li><li>Dual primary endpoints: biomarker (DAT-SPECT) and clinical (MDS-UPDRS Part III), with 24–36 months of follow-up.</li><li>Commitment to public data sharing in line with the Michael J. Fox Foundation’s open science philosophy.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by Dr. Tanya Simuni, Arthur C. Nielsen Jr. Professor of Neurology and Director of the Parkinson’s Disease and Movement Disorders Center at Northwestern University, and Dr. Barbara Wendelberger, Senior Statistical Scientist at Berry Consultants. The conversation focuses on the Path to Prevention (P2P) platform trial—an international, multi-arm prevention study in Parkinson’s disease targeting participants defined by biological markers, specifically alpha-synuclein pathology, prior to clinical diagnosis. The discussion covers the PPMI cohort, trial operational and statistical structure, the rationale behind biomarker-driven inclusion, and the use of Bayesian platform trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Parkinson’s disease pathobiology and risk: genotype-phenotype variability, multi-system involvement, and the central roles of age, environment, and genetics.</li><li>Michael J. Fox Foundation’s PPMI cohort: 4,000+ participants, prospective longitudinal biomarker and clinical data, high participant retention, enabling study of early Parkinson’s.</li><li>P2P platform structure: multi-arm design, two-stage randomization with shared placebo group, integration of non-randomized PPMI cohort in Bayesian analysis for improved inference.</li><li>Inclusion criteria: prodromal population biologically defined by CSF alpha-synuclein seed amplification and dopaminergic imaging (DAT-SPECT), highlighting regulatory nuances.</li><li>Dual primary endpoints: biomarker (DAT-SPECT) and clinical (MDS-UPDRS Part III), with 24–36 months of follow-up.</li><li>Commitment to public data sharing in line with the Michael J. Fox Foundation’s open science philosophy.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 19 Jan 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/ba4249c3/caebd46b.mp3" length="40021688" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2499</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry is joined by Dr. Tanya Simuni, Arthur C. Nielsen Jr. Professor of Neurology and Director of the Parkinson’s Disease and Movement Disorders Center at Northwestern University, and Dr. Barbara Wendelberger, Senior Statistical Scientist at Berry Consultants. The conversation focuses on the Path to Prevention (P2P) platform trial—an international, multi-arm prevention study in Parkinson’s disease targeting participants defined by biological markers, specifically alpha-synuclein pathology, prior to clinical diagnosis. The discussion covers the PPMI cohort, trial operational and statistical structure, the rationale behind biomarker-driven inclusion, and the use of Bayesian platform trial design.</p><p><strong>Key Highlights</strong></p><ul><li>Parkinson’s disease pathobiology and risk: genotype-phenotype variability, multi-system involvement, and the central roles of age, environment, and genetics.</li><li>Michael J. Fox Foundation’s PPMI cohort: 4,000+ participants, prospective longitudinal biomarker and clinical data, high participant retention, enabling study of early Parkinson’s.</li><li>P2P platform structure: multi-arm design, two-stage randomization with shared placebo group, integration of non-randomized PPMI cohort in Bayesian analysis for improved inference.</li><li>Inclusion criteria: prodromal population biologically defined by CSF alpha-synuclein seed amplification and dopaminergic imaging (DAT-SPECT), highlighting regulatory nuances.</li><li>Dual primary endpoints: biomarker (DAT-SPECT) and clinical (MDS-UPDRS Part III), with 24–36 months of follow-up.</li><li>Commitment to public data sharing in line with the Michael J. Fox Foundation’s open science philosophy.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ba4249c3/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ba4249c3/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Statistical Communication</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>Statistical Communication</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5c6169d7-975b-443c-9bb0-0417872c70a4</guid>
      <link>https://share.transistor.fm/s/065668e0</link>
      <description>
        <![CDATA[<p>In this episode of “In the Interim…,” host Dr. Scott Berry examines the challenge of communicating complex statistical concepts to non-statistical audiences. Drawing from firsthand experiences in agriculture, professional golf, and clinical development, as well as examples involving historical and scientific figures, Scott reflects on why technical rigor alone often fails to persuade. The discussion focuses on the consequences of mismatched language, the importance of empathy, and the utility of simulation when bridging the gap between analysis and stakeholder understanding.</p><p><strong>Key Highlights</strong></p><ul><li>Barriers to statistical communication, illustrated with stories from farming, golf, and early-career encounters.</li><li>Examples involving John Glenn, Ada Lovelace, and Charles Babbage, showing how communication, not just science, determines impact.</li><li>Insights from Alan Alda on empathy as a foundational tool for scientists presenting technical ideas.</li><li>Clinical trial simulations that reveal knowledge gaps—such as misunderstanding of power—when communicating with decision-makers.</li><li>The necessity of translating analytic outputs into operational, financial, or clinical language for meaningful impact.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of “In the Interim…,” host Dr. Scott Berry examines the challenge of communicating complex statistical concepts to non-statistical audiences. Drawing from firsthand experiences in agriculture, professional golf, and clinical development, as well as examples involving historical and scientific figures, Scott reflects on why technical rigor alone often fails to persuade. The discussion focuses on the consequences of mismatched language, the importance of empathy, and the utility of simulation when bridging the gap between analysis and stakeholder understanding.</p><p><strong>Key Highlights</strong></p><ul><li>Barriers to statistical communication, illustrated with stories from farming, golf, and early-career encounters.</li><li>Examples involving John Glenn, Ada Lovelace, and Charles Babbage, showing how communication, not just science, determines impact.</li><li>Insights from Alan Alda on empathy as a foundational tool for scientists presenting technical ideas.</li><li>Clinical trial simulations that reveal knowledge gaps—such as misunderstanding of power—when communicating with decision-makers.</li><li>The necessity of translating analytic outputs into operational, financial, or clinical language for meaningful impact.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 12 Jan 2026 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/065668e0/c50f5eac.mp3" length="39899605" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2491</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of “In the Interim…,” host Dr. Scott Berry examines the challenge of communicating complex statistical concepts to non-statistical audiences. Drawing from firsthand experiences in agriculture, professional golf, and clinical development, as well as examples involving historical and scientific figures, Scott reflects on why technical rigor alone often fails to persuade. The discussion focuses on the consequences of mismatched language, the importance of empathy, and the utility of simulation when bridging the gap between analysis and stakeholder understanding.</p><p><strong>Key Highlights</strong></p><ul><li>Barriers to statistical communication, illustrated with stories from farming, golf, and early-career encounters.</li><li>Examples involving John Glenn, Ada Lovelace, and Charles Babbage, showing how communication, not just science, determines impact.</li><li>Insights from Alan Alda on empathy as a foundational tool for scientists presenting technical ideas.</li><li>Clinical trial simulations that reveal knowledge gaps—such as misunderstanding of power—when communicating with decision-makers.</li><li>The necessity of translating analytic outputs into operational, financial, or clinical language for meaningful impact.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/065668e0/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/065668e0/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Rumor of One Trial for Substantial Evidence</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>The Rumor of One Trial for Substantial Evidence</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bf0ce81a-9e91-47b1-aa5c-4121a6a84467</guid>
      <link>https://share.transistor.fm/s/899e9cbc</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry and frequent co-host Dr. Kert Viele, Senior Statistical Scientist at Berry Consultants, analyze the potential shift in FDA regulatory policy from requiring two independent trials to accepting a single trial as sufficient for “substantial evidence” in drug approvals. Reflecting on the statutory and regulatory definitions originating with the 1962 Kefauver-Harris Amendments to the Federal Food, Drug, and Cosmetic Act and 21 CFR 314.126, they dissect current and emerging interpretations, referencing recent statements by Dr. Martin Makary and coverage described in a STAT article. The conversation focuses on the scientific and statistical foundations of the two-trial threshold, challenges with dichotomous results, and how pooled evidence might increase efficiency and rigor. They discuss statistical implications, including alpha thresholds, sample size effects, program power, and the consequences for clinical labeling. The episode also introduces Bayesian approaches as a method for integrating totality of evidence. Attention is given to both population breadth and the possible risks of a narrowed evidentiary base under a single-trial standard.</p><p><strong>Key Highlights</strong></p><ul><li>Regulatory and historical context of “substantial evidence” since 1962 and current FDA directives.</li><li>Industry practice: simultaneous Phase III trials, statistical power, and evidentiary replication.</li><li>Criticism of binary, trial-level significance thresholds; merits of pooling or meta-analysis.</li><li>Potential efficiency gains and tradeoffs with a more stringent alpha requirement for single trials.</li><li>Strategic and operational effects on trial design, sample size, and label indications.</li><li>Bayesian statistical approaches for full evidence integration, discussed as an analytical viewpoint.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry and frequent co-host Dr. Kert Viele, Senior Statistical Scientist at Berry Consultants, analyze the potential shift in FDA regulatory policy from requiring two independent trials to accepting a single trial as sufficient for “substantial evidence” in drug approvals. Reflecting on the statutory and regulatory definitions originating with the 1962 Kefauver-Harris Amendments to the Federal Food, Drug, and Cosmetic Act and 21 CFR 314.126, they dissect current and emerging interpretations, referencing recent statements by Dr. Martin Makary and coverage described in a STAT article. The conversation focuses on the scientific and statistical foundations of the two-trial threshold, challenges with dichotomous results, and how pooled evidence might increase efficiency and rigor. They discuss statistical implications, including alpha thresholds, sample size effects, program power, and the consequences for clinical labeling. The episode also introduces Bayesian approaches as a method for integrating totality of evidence. Attention is given to both population breadth and the possible risks of a narrowed evidentiary base under a single-trial standard.</p><p><strong>Key Highlights</strong></p><ul><li>Regulatory and historical context of “substantial evidence” since 1962 and current FDA directives.</li><li>Industry practice: simultaneous Phase III trials, statistical power, and evidentiary replication.</li><li>Criticism of binary, trial-level significance thresholds; merits of pooling or meta-analysis.</li><li>Potential efficiency gains and tradeoffs with a more stringent alpha requirement for single trials.</li><li>Strategic and operational effects on trial design, sample size, and label indications.</li><li>Bayesian statistical approaches for full evidence integration, discussed as an analytical viewpoint.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 29 Dec 2025 07:02:03 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/899e9cbc/a393b38a.mp3" length="38612220" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2411</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry and frequent co-host Dr. Kert Viele, Senior Statistical Scientist at Berry Consultants, analyze the potential shift in FDA regulatory policy from requiring two independent trials to accepting a single trial as sufficient for “substantial evidence” in drug approvals. Reflecting on the statutory and regulatory definitions originating with the 1962 Kefauver-Harris Amendments to the Federal Food, Drug, and Cosmetic Act and 21 CFR 314.126, they dissect current and emerging interpretations, referencing recent statements by Dr. Martin Makary and coverage described in a STAT article. The conversation focuses on the scientific and statistical foundations of the two-trial threshold, challenges with dichotomous results, and how pooled evidence might increase efficiency and rigor. They discuss statistical implications, including alpha thresholds, sample size effects, program power, and the consequences for clinical labeling. The episode also introduces Bayesian approaches as a method for integrating totality of evidence. Attention is given to both population breadth and the possible risks of a narrowed evidentiary base under a single-trial standard.</p><p><strong>Key Highlights</strong></p><ul><li>Regulatory and historical context of “substantial evidence” since 1962 and current FDA directives.</li><li>Industry practice: simultaneous Phase III trials, statistical power, and evidentiary replication.</li><li>Criticism of binary, trial-level significance thresholds; merits of pooling or meta-analysis.</li><li>Potential efficiency gains and tradeoffs with a more stringent alpha requirement for single trials.</li><li>Strategic and operational effects on trial design, sample size, and label indications.</li><li>Bayesian statistical approaches for full evidence integration, discussed as an analytical viewpoint.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
    </item>
    <item>
      <title>Communication for Scientists: A Discussion with Jenny Devenport</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>Communication for Scientists: A Discussion with Jenny Devenport</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">60d53b16-de71-471c-b29c-4883178fc9f4</guid>
      <link>https://share.transistor.fm/s/c57f14d3</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Jenny Devenport, Global Head of Methods, Collaboration, and Outreach at Roche, joins Dr. Scott Berry for a detailed discussion on career evolution, statistical culture, and communication in the pharmaceutical industry. Dr. Devenport describes her transition from psychology in New Mexico to statistical leadership in Basel, emphasizing the formative role of early academic mentors and her experience working across the US and Europe. She outlines her current functions in methods development, internal collaboration, and industry outreach, highlighting active engagement with academic and regulatory communities. The episode scrutinizes differences in workplace culture, such as the emphasis on debate and long-term collaboration in Europe, and differences in educational backgrounds among statisticians. The conversation covers practical barriers to and the slow adoption of Bayesian methods, the role of communication in the acceptance of futility analyses in pharma, the importance of scale in problem-solving, and the emergence of AI as a tool for statisticians. Dr. Devenport provides pragmatic strategies for statisticians to improve their influence through tailored, audience-specific communication.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Devenport’s academic and geographic move from the US to Europe</li><li>Responsibilities in methods development, collaboration, and outreach at Roche</li><li>Contrasts in US and European pharmaceutical statistics cultures</li><li>Measured perspective on AI’s effect on statisticians’ responsibilities</li><li>Practical guidance for statisticians on communication and influence</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Jenny Devenport, Global Head of Methods, Collaboration, and Outreach at Roche, joins Dr. Scott Berry for a detailed discussion on career evolution, statistical culture, and communication in the pharmaceutical industry. Dr. Devenport describes her transition from psychology in New Mexico to statistical leadership in Basel, emphasizing the formative role of early academic mentors and her experience working across the US and Europe. She outlines her current functions in methods development, internal collaboration, and industry outreach, highlighting active engagement with academic and regulatory communities. The episode scrutinizes differences in workplace culture, such as the emphasis on debate and long-term collaboration in Europe, and differences in educational backgrounds among statisticians. The conversation covers practical barriers to and the slow adoption of Bayesian methods, the role of communication in the acceptance of futility analyses in pharma, the importance of scale in problem-solving, and the emergence of AI as a tool for statisticians. Dr. Devenport provides pragmatic strategies for statisticians to improve their influence through tailored, audience-specific communication.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Devenport’s academic and geographic move from the US to Europe</li><li>Responsibilities in methods development, collaboration, and outreach at Roche</li><li>Contrasts in US and European pharmaceutical statistics cultures</li><li>Measured perspective on AI’s effect on statisticians’ responsibilities</li><li>Practical guidance for statisticians on communication and influence</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 22 Dec 2025 09:43:38 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/c57f14d3/aeeca1f8.mp3" length="38071413" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2377</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Jenny Devenport, Global Head of Methods, Collaboration, and Outreach at Roche, joins Dr. Scott Berry for a detailed discussion on career evolution, statistical culture, and communication in the pharmaceutical industry. Dr. Devenport describes her transition from psychology in New Mexico to statistical leadership in Basel, emphasizing the formative role of early academic mentors and her experience working across the US and Europe. She outlines her current functions in methods development, internal collaboration, and industry outreach, highlighting active engagement with academic and regulatory communities. The episode scrutinizes differences in workplace culture, such as the emphasis on debate and long-term collaboration in Europe, and differences in educational backgrounds among statisticians. The conversation covers practical barriers to and the slow adoption of Bayesian methods, the role of communication in the acceptance of futility analyses in pharma, the importance of scale in problem-solving, and the emergence of AI as a tool for statisticians. Dr. Devenport provides pragmatic strategies for statisticians to improve their influence through tailored, audience-specific communication.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Devenport’s academic and geographic move from the US to Europe</li><li>Responsibilities in methods development, collaboration, and outreach at Roche</li><li>Contrasts in US and European pharmaceutical statistics cultures</li><li>Measured perspective on AI’s effect on statisticians’ responsibilities</li><li>Practical guidance for statisticians on communication and influence</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
    </item>
    <item>
      <title>Navigating the Arena: Platform Trials</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>Navigating the Arena: Platform Trials</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1da9f98e-c199-419e-9730-f0e0c7fca570</guid>
      <link>https://share.transistor.fm/s/02a02637</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry delivers a metaphoric critique of single-question trial infrastructure through the sports arena analogy, illustrating the cost, patient burden, and data inefficiency of conventional clinical trials. He provides a methodical comparison of traditional trial models and the platform trial approach, clarifying distinctions between platform, basket, and master protocol structures. Through examples from HEALEY ALS, I-SPY 2, PALM (Ebola), REMAP-CAP, RECOVERY, EPAD, GBM AGILE, and Precision Promise, Scott outlines the measurable efficiencies of platform trials: shared control arms, flexible arm addition and removal, reduced placebo exposure, accelerated timelines, and improved statistical inferences. The episode further examines platform trial performance during the COVID-19 pandemic, highlighting trial adaptability and the rapid generation of actionable evidence. Scott also addresses failure scenarios, focusing on EPAD Alzheimer’s as a cautionary case in platform sustainability, cost allocation, and initial funding barriers. Listeners will gain a perspective on the operational and statistical design choices governing today’s most innovative clinical studies.</p><p><strong>Key Highlights</strong></p><ul><li>Arena analogy applied to delineate clinical research inefficiency.</li><li>Operational, statistical, and patient-focused efficiencies in platform versus single-question trials.</li><li>Precision in terminology: platform, basket, and master protocol definitions.</li><li>Effects of platform trials on speed and scientific rigor.</li><li>Factors underlying both platform trial successes and failures.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry delivers a metaphoric critique of single-question trial infrastructure through the sports arena analogy, illustrating the cost, patient burden, and data inefficiency of conventional clinical trials. He provides a methodical comparison of traditional trial models and the platform trial approach, clarifying distinctions between platform, basket, and master protocol structures. Through examples from HEALEY ALS, I-SPY 2, PALM (Ebola), REMAP-CAP, RECOVERY, EPAD, GBM AGILE, and Precision Promise, Scott outlines the measurable efficiencies of platform trials: shared control arms, flexible arm addition and removal, reduced placebo exposure, accelerated timelines, and improved statistical inferences. The episode further examines platform trial performance during the COVID-19 pandemic, highlighting trial adaptability and the rapid generation of actionable evidence. Scott also addresses failure scenarios, focusing on EPAD Alzheimer’s as a cautionary case in platform sustainability, cost allocation, and initial funding barriers. Listeners will gain a perspective on the operational and statistical design choices governing today’s most innovative clinical studies.</p><p><strong>Key Highlights</strong></p><ul><li>Arena analogy applied to delineate clinical research inefficiency.</li><li>Operational, statistical, and patient-focused efficiencies in platform versus single-question trials.</li><li>Precision in terminology: platform, basket, and master protocol definitions.</li><li>Effects of platform trials on speed and scientific rigor.</li><li>Factors underlying both platform trial successes and failures.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 15 Dec 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/02a02637/fb79125c.mp3" length="48461093" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3027</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry delivers a metaphoric critique of single-question trial infrastructure through the sports arena analogy, illustrating the cost, patient burden, and data inefficiency of conventional clinical trials. He provides a methodical comparison of traditional trial models and the platform trial approach, clarifying distinctions between platform, basket, and master protocol structures. Through examples from HEALEY ALS, I-SPY 2, PALM (Ebola), REMAP-CAP, RECOVERY, EPAD, GBM AGILE, and Precision Promise, Scott outlines the measurable efficiencies of platform trials: shared control arms, flexible arm addition and removal, reduced placebo exposure, accelerated timelines, and improved statistical inferences. The episode further examines platform trial performance during the COVID-19 pandemic, highlighting trial adaptability and the rapid generation of actionable evidence. Scott also addresses failure scenarios, focusing on EPAD Alzheimer’s as a cautionary case in platform sustainability, cost allocation, and initial funding barriers. Listeners will gain a perspective on the operational and statistical design choices governing today’s most innovative clinical studies.</p><p><strong>Key Highlights</strong></p><ul><li>Arena analogy applied to delineate clinical research inefficiency.</li><li>Operational, statistical, and patient-focused efficiencies in platform versus single-question trials.</li><li>Precision in terminology: platform, basket, and master protocol definitions.</li><li>Effects of platform trials on speed and scientific rigor.</li><li>Factors underlying both platform trial successes and failures.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/02a02637/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/02a02637/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Jumping Hurdles: Interim Analyses for Funding Decisions</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>Jumping Hurdles: Interim Analyses for Funding Decisions</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a82dac46-1463-412e-a7de-40a40821faaa</guid>
      <link>https://share.transistor.fm/s/1af92728</link>
      <description>
        <![CDATA[<p>In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a typical hypothetical scenario for rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria—using Bayesian predictive probability and assurance—and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. Practical, empirical, and tailored to real funding barriers in clinical research.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical structure and efficiency of seamless Phase II/III trial designs</li><li>Administrative (financial) interim analysis setup as funding decision triggers, distinct from futility analyses</li><li>FDA operational bias guidance and requirements for trial blinding</li><li>Predictive probability and assurance as objective interim criteria</li><li>Sample data and simulation outputs to facilitate stakeholder alignment</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a typical hypothetical scenario for rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria—using Bayesian predictive probability and assurance—and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. Practical, empirical, and tailored to real funding barriers in clinical research.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical structure and efficiency of seamless Phase II/III trial designs</li><li>Administrative (financial) interim analysis setup as funding decision triggers, distinct from futility analyses</li><li>FDA operational bias guidance and requirements for trial blinding</li><li>Predictive probability and assurance as objective interim criteria</li><li>Sample data and simulation outputs to facilitate stakeholder alignment</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 08 Dec 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/1af92728/51f9db2f.mp3" length="40663248" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2539</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a typical hypothetical scenario for rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria—using Bayesian predictive probability and assurance—and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. Practical, empirical, and tailored to real funding barriers in clinical research.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical structure and efficiency of seamless Phase II/III trial designs</li><li>Administrative (financial) interim analysis setup as funding decision triggers, distinct from futility analyses</li><li>FDA operational bias guidance and requirements for trial blinding</li><li>Predictive probability and assurance as objective interim criteria</li><li>Sample data and simulation outputs to facilitate stakeholder alignment</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1af92728/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/1af92728/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Discussion with Kaspar Rufibach</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>Discussion with Kaspar Rufibach</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">80b8c8d0-fe20-4417-a013-015def79d966</guid>
      <link>https://share.transistor.fm/s/f394f058</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry interviews Dr. Kaspar Rufibach, Co-Head of Advanced Biostatistical Sciences at Merck. The conversation tracks Rufibach’s evolution from academic training in actuarial and mathematical statistics through cancer research collaborations, postdoctoral work, and academic consulting, leading to applied roles in Roche and Merck. Discussion centers on methodological rigor, pragmatic approaches to assurance and predictive probability, and real-world experience in drug development. Rufibach examines the organizational integration of quantitative disciplines at Merck—incorporating pharmacology, real-world data, statistics, programming, and data science—while remaining candid on the role and boundaries of AI in current pharmaceutical practice.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical education in Switzerland, bridging theory and early applied cancer trial experience</li><li>Move from academic consulting to a trial statistician role at Roche, emphasizing structured problem-solving in drug development</li><li>Approach to predictive probability and assurance, balancing Bayesian and frequentist tools with strict emphasis on practicality</li><li>Formation of professional special interest groups with EFSPI and PSI, stepping in to address unmet community needs rather than seeking formal leadership</li><li>Perspective on Merck’s unified quantitative department, designed to remove silos and leverage interdisciplinary expertise</li><li>Cautious view of AI as a complement to specific tasks, but not yet a replacement for nuanced clinical trial design or regulatory-facing strategies</li><li>Current focus on expanding causal inference methods and multi-state modeling for improved trial efficiency and evidence synthesis</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry interviews Dr. Kaspar Rufibach, Co-Head of Advanced Biostatistical Sciences at Merck. The conversation tracks Rufibach’s evolution from academic training in actuarial and mathematical statistics through cancer research collaborations, postdoctoral work, and academic consulting, leading to applied roles in Roche and Merck. Discussion centers on methodological rigor, pragmatic approaches to assurance and predictive probability, and real-world experience in drug development. Rufibach examines the organizational integration of quantitative disciplines at Merck—incorporating pharmacology, real-world data, statistics, programming, and data science—while remaining candid on the role and boundaries of AI in current pharmaceutical practice.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical education in Switzerland, bridging theory and early applied cancer trial experience</li><li>Move from academic consulting to a trial statistician role at Roche, emphasizing structured problem-solving in drug development</li><li>Approach to predictive probability and assurance, balancing Bayesian and frequentist tools with strict emphasis on practicality</li><li>Formation of professional special interest groups with EFSPI and PSI, stepping in to address unmet community needs rather than seeking formal leadership</li><li>Perspective on Merck’s unified quantitative department, designed to remove silos and leverage interdisciplinary expertise</li><li>Cautious view of AI as a complement to specific tasks, but not yet a replacement for nuanced clinical trial design or regulatory-facing strategies</li><li>Current focus on expanding causal inference methods and multi-state modeling for improved trial efficiency and evidence synthesis</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 01 Dec 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/f394f058/a0737322.mp3" length="45439242" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2838</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim...", Dr. Scott Berry interviews Dr. Kaspar Rufibach, Co-Head of Advanced Biostatistical Sciences at Merck. The conversation tracks Rufibach’s evolution from academic training in actuarial and mathematical statistics through cancer research collaborations, postdoctoral work, and academic consulting, leading to applied roles in Roche and Merck. Discussion centers on methodological rigor, pragmatic approaches to assurance and predictive probability, and real-world experience in drug development. Rufibach examines the organizational integration of quantitative disciplines at Merck—incorporating pharmacology, real-world data, statistics, programming, and data science—while remaining candid on the role and boundaries of AI in current pharmaceutical practice.</p><p><strong>Key Highlights</strong></p><ul><li>Statistical education in Switzerland, bridging theory and early applied cancer trial experience</li><li>Move from academic consulting to a trial statistician role at Roche, emphasizing structured problem-solving in drug development</li><li>Approach to predictive probability and assurance, balancing Bayesian and frequentist tools with strict emphasis on practicality</li><li>Formation of professional special interest groups with EFSPI and PSI, stepping in to address unmet community needs rather than seeking formal leadership</li><li>Perspective on Merck’s unified quantitative department, designed to remove silos and leverage interdisciplinary expertise</li><li>Cautious view of AI as a complement to specific tasks, but not yet a replacement for nuanced clinical trial design or regulatory-facing strategies</li><li>Current focus on expanding causal inference methods and multi-state modeling for improved trial efficiency and evidence synthesis</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/f394f058/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/f394f058/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Bayesian Statistics in Clinical Trials: The Past, Present, and Future</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Bayesian Statistics in Clinical Trials: The Past, Present, and Future</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8cc06159-25a5-4464-80e8-6c65baaa5daa</guid>
      <link>https://share.transistor.fm/s/91f4d6e5</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…" guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research.</p><p><strong>Key Highlights:</strong></p><ul><li>Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods</li><li>Ethical tension in trial conduct, referencing the Belmont Report and equipoise</li><li>Advances in computation and Markov Chain Monte Carlo (MCMC)</li><li>Regulatory frameworks for Bayesian adaptive trials, including FDA guidance</li><li>Implementation details from I-SPY 2 and REMAP-CAP platform trials</li><li>Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…" guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research.</p><p><strong>Key Highlights:</strong></p><ul><li>Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods</li><li>Ethical tension in trial conduct, referencing the Belmont Report and equipoise</li><li>Advances in computation and Markov Chain Monte Carlo (MCMC)</li><li>Regulatory frameworks for Bayesian adaptive trials, including FDA guidance</li><li>Implementation details from I-SPY 2 and REMAP-CAP platform trials</li><li>Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 24 Nov 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/91f4d6e5/604cbadb.mp3" length="64629876" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>4037</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research.</p><p><strong>Key Highlights:</strong></p><ul><li>Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods</li><li>Ethical tension in trial conduct, referencing the Belmont Report and equipoise</li><li>Advances in computation and Markov Chain Monte Carlo (MCMC)</li><li>Regulatory frameworks for Bayesian adaptive trials, including FDA guidance</li><li>Implementation details from I-SPY 2 and REMAP-CAP platform trials</li><li>Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:person role="Guest" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/sMT7m6cLBxpBe68Y93f4thTn4HeRQul45USdMF7yR40/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZTBl/ZjBlMmFkZjU3NjYx/OTI0MmYzY2E0NWQ0/OTIyMC5wbmc.jpg">Don Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/91f4d6e5/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/91f4d6e5/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Stroke Neurologist Dr. Jeff Saver</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>A Visit with Stroke Neurologist Dr. Jeff Saver</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a12d3d7d-57e4-4f5a-a33a-2d6ca06d6010</guid>
      <link>https://share.transistor.fm/s/c9ee28cb</link>
      <description>
        <![CDATA[<p>In episode 37 of "In the Interim…", Dr. Jeff Saver, Director of the UCLA Comprehensive Stroke and Vascular Neurology Program, details his shift from behavioral neurology to clinical stroke research after early engagement with multicenter trials like TOAST. The discussion covers the biology of acute ischemic stroke, quantifying neuronal loss, and the scientific underpinnings of “time is brain.” Dr. Saver outlines the evolution of endovascular therapy, from early device challenges to current reperfusion success rates exceeding 85%. Key methodological issues in stroke trial analyses are presented, including debate over endpoint selection—dichotomous versus ordinal approaches and the limitations therein. Special focus is placed on the utility-weighted modified Rankin Scale, which assigns empirically derived, patient-centered health values to each disability state, providing a comprehensive measure that captures both benefit and harm. The episode explores regulatory hesitancy, differing analytic preferences within the field, and the design prospects for neuroprotectant interventions. Heterogeneity in patient outcomes and implications for public health and trial methodology are addressed. 
The episode provides an empirical account of clinical trial endpoint selection, interpretation, and future directions in cerebrovascular research.</p><p><strong>Key Highlights</strong></p><ul><li>Early career influences and pivotal trial participation.</li><li>Pathophysiology and quantification of acute stroke injury.</li><li>Endovascular device development and clinical impact.</li><li>Comparative analysis of endpoint methods: dichotomous, ordinal, and utility-weighted approaches.</li><li>Technical derivation and application of utility-weighted mRS.</li><li>Ongoing regulatory and methodological debate.</li><li>Heterogeneity in ischemic vulnerability and future trial directions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In episode 37 of "In the Interim…", Dr. Jeff Saver, Director of the UCLA Comprehensive Stroke and Vascular Neurology Program, details his shift from behavioral neurology to clinical stroke research after early engagement with multicenter trials like TOAST. The discussion covers the biology of acute ischemic stroke, quantifying neuronal loss, and the scientific underpinnings of “time is brain.” Dr. Saver outlines the evolution of endovascular therapy, from early device challenges to current reperfusion success rates exceeding 85%. Key methodological issues in stroke trial analyses are presented, including debate over endpoint selection—dichotomous versus ordinal approaches and the limitations therein. Special focus is placed on the utility-weighted modified Rankin Scale, which assigns empirically derived, patient-centered health values to each disability state, providing a comprehensive measure that captures both benefit and harm. The episode explores regulatory hesitancy, differing analytic preferences within the field, and the design prospects for neuroprotectant interventions. Heterogeneity in patient outcomes and implications for public health and trial methodology are addressed. 
The episode provides an empirical account of clinical trial endpoint selection, interpretation, and future directions in cerebrovascular research.</p><p><strong>Key Highlights</strong></p><ul><li>Early career influences and pivotal trial participation.</li><li>Pathophysiology and quantification of acute stroke injury.</li><li>Endovascular device development and clinical impact.</li><li>Comparative analysis of endpoint methods: dichotomous, ordinal, and utility-weighted approaches.</li><li>Technical derivation and application of utility-weighted mRS.</li><li>Ongoing regulatory and methodological debate.</li><li>Heterogeneity in ischemic vulnerability and future trial directions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 17 Nov 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/c9ee28cb/14276f47.mp3" length="35547835" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2219</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In episode 37 of "In the Interim…", Dr. Jeff Saver, Director of the UCLA Comprehensive Stroke and Vascular Neurology Program, details his shift from behavioral neurology to clinical stroke research after early engagement with multicenter trials like TOAST. The discussion covers the biology of acute ischemic stroke, quantifying neuronal loss, and the scientific underpinnings of “time is brain.” Dr. Saver outlines the evolution of endovascular therapy, from early device challenges to current reperfusion success rates exceeding 85%. Key methodological issues in stroke trial analyses are presented, including debate over endpoint selection—dichotomous versus ordinal approaches and the limitations therein. Special focus is placed on the utility-weighted modified Rankin Scale, which assigns empirically derived, patient-centered health values to each disability state, providing a comprehensive measure that captures both benefit and harm. The episode explores regulatory hesitancy, differing analytic preferences within the field, and the design prospects for neuroprotectant interventions. Heterogeneity in patient outcomes and implications for public health and trial methodology are addressed. 
The episode provides an empirical account of clinical trial endpoint selection, interpretation, and future directions in cerebrovascular research.</p><p><strong>Key Highlights</strong></p><ul><li>Early career influences and pivotal trial participation.</li><li>Pathophysiology and quantification of acute stroke injury.</li><li>Endovascular device development and clinical impact.</li><li>Comparative analysis of endpoint methods: dichotomous, ordinal, and utility-weighted approaches.</li><li>Technical derivation and application of utility-weighted mRS.</li><li>Ongoing regulatory and methodological debate.</li><li>Heterogeneity in ischemic vulnerability and future trial directions.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c9ee28cb/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/c9ee28cb/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Saga of the Lecanemab Adaptive Phase II Trial</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>The Saga of the Lecanemab Adaptive Phase II Trial</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0ca1f04b-d0b2-43f5-92d0-7b3270872026</guid>
      <link>https://share.transistor.fm/s/2d87b022</link>
      <description>
        <![CDATA[<p>In Episode 36 of "In the Interim…", Dr. Scott Berry and Dr. Don Berry analyze the Phase II trial of Lecanemab (BAN2401) in Alzheimer’s disease, focusing on the application of adaptive Bayesian methods following persistent failures in Alzheimer’s drug development. The conversation covers the specific design features of five active arms, response adaptive randomization, and a longitudinal Bayesian model driving interim decisions, as well as direct operational and statistical challenges encountered during the trial. The hosts address regulatory proceedings, critique from "experts" regarding adaptive methods on noisy cognitive endpoints, and the direct alignment of the trial’s Bayesian 18-month efficacy estimates with the subsequent Phase III results and regulatory approvals.</p><p><strong>Key Highlights</strong></p><ul><li>Alzheimer’s drug development context: Widespread Phase III failures prompted a retreat from conventional trial designs and a demand for greater rigor and adaptability.</li><li>Lecanemab Phase II methodology: Five active arms, two dosing schedules, response adaptive randomization, and adaptive interim analyses every 50 patients enabled real-time adjustment and efficient dose evaluation.</li><li>Bayesian modeling and imputation: Use of a longitudinal model to address missing data, forecast 12- and 18-month outcomes, and inform both allocation and stopping criteria.</li><li>Operational adaptations: The design accommodated unplanned safety restrictions, such as stratified randomization for APOE4-positive participants after ARIA signals.</li><li>Expert skepticism: Addressed Paul Aisen’s concerns about adapting to noisy interim cognitive data, emphasizing safeguards against erroneous stopping or success.</li><li>Regulatory outcome: The 18-month efficacy estimates from Bayesian modeling during Phase II matched Phase III findings; FDA granted accelerated approval based on amyloid reduction and later full approval after Phase III confirmation.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In Episode 36 of "In the Interim…", Dr. Scott Berry and Dr. Don Berry analyze the Phase II trial of Lecanemab (BAN2401) in Alzheimer’s disease, focusing on the application of adaptive Bayesian methods following persistent failures in Alzheimer’s drug development. The conversation covers the specific design features of five active arms, response adaptive randomization, and a longitudinal Bayesian model driving interim decisions, as well as direct operational and statistical challenges encountered during the trial. The hosts address regulatory proceedings, critique from "experts" regarding adaptive methods on noisy cognitive endpoints, and the direct alignment of the trial’s Bayesian 18-month efficacy estimates with the subsequent Phase III results and regulatory approvals.</p><p><strong>Key Highlights</strong></p><ul><li>Alzheimer’s drug development context: Widespread Phase III failures prompted a retreat from conventional trial designs and a demand for greater rigor and adaptability.</li><li>Lecanemab Phase II methodology: Five active arms, two dosing schedules, response adaptive randomization, and adaptive interim analyses every 50 patients enabled real-time adjustment and efficient dose evaluation.</li><li>Bayesian modeling and imputation: Use of a longitudinal model to address missing data, forecast 12- and 18-month outcomes, and inform both allocation and stopping criteria.</li><li>Operational adaptations: The design accommodated unplanned safety restrictions, such as stratified randomization for APOE4-positive participants after ARIA signals.</li><li>Expert skepticism: Addressed Paul Aisen’s concerns about adapting to noisy interim cognitive data, emphasizing safeguards against erroneous stopping or success.</li><li>Regulatory outcome: The 18-month efficacy estimates from Bayesian modeling during Phase II matched Phase III findings; FDA granted accelerated approval based on amyloid reduction and later full approval after Phase III confirmation.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 10 Nov 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/2d87b022/91061115.mp3" length="49731283" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>3106</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In Episode 36 of "In the Interim…", Dr. Scott Berry and Dr. Don Berry analyze the Phase II trial of Lecanemab (BAN2401) in Alzheimer’s disease, focusing on the application of adaptive Bayesian methods following persistent failures in Alzheimer’s drug development. The conversation covers the specific design features of five active arms, response adaptive randomization, and a longitudinal Bayesian model driving interim decisions, as well as direct operational and statistical challenges encountered during the trial. The hosts address regulatory proceedings, critique from "experts" regarding adaptive methods on noisy cognitive endpoints, and the direct alignment of the trial’s Bayesian 18-month efficacy estimates with the subsequent Phase III results and regulatory approvals.</p><p><strong>Key Highlights</strong></p><ul><li>Alzheimer’s drug development context: Widespread Phase III failures prompted a retreat from conventional trial designs and a demand for greater rigor and adaptability.</li><li>Lecanemab Phase II methodology: Five active arms, two dosing schedules, response adaptive randomization, and adaptive interim analyses every 50 patients enabled real-time adjustment and efficient dose evaluation.</li><li>Bayesian modeling and imputation: Use of a longitudinal model to address missing data, forecast 12- and 18-month outcomes, and inform both allocation and stopping criteria.</li><li>Operational adaptations: The design accommodated unplanned safety restrictions, such as stratified randomization for APOE4-positive participants after ARIA signals.</li><li>Expert skepticism: Addressed Paul Aisen’s concerns about adapting to noisy interim cognitive data, emphasizing safeguards against erroneous stopping or success.</li><li>Regulatory outcome: The 18-month efficacy estimates from Bayesian modeling during Phase II matched Phase III findings; FDA granted accelerated approval based on amyloid reduction and later full approval after Phase III confirmation.</li></ul><p>For more, visit us at <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/sMT7m6cLBxpBe68Y93f4thTn4HeRQul45USdMF7yR40/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZTBl/ZjBlMmFkZjU3NjYx/OTI0MmYzY2E0NWQ0/OTIyMC5wbmc.jpg">Don Berry</podcast:person>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/2d87b022/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/2d87b022/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Teaching Statistics and Data Science through Sports with Dr. Jim Albert</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>Teaching Statistics and Data Science through Sports with Dr. Jim Albert</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3807d4dd-7e3b-40d3-aa32-d55ed8e4fa3b</guid>
      <link>https://share.transistor.fm/s/ee4f9ef8</link>
      <description>
        <![CDATA[<p>On this episode of “In the Interim…”, which is co-sponsored by the Journal of Statistics and Data Science Education, Dr. Scott Berry talks with Dr. Jim Albert, Professor Emeritus at Bowling Green State University, whose extensive work encompasses Bayesian statistics and computation, sports analytics, and decades of exemplary teaching. Dr. Albert shares insights on integrating sports into statistics education and discusses his transition from academic roots to consulting for the Houston Astros. This episode highlights the evolution of sports statistics—from manual data collection to sophisticated analytics—and critiques traditional metrics in favor of advanced systems. The dialogue explores career opportunities in sports statistics as well as the need for open research avenues in sports analytics, facilitating broader access and distribution of statistical insights.</p><p><strong>Key Highlights</strong></p><ul><li>Use of sports to contextualize statistical concepts, providing practical illustrations over abstract textbook issues</li><li>Exposing misconceptions about randomness, streakiness, and “clutch ability” perpetuated by both public myths and sports simulations</li><li>Analytical evolution from traditional metrics like batting average to advanced assessments like OPS and on-base percentage</li><li>Regression-to-the-mean explained with sports scenarios and its analogous application in clinical trial progression</li><li>Challenges in adopting a unified approach to teaching statistics given students’ diverse cultural and sports familiarity</li><li>Barriers in publishing sports analytics research, prompting initiatives for accessible, open publications</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>On this episode of “In the Interim…”, which is co-sponsored by the Journal of Statistics and Data Science Education, Dr. Scott Berry talks with Dr. Jim Albert, Professor Emeritus at Bowling Green State University, whose extensive work encompasses Bayesian statistics and computation, sports analytics, and decades of exemplary teaching. Dr. Albert shares insights on integrating sports into statistics education and discusses his transition from academic roots to consulting for the Houston Astros. This episode highlights the evolution of sports statistics—from manual data collection to sophisticated analytics—and critiques traditional metrics in favor of advanced systems. The dialogue explores career opportunities in sports statistics as well as the need for open research avenues in sports analytics, facilitating broader access and distribution of statistical insights.</p><p><strong>Key Highlights</strong></p><ul><li>Use of sports to contextualize statistical concepts, providing practical illustrations over abstract textbook issues</li><li>Exposing misconceptions about randomness, streakiness, and “clutch ability” perpetuated by both public myths and sports simulations</li><li>Analytical evolution from traditional metrics like batting average to advanced assessments like OPS and on-base percentage</li><li>Regression-to-the-mean explained with sports scenarios and its analogous application in clinical trial progression</li><li>Challenges in adopting a unified approach to teaching statistics given students’ diverse cultural and sports familiarity</li><li>Barriers in publishing sports analytics research, prompting initiatives for accessible, open publications</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 03 Nov 2025 06:00:00 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/ee4f9ef8/09fd0f91.mp3" length="36688962" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2291</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>On this episode of “In the Interim…”, which is co-sponsored by the Journal of Statistics and Data Science Education, Dr. Scott Berry talks with Dr. Jim Albert, Professor Emeritus at Bowling Green State University, whose extensive work encompasses Bayesian statistics and computation, sports analytics, and decades of exemplary teaching. Dr. Albert shares insights on integrating sports into statistics education and discusses his transition from academic roots to consulting for the Houston Astros. This episode highlights the evolution of sports statistics—from manual data collection to sophisticated analytics—and critiques traditional metrics in favor of advanced systems. The dialogue explores career opportunities in sports statistics as well as the need for open research avenues in sports analytics, facilitating broader access and distribution of statistical insights.</p><p><strong>Key Highlights</strong></p><ul><li>Use of sports to contextualize statistical concepts, providing practical illustrations over abstract textbook issues</li><li>Exposing misconceptions about randomness, streakiness, and “clutch ability” perpetuated by both public myths and sports simulations</li><li>Analytical evolution from traditional metrics like batting average to advanced assessments like OPS and on-base percentage</li><li>Regression-to-the-mean explained with sports scenarios and its analogous application in clinical trial progression</li><li>Challenges in adopting a unified approach to teaching statistics given students’ diverse cultural and sports familiarity</li><li>Barriers in publishing sports analytics research, prompting initiatives for accessible, open publications</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ee4f9ef8/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ee4f9ef8/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Digital Googols</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Digital Googols</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">540006e6-e8cd-46c8-b821-38aafecd8b9a</guid>
      <link>https://share.transistor.fm/s/c6b67df8</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the concept of “digital twins” in clinical trials. He details how simulation of clinical trials is a direct analog of digital twin methodology, allowing for the in silico modeling of the physical trial conduct, enrollment, dropouts, and patient outcomes under varied assumptions. Scott discusses model-based patient prediction and highlights scenarios where prediction of counterfactual outcomes can increase efficiency, particularly in rare disease or limited-data settings. He provides a systematic comparison of Unlearn’s PROCOVA neural network approach with traditional covariate adjustment, noting that proprietary models must demonstrate clear improvement over standard methods, which is unlikely. There is great potential in simulating many digital twins for a patient as an augmentation of, or substitute for, controls.</p><p><strong>Key Highlights</strong></p><ul><li>Defines digital twins using NASA history and Wikipedia.</li><li>Describes clinical trial simulation as a digital twin methodology.</li><li>Examines patient-level model-based prediction and covariate adjustment.</li><li>Compares Unlearn’s PROCOVA with traditional approaches.</li><li>Highlights transparency and reproducibility concerns with proprietary algorithms.</li><li>Asserts that future trial efficiency demands integration of predictive modeling with randomization and large external datasets.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the concept of “digital twins” in clinical trials. He details how simulation of clinical trials is a direct analog of digital twin methodology, allowing for the in silico modeling of the physical trial conduct, enrollment, dropouts, and patient outcomes under varied assumptions. Scott discusses model-based patient prediction and highlights scenarios where prediction of counterfactual outcomes can increase efficiency, particularly in rare disease or limited-data settings. He provides a systematic comparison of Unlearn’s PROCOVA neural network approach with traditional covariate adjustment, noting that proprietary models must demonstrate clear improvement over standard methods, which is unlikely. There is great potential in simulating many digital twins for a patient as an augmentation of, or substitute for, controls.</p><p><strong>Key Highlights</strong></p><ul><li>Defines digital twins using NASA history and Wikipedia.</li><li>Describes clinical trial simulation as a digital twin methodology.</li><li>Examines patient-level model-based prediction and covariate adjustment.</li><li>Compares Unlearn’s PROCOVA with traditional approaches.</li><li>Highlights transparency and reproducibility concerns with proprietary algorithms.</li><li>Asserts that future trial efficiency demands integration of predictive modeling with randomization and large external datasets.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 27 Oct 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/c6b67df8/ae596706.mp3" length="36740660" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2294</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the concept of “digital twins” in clinical trials. He details how simulation of clinical trials is a direct analog of digital twin methodology, allowing for the in silico modeling of the physical trial conduct, enrollment, dropouts, and patient outcomes under varied assumptions. Scott discusses model-based patient prediction and highlights scenarios where prediction of counterfactual outcomes can increase efficiency, particularly in rare disease or limited-data settings. He provides a systematic comparison of Unlearn’s PROCOVA neural network approach with traditional covariate adjustment, noting that proprietary models must demonstrate clear improvement over standard methods, which is unlikely. There is great potential in simulating many digital twins for a patient as an augmentation of, or substitute for, controls.</p><p><strong>Key Highlights</strong></p><ul><li>Defines digital twins using NASA history and Wikipedia.</li><li>Describes clinical trial simulation as a digital twin methodology.</li><li>Examines patient-level model-based prediction and covariate adjustment.</li><li>Compares Unlearn’s PROCOVA with traditional approaches.</li><li>Highlights transparency and reproducibility concerns with proprietary algorithms.</li><li>Asserts that future trial efficiency demands integration of predictive modeling with randomization and large external datasets.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c6b67df8/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/c6b67df8/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Andrew Thomson</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>A Visit with Andrew Thomson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5ad645b1-401c-4284-8d99-30474899d054</guid>
      <link>https://share.transistor.fm/s/ec0938db</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Dr. Andrew Thomson, owner and lead consultant of Regnitio. Thomson discusses his academic progression from mathematics at Cambridge to a Master’s at Southampton and advanced study with Prof. Sylvia Richardson at Imperial College, followed by doctoral work in cluster randomized trials at the London School of Hygiene and Tropical Medicine. He recounts the realities of regulatory roles, including contemplative study of data, working within multidisciplinary teams, and delivering regulatory assessments to senior committees. The episode contrasts EMA’s collaborative cross-country structure with the more centralized FDA process and explores methodological challenges faced by both. Scott and Andrew discuss regulatory expectations for interim analyses, the definition and metrics of trial complexity, and differing approaches to Type I error control across agencies. The conversation also covers the rapid adoption and adaptation of platform trials during COVID-19, and the impact on trial evaluation frameworks. Concluding, Thomson explains the motivation for launching Regnitio, emphasizing how regulatory perspective and multidisciplinary insight can support informed decision-making throughout clinical development.</p><p><strong>Key Highlights</strong></p><ul><li>Academic and professional pathway: Cambridge, Southampton, Imperial College, London School of Hygiene and Tropical Medicine</li><li>Roles as a statistical assessor: analysis, collaborative review, expert panel presentations</li><li>EMA vs. FDA: consensus-driven versus centralized approaches, harmonization challenges</li><li>Trial complexity, interim analyses, and diversity in regulatory interpretations</li><li>Adoption and practicalities of platform trials during the COVID-19 response</li><li>Consulting goals: integrating regulatory perspective and broad expertise for drug development decisions</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbWxnZ1Q4ckwtY0xYXzR2ZVhJLXAycVBlWFB2Z3xBQ3Jtc0tsMGNxeTJsNHgtZWlfclFpTGphWHRIZjVlLWZUZXRvTjZYOF9hWTZseEttTl9UbXdBNU10ckE0cVowWjIwRUxFT1VXMHUyNnlFY3Y2MUtHRkZMT0dVUmhrNzBlY1UzRWFCcm9rN0JVb1JtR2N3OENtaw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=7eGtekMeERM">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Dr. Andrew Thomson, owner and lead consultant of Regnitio. Thomson discusses his academic progression from mathematics at Cambridge to a Master’s at Southampton and advanced study with Prof. Sylvia Richardson at Imperial College, followed by doctoral work in cluster randomized trials at the London School of Hygiene and Tropical Medicine. He recounts the realities of regulatory roles, including contemplative study of data, working within multidisciplinary teams, and delivering regulatory assessments to senior committees. The episode contrasts EMA’s collaborative cross-country structure with the more centralized FDA process and explores methodological challenges faced by both. Scott and Andrew discuss regulatory expectations for interim analyses, the definition and metrics of trial complexity, and differing approaches to Type I error control across agencies. The conversation also covers the rapid adoption and adaptation of platform trials during COVID-19, and the impact on trial evaluation frameworks. Concluding, Thomson explains the motivation for launching Regnitio, emphasizing how regulatory perspective and multidisciplinary insight can support informed decision-making throughout clinical development.</p><p><strong>Key Highlights</strong></p><ul><li>Academic and professional pathway: Cambridge, Southampton, Imperial College, London School of Hygiene and Tropical Medicine</li><li>Roles as a statistical assessor: analysis, collaborative review, expert panel presentations</li><li>EMA vs. FDA: consensus-driven versus centralized approaches, harmonization challenges</li><li>Trial complexity, interim analyses, and diversity in regulatory interpretations</li><li>Adoption and practicalities of platform trials during the COVID-19 response</li><li>Consulting goals: integrating regulatory perspective and broad expertise for drug development decisions</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbWxnZ1Q4ckwtY0xYXzR2ZVhJLXAycVBlWFB2Z3xBQ3Jtc0tsMGNxeTJsNHgtZWlfclFpTGphWHRIZjVlLWZUZXRvTjZYOF9hWTZseEttTl9UbXdBNU10ckE0cVowWjIwRUxFT1VXMHUyNnlFY3Y2MUtHRkZMT0dVUmhrNzBlY1UzRWFCcm9rN0JVb1JtR2N3OENtaw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=7eGtekMeERM">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 20 Oct 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/ec0938db/9ea64f87.mp3" length="41902886" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2617</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry interviews Dr. Andrew Thomson, owner and lead consultant of Regnitio. Thomson discusses his academic progression from mathematics at Cambridge to a Master’s at Southampton and advanced study with Prof. Sylvia Richardson at Imperial College, followed by doctoral work in cluster randomized trials at the London School of Hygiene and Tropical Medicine. He recounts the realities of regulatory roles, including contemplative study of data, working within multidisciplinary teams, and delivering regulatory assessments to senior committees. The episode contrasts EMA’s collaborative cross-country structure with the more centralized FDA process and explores methodological challenges faced by both. Scott and Andrew discuss regulatory expectations for interim analyses, the definition and metrics of trial complexity, and differing approaches to Type I error control across agencies. The conversation also covers the rapid adoption and adaptation of platform trials during COVID-19, and the impact on trial evaluation frameworks. Concluding, Thomson explains the motivation for launching Regnitio, emphasizing how regulatory perspective and multidisciplinary insight can support informed decision-making throughout clinical development.</p><p><strong>Key Highlights</strong></p><ul><li>Academic and professional pathway: Cambridge, Southampton, Imperial College, London School of Hygiene and Tropical Medicine</li><li>Roles as a statistical assessor: analysis, collaborative review, expert panel presentations</li><li>EMA vs. FDA: consensus-driven versus centralized approaches, harmonization challenges</li><li>Trial complexity, interim analyses, and diversity in regulatory interpretations</li><li>Adoption and practicalities of platform trials during the COVID-19 response</li><li>Consulting goals: integrating regulatory perspective and broad expertise for drug development decisions</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbWxnZ1Q4ckwtY0xYXzR2ZVhJLXAycVBlWFB2Z3xBQ3Jtc0tsMGNxeTJsNHgtZWlfclFpTGphWHRIZjVlLWZUZXRvTjZYOF9hWTZseEttTl9UbXdBNU10ckE0cVowWjIwRUxFT1VXMHUyNnlFY3Y2MUtHRkZMT0dVUmhrNzBlY1UzRWFCcm9rN0JVb1JtR2N3OENtaw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=7eGtekMeERM">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ec0938db/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ec0938db/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Moving Clinical Trial Goalposts</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Moving Clinical Trial Goalposts</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fb6902cc-0138-4ef4-9c63-7eb22087821e</guid>
      <link>https://share.transistor.fm/s/a9790d87</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele analyze how regulatory, editorial, and science community standards often impose additional, inconsistent requirements for novel methods in clinical trial design, rarely applied to standard approaches. Examples from oncology, enrichment trials, platform studies, and endpoint analysis illustrate how adaptive and Bayesian designs are frequently subject to higher scrutiny, shifting metrics, or distinct evidentiary demands. The episode covers technical and regulatory issues, such as the selective application of Type 1 error controls, evolving multiplicity guidance, and challenges in ethical reasoning with adaptive allocation. Scott and Kert frame the discussion with empirical comparisons and advocate for the use of clinical trial simulation to ensure fair, metric-driven evaluation of both novel and legacy designs.</p><p><strong>Key Highlights:</strong></p><ul><li>Oncology combination therapy trial with Bayesian borrowing facing heightened regulatory caution versus single-arm historical controls.</li><li>Hierarchical versus pooled analysis in enrichment/basket trials, with focus on error definitions and subgroup effects that have always existed.</li><li>ICH E20 guidance potentially discourages use of enrichment by imposing new subgroup comparison burdens absent from standard trials.</li><li>Platform trial multiplicity rules contrasted with parallel single-arm trials; regulatory stance continues to evolve.</li><li>Ethical debate on adaptive allocation: adaptive randomization is challenged on ethical grounds, yet fixed allocation is deemed acceptable despite the same interim data.</li><li>Critical review of explicit utility weighting in the DAWN trial, despite alternative methods having the same issues.</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbmMtOVE4dzhSeF9ZaFNXRnVGZlhkMW5FcEJDUXxBQ3Jtc0trVVUtcFplX3lNWlFQMWhNaEs5Tk1NX2ZTWFpuYnFaTDRlN3dHSlhESFhQcGRYbGpPRHg2Y052V2pFSTNBZjNhTFF3UlBCaEQwZXJBNE9aUlVMd1V0UVJnZ1ZycVZLcnVSYjZ5MXkzcnFlQlM5WlN4cw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=_t7CkBYCkdY">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele analyze how regulatory, editorial, and science community standards often impose additional, inconsistent requirements for novel methods in clinical trial design, rarely applied to standard approaches. Examples from oncology, enrichment trials, platform studies, and endpoint analysis illustrate how adaptive and Bayesian designs are frequently subject to higher scrutiny, shifting metrics, or distinct evidentiary demands. The episode covers technical and regulatory issues, such as the selective application of Type 1 error controls, evolving multiplicity guidance, and challenges in ethical reasoning with adaptive allocation. Scott and Kert frame the discussion with empirical comparisons and advocate for the use of clinical trial simulation to ensure fair, metric-driven evaluation of both novel and legacy designs.</p><p><strong>Key Highlights:</strong></p><ul><li>Oncology combination therapy trial with Bayesian borrowing facing heightened regulatory caution versus single-arm historical controls.</li><li>Hierarchical versus pooled analysis in enrichment/basket trials, with focus on error definitions and subgroup effects that have always existed.</li><li>ICH E20 guidance potentially discourages use of enrichment by imposing new subgroup comparison burdens absent from standard trials.</li><li>Platform trial multiplicity rules contrasted with parallel single-arm trials; regulatory stance continues to evolve.</li><li>Ethical debate on adaptive allocation: adaptive randomization is challenged on ethical grounds, yet fixed allocation is deemed acceptable despite the same interim data.</li><li>Critical review of explicit utility weighting in the DAWN trial, despite alternative methods having the same issues.</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbmMtOVE4dzhSeF9ZaFNXRnVGZlhkMW5FcEJDUXxBQ3Jtc0trVVUtcFplX3lNWlFQMWhNaEs5Tk1NX2ZTWFpuYnFaTDRlN3dHSlhESFhQcGRYbGpPRHg2Y052V2pFSTNBZjNhTFF3UlBCaEQwZXJBNE9aUlVMd1V0UVJnZ1ZycVZLcnVSYjZ5MXkzcnFlQlM5WlN4cw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=_t7CkBYCkdY">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 13 Oct 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/a9790d87/7929b9c6.mp3" length="35757636" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2233</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele analyze how regulatory, editorial, and science community standards often impose additional, inconsistent requirements for novel methods in clinical trial design, rarely applied to standard approaches. Examples from oncology, enrichment trials, platform studies, and endpoint analysis illustrate how adaptive and Bayesian designs are frequently subject to higher scrutiny, shifting metrics, or distinct evidentiary demands. The episode covers technical and regulatory issues, such as the selective application of Type 1 error controls, evolving multiplicity guidance, and challenges in ethical reasoning with adaptive allocation. Scott and Kert frame the discussion with empirical comparisons and advocate for the use of clinical trial simulation to ensure fair, metric-driven evaluation of both novel and legacy designs.</p><p><strong>Key Highlights:</strong></p><ul><li>Oncology combination therapy trial with Bayesian borrowing facing heightened regulatory caution versus single-arm historical controls.</li><li>Hierarchical versus pooled analysis in enrichment/basket trials, with focus on error definitions and subgroup effects that have always existed.</li><li>ICH E20 guidance potentially discourages use of enrichment by imposing new subgroup comparison burdens absent from standard trials.</li><li>Platform trial multiplicity rules contrasted with parallel single-arm trials; regulatory stance continues to evolve.</li><li>Ethical debate on adaptive allocation: adaptive randomization is challenged on ethical grounds, yet fixed allocation is deemed acceptable despite the same interim data.</li><li>Critical review of explicit utility weighting in the DAWN trial, despite alternative methods having the same issues.</li></ul><p>For more, visit: <a href="https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbmMtOVE4dzhSeF9ZaFNXRnVGZlhkMW5FcEJDUXxBQ3Jtc0trVVUtcFplX3lNWlFQMWhNaEs5Tk1NX2ZTWFpuYnFaTDRlN3dHSlhESFhQcGRYbGpPRHg2Y052V2pFSTNBZjNhTFF3UlBCaEQwZXJBNE9aUlVMd1V0UVJnZ1ZycVZLcnVSYjZ5MXkzcnFlQlM5WlN4cw&amp;q=https%3A%2F%2Fwww.berryconsultants.com%2F&amp;v=_t7CkBYCkdY">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>clinical trial design, adaptive trials, regulatory scrutiny, Bayesian statistics, enrichment trials, platform trials, type one error, statistical innovation</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a9790d87/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a9790d87/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Not So Promising Zone Design</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>The Not So Promising Zone Design</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">08c12bd4-cd8d-4155-a5bb-d330c7c0eb26</guid>
      <link>https://share.transistor.fm/s/6f733578</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the mathematical foundations and efficiency claims of the promising zone design for adaptive sample size in clinical trials. Scott unpacks the conditional power thresholds that trigger sample size increases without the need to adjust alpha, as originally presented by Mehta &amp; Pocock. He systematically demonstrates, via simulation, that the promising zone rarely provides meaningful efficiency gains over fixed designs and is consistently outperformed by group sequential designs that allocate alpha across multiple analyses. Using a driving-route analogy, Scott highlights the practical flaw in making pivotal trial decisions earlier than necessary due to arbitrary statistical rules rather than observing current data. He underlines that, at Berry, simulation efforts have yet to reveal a scenario where the promising zone design is more efficient than a thoughtfully constructed group sequential or Goldilocks trial. The episode urges trialists to simulate, compare, and optimize—not to accept appealing mathematical tricks without rigorous evaluation.</p><p><strong>Key Highlights</strong></p><ul><li>Explanation of the promising zone’s conditional power mechanism and alpha control.</li><li>Simulation-based comparison of power and average sample size across design types.</li><li>Direct comparison of group sequential vs. promising zone designs.</li><li>Discussion of futility rules and their impact on design choice.</li><li>Commentary on Goldilocks designs for incomplete data.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the mathematical foundations and efficiency claims of the promising zone design for adaptive sample size in clinical trials. Scott unpacks the conditional power thresholds that trigger sample size increases without the need to adjust alpha, as originally presented by Mehta &amp; Pocock. He systematically demonstrates, via simulation, that the promising zone rarely provides meaningful efficiency gains over fixed designs and is consistently outperformed by group sequential designs that allocate alpha across multiple analyses. Using a driving-route analogy, Scott highlights the practical flaw in making pivotal trial decisions earlier than necessary due to arbitrary statistical rules rather than observing current data. He underlines that, at Berry, simulation efforts have yet to reveal a scenario where the promising zone design is more efficient than a thoughtfully constructed group sequential or Goldilocks trial. The episode urges trialists to simulate, compare, and optimize—not to accept appealing mathematical tricks without rigorous evaluation.</p><p><strong>Key Highlights</strong></p><ul><li>Explanation of the promising zone’s conditional power mechanism and alpha control.</li><li>Simulation-based comparison of power and average sample size across design types.</li><li>Direct comparison of group sequential vs. promising zone designs.</li><li>Discussion of futility rules and their impact on design choice.</li><li>Commentary on Goldilocks designs for incomplete data.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 29 Sep 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/6f733578/f707d3f9.mp3" length="38099462" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2379</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry examines the mathematical foundations and efficiency claims of the promising zone design for adaptive sample size in clinical trials. Scott unpacks the conditional power thresholds that trigger sample size increases without the need to adjust alpha, as originally presented by Mehta &amp; Pocock. He systematically demonstrates, via simulation, that the promising zone rarely provides meaningful efficiency gains over fixed designs and is consistently outperformed by group sequential designs that allocate alpha across multiple analyses. Using a driving-route analogy, Scott highlights the practical flaw in making pivotal trial decisions earlier than necessary due to arbitrary statistical rules rather than observing current data. He underlines that, at Berry, simulation efforts have yet to reveal a scenario where the promising zone design is more efficient than a thoughtfully constructed group sequential or Goldilocks trial. The episode urges trialists to simulate, compare, and optimize—not to accept appealing mathematical tricks without rigorous evaluation.</p><p><strong>Key Highlights</strong></p><ul><li>Explanation of the promising zone’s conditional power mechanism and alpha control.</li><li>Simulation-based comparison of power and average sample size across design types.</li><li>Direct comparison of group sequential vs. promising zone designs.</li><li>Discussion of futility rules and their impact on design choice.</li><li>Commentary on Goldilocks designs for incomplete data.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/6f733578/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6f733578/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Dr. Janet Wittes</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>A Visit with Dr. Janet Wittes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2c703320-35e2-4e91-8833-113afd5508d3</guid>
      <link>https://share.transistor.fm/s/cdbdb064</link>
      <description>
        <![CDATA[<p>Episode 30 of “In the Interim…” features Dr. Janet Wittes, Fellow of the American Statistical Association, past president of the Society of Clinical Trials, and founder of Statistics Collaborative, in discussion with Dr. Scott Berry. Dr. Wittes details her progression from Radcliffe biochemistry to Harvard statistics, shaped by targeted mentorship and her family’s insistence on advanced scientific training. She describes teaching at Hunter College, her NIH/NHLBI tenure overseeing extensive DSMB work, and the launch of Statistics Collaborative 32 years ago, building the business with her children and their peers. The episode explores her consulting on clinical trial design for orphan and neglected diseases—malaria, dengue, leishmania, ALS—and vaccine development, with technical commentary on adaptive trial methods, operational issues in low-resource contexts, and decision-making for small-sample trials. Dr. Wittes reflects on statistical leadership, ongoing DSMB involvement, and the importance of evidence-driven public health. She underscores the need for contextual and cultural awareness in trial design, illustrated by her Lilith magazine story on kosher certification and challenges in stakeholder understanding. Discussion covers career obstacles, the evolution of clinical science, vaccine advocacy, and the critical role of diversity and practical on-site knowledge in advancing statistical research.</p><p><strong>Key Highlights</strong></p><ul><li>Early academic transition from biochemistry to statistics.</li><li>Serendipitous transition from academic career at Hunter College to Branch Chief of biostatistics at NIH/NHLBI.</li><li>Founding Statistics Collaborative, business growth with children, and specialization in orphan disease trials.</li><li>Consulting expertise in adaptive design, small-sample challenges, tropical and vaccine studies.</li><li>Continued advocacy for vaccines, scientific rigor, and ethical public health practice.</li><li>Importance of representation and context in science, demonstrated by real-world consulting examples.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Episode 30 of “In the Interim…” features Dr. Janet Wittes, Fellow of the American Statistical Association, past president of the Society of Clinical Trials, and founder of Statistics Collaborative, in discussion with Dr. Scott Berry. Dr. Wittes details her progression from Radcliffe biochemistry to Harvard statistics, shaped by targeted mentorship and her family’s insistence on advanced scientific training. She describes teaching at Hunter College, her NIH/NHLBI tenure overseeing extensive DSMB work, and the launch of Statistics Collaborative 32 years ago, building the business with her children and their peers. The episode explores her consulting on clinical trial design for orphan and neglected diseases—malaria, dengue, leishmania, ALS—and vaccine development, with technical commentary on adaptive trial methods, operational issues in low-resource contexts, and decision-making for small-sample trials. Dr. Wittes reflects on statistical leadership, ongoing DSMB involvement, and the importance of evidence-driven public health. She underscores the need for contextual and cultural awareness in trial design, illustrated by her Lilith magazine story on kosher certification and challenges in stakeholder understanding. Discussion covers career obstacles, the evolution of clinical science, vaccine advocacy, and the critical role of diversity and practical on-site knowledge in advancing statistical research.</p><p><strong>Key Highlights</strong></p><ul><li>Early academic transition from biochemistry to statistics.</li><li>Serendipitous transition from academic career at Hunter College to Branch Chief of biostatistics at NIH/NHLBI.</li><li>Founding Statistics Collaborative, business growth with children, and specialization in orphan disease trials.</li><li>Consulting expertise in adaptive design, small-sample challenges, tropical and vaccine studies.</li><li>Continued advocacy for vaccines, scientific rigor, and ethical public health practice.</li><li>Importance of representation and context in science, demonstrated by real-world consulting examples.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 22 Sep 2025 06:17:35 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/cdbdb064/6488ba38.mp3" length="39231324" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2450</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Episode 30 of “In the Interim…” features Dr. Janet Wittes, Fellow of the American Statistical Association, past president of the Society of Clinical Trials, and founder of Statistics Collaborative, in discussion with Dr. Scott Berry. Dr. Wittes details her progression from Radcliffe biochemistry to Harvard statistics, shaped by targeted mentorship and her family’s insistence on advanced scientific training. She describes teaching at Hunter College, her NIH/NHLBI tenure overseeing extensive DSMB work, and the launch of Statistics Collaborative 32 years ago, building the business with her children and their peers. The episode explores her consulting on clinical trial design for orphan and neglected diseases—malaria, dengue, leishmania, ALS—and vaccine development, with technical commentary on adaptive trial methods, operational issues in low-resource contexts, and decision-making for small-sample trials. Dr. Wittes reflects on statistical leadership, ongoing DSMB involvement, and the importance of evidence-driven public health. She underscores the need for contextual and cultural awareness in trial design, illustrated by her Lilith magazine story on kosher certification and challenges in stakeholder understanding. Discussion covers career obstacles, the evolution of clinical science, vaccine advocacy, and the critical role of diversity and practical on-site knowledge in advancing statistical research.</p><p><strong>Key Highlights</strong></p><ul><li>Early academic transition from biochemistry to statistics.</li><li>Serendipitous transition from academic career at Hunter College to Branch Chief of biostatistics at NIH/NHLBI.</li><li>Founding Statistics Collaborative, business growth with children, and specialization in orphan disease trials.</li><li>Consulting expertise in adaptive design, small-sample challenges, tropical and vaccine studies.</li><li>Continued advocacy for vaccines, scientific rigor, and ethical public health practice.</li><li>Importance of representation and context in science, demonstrated by real-world consulting examples.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>clinical trials, biostatistics, Janet Wittes, adaptive designs, orphan diseases, diversity in science, public health, medical research, statistics consulting, vaccine advocacy, women in STEM, Scott Berry</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cdbdb064/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cdbdb064/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Bayesian Clinical Trials with Frank Harrell</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>Bayesian Clinical Trials with Frank Harrell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6246084e-cdae-461b-8f6b-6f078d329651</guid>
      <link>https://share.transistor.fm/s/b146894f</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry chats with Frank Harrell, a professor of Biostatistics at Vanderbilt University and W.J. Dixon Award winner. Harrell describes his transition from frequentist to Bayesian clinical trial design, prompted by a decisive meeting with Dr. Don Berry and informed by David Spiegelhalter’s published work. The dialogue addresses persistent academic opposition to Bayesian methods, operational constraints in trial implementation, regulatory work at FDA, and technical Bayesian modeling details.</p><p><strong>Key Highlights</strong></p><ul><li>Harrell credits Don Berry’s direct influence for converting him to Bayesian methods during his early career at Duke, despite entrenched academic resistance.</li><li>Discusses early cardiovascular research at Duke, experiences with large multicenter trials, and later founding Vanderbilt’s Biostatistics department.</li><li>Details the compromise of using Bayesian interim monitoring and frequentist primary analyses under NIH and regulatory mandates.</li><li>Outlines design and publication of the ORBITA cardiovascular trial (Imperial College London), using all-Bayesian longitudinal ordinal methodology—Lancet reviewers required all analyses remain Bayesian, rejecting the inclusion of a mix of frequentist and Bayesian analyses.</li><li>Critiques simulation of Type 1 error within Bayesian trial designs.</li><li>Addresses deficiencies in eliciting utilities for clinical endpoints, underscoring operational challenges in longitudinal ordinal modeling and ethical imperatives for efficient early stopping.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry chats with Frank Harrell, a professor of Biostatistics at Vanderbilt University and W.J. Dixon Award winner. Harrell describes his transition from frequentist to Bayesian clinical trial design, prompted by a decisive meeting with Dr. Don Berry and informed by David Spiegelhalter’s published work. The dialogue addresses persistent academic opposition to Bayesian methods, operational constraints in trial implementation, regulatory work at FDA, and technical Bayesian modeling details.</p><p><strong>Key Highlights</strong></p><ul><li>Harrell credits Don Berry’s direct influence for converting him to Bayesian methods during his early career at Duke, despite entrenched academic resistance.</li><li>Discusses early cardiovascular research at Duke, experiences with large multicenter trials, and later founding Vanderbilt’s Biostatistics department.</li><li>Details the compromise of using Bayesian interim monitoring and frequentist primary analyses under NIH and regulatory mandates.</li><li>Outlines design and publication of the ORBITA cardiovascular trial (Imperial College London), using all-Bayesian longitudinal ordinal methodology—Lancet reviewers required that all analyses remain Bayesian, rejecting the inclusion of a mix of frequentist and Bayesian analyses.</li><li>Critiques simulation of Type 1 error within Bayesian trial designs.</li><li>Addresses deficiencies in eliciting utilities for clinical endpoints, underscoring operational challenges in longitudinal ordinal modeling and ethical imperatives for efficient early stopping.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 15 Sep 2025 06:14:10 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/b146894f/8ffb3498.mp3" length="45469765" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2840</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry chats with Frank Harrell, a professor of Biostatistics at Vanderbilt University and W.J. Dixon Award winner. Harrell describes his transition from frequentist to Bayesian clinical trial design, prompted by a decisive meeting with Dr. Don Berry and informed by David Spiegelhalter’s published work. The dialogue addresses persistent academic opposition to Bayesian methods, operational constraints in trial implementation, regulatory work at FDA, and technical Bayesian modeling details.</p><p><strong>Key Highlights</strong></p><ul><li>Harrell credits Don Berry’s direct influence for converting him to Bayesian methods during his early career at Duke, despite entrenched academic resistance.</li><li>Discusses early cardiovascular research at Duke, experiences with large multicenter trials, and later founding Vanderbilt’s Biostatistics department.</li><li>Details the compromise of using Bayesian interim monitoring and frequentist primary analyses under NIH and regulatory mandates.</li><li>Outlines design and publication of the ORBITA cardiovascular trial (Imperial College London), using all-Bayesian longitudinal ordinal methodology—Lancet reviewers required that all analyses remain Bayesian, rejecting the inclusion of a mix of frequentist and Bayesian analyses.</li><li>Critiques simulation of Type 1 error within Bayesian trial designs.</li><li>Addresses deficiencies in eliciting utilities for clinical endpoints, underscoring operational challenges in longitudinal ordinal modeling and ethical imperatives for efficient early stopping.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Bayesian statistics, clinical trials, biostatistics, FDA, Frank Harrell, Scott Berry, platform trials, longitudinal modeling, adaptive trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b146894f/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b146894f/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Visit with Dr. Derek Angus</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>A Visit with Dr. Derek Angus</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7dd56637-427f-4def-bdd4-aa55dc5f2b0e</guid>
      <link>https://share.transistor.fm/s/9bc65ae9</link>
      <description>
        <![CDATA[<p>In this episode of “In the Interim…”, Dr. Scott Berry interviews Dr. Derek Angus, Distinguished Professor and Chair of Critical Care Medicine at the University of Pittsburgh and Senior Editor at JAMA. The discussion addresses the decades-long controversy surrounding steroid use in community-acquired pneumonia (CAP) and sepsis. The episode delivers a chronological assessment of the evidence base—summarizing trial results from pivotal studies, including CAPE COD, REMAP-CAP, ADRENAL, and multiple French trials led by Dr. Djillali Annane. Dr. Angus analyzes why discrepancies persist in outcomes, clinical recommendations, and international guidelines, and underscores the challenge of heterogeneous treatment effects. The episode closes with an argument for adaptive trial designs, Bayesian inference, and embedded randomization within learning health systems as critical tools for clarifying complex response patterns and improving patient care.</p><p><strong>Key Highlights</strong></p><ul><li>Historical evolution of clinical trials studying steroid regimens for CAP/sepsis.</li><li>Review of CAPE COD, REMAP-CAP, ADRENAL, and Annane-led French trials showing conflicting signals.</li><li>Discussion of persistent heterogeneity in trial populations, interventions, and endpoints.</li><li>Identification of methodological limitations—control contamination, endpoint definitions, varying inclusion criteria.</li><li>Exploration of Bayesian and adaptive trial design, and operationalization of learning health systems to resolve evidence gaps.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of “In the Interim…”, Dr. Scott Berry interviews Dr. Derek Angus, Distinguished Professor and Chair of Critical Care Medicine at the University of Pittsburgh and Senior Editor at JAMA. The discussion addresses the decades-long controversy surrounding steroid use in community-acquired pneumonia (CAP) and sepsis. The episode delivers a chronological assessment of the evidence base—summarizing trial results from pivotal studies, including CAPE COD, REMAP-CAP, ADRENAL, and multiple French trials led by Dr. Djillali Annane. Dr. Angus analyzes why discrepancies persist in outcomes, clinical recommendations, and international guidelines, and underscores the challenge of heterogeneous treatment effects. The episode closes with an argument for adaptive trial designs, Bayesian inference, and embedded randomization within learning health systems as critical tools for clarifying complex response patterns and improving patient care.</p><p><strong>Key Highlights</strong></p><ul><li>Historical evolution of clinical trials studying steroid regimens for CAP/sepsis.</li><li>Review of CAPE COD, REMAP-CAP, ADRENAL, and Annane-led French trials showing conflicting signals.</li><li>Discussion of persistent heterogeneity in trial populations, interventions, and endpoints.</li><li>Identification of methodological limitations—control contamination, endpoint definitions, varying inclusion criteria.</li><li>Exploration of Bayesian and adaptive trial design, and operationalization of learning health systems to resolve evidence gaps.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </content:encoded>
      <pubDate>Mon, 08 Sep 2025 06:03:19 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/9bc65ae9/171746f7.mp3" length="40329688" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2518</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of “In the Interim…”, Dr. Scott Berry interviews Dr. Derek Angus, Distinguished Professor and Chair of Critical Care Medicine at the University of Pittsburgh and Senior Editor at JAMA. The discussion addresses the decades-long controversy surrounding steroid use in community-acquired pneumonia (CAP) and sepsis. The episode delivers a chronological assessment of the evidence base—summarizing trial results from pivotal studies, including CAPE COD, REMAP-CAP, ADRENAL, and multiple French trials led by Dr. Djillali Annane. Dr. Angus analyzes why discrepancies persist in outcomes, clinical recommendations, and international guidelines, and underscores the challenge of heterogeneous treatment effects. The episode closes with an argument for adaptive trial designs, Bayesian inference, and embedded randomization within learning health systems as critical tools for clarifying complex response patterns and improving patient care.</p><p><strong>Key Highlights</strong></p><ul><li>Historical evolution of clinical trials studying steroid regimens for CAP/sepsis.</li><li>Review of CAPE COD, REMAP-CAP, ADRENAL, and Annane-led French trials showing conflicting signals.</li><li>Discussion of persistent heterogeneity in trial populations, interventions, and endpoints.</li><li>Identification of methodological limitations—control contamination, endpoint definitions, varying inclusion criteria.</li><li>Exploration of Bayesian and adaptive trial design, and operationalization of learning health systems to resolve evidence gaps.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </itunes:summary>
      <itunes:keywords>steroids, sepsis, pneumonia, critical care, clinical trials, COVID-19, REMAP-CAP, Bayesian, learning healthcare system</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9bc65ae9/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9bc65ae9/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Mystery of Clinical Trial Simulation</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>The Mystery of Clinical Trial Simulation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9892cc25-0d9e-4958-a15d-094ae9e992d1</guid>
      <link>https://share.transistor.fm/s/3c1345fd</link>
      <description>
        <![CDATA[<p>Dr. Scott Berry hosts this episode of "In the Interim…", opening with statistical analysis of elite athletes before focusing on the misunderstood role of clinical trial simulation. He distinguishes simulation as a predictive tool from its use as an in-silico process that enables trial design exploration, iteration, and optimization. Clinical trial simulation provides a mechanism for iterative comparison of multiple designs, driven by ongoing team feedback and evolving trial objectives. Scott stresses that rigid simulation plans are “not productive,” since the most effective designs typically emerge when stakeholders view real trial examples and suggest new design options in real time. The ICECAP trial serves as a key illustration, where the final design was shaped by simulation-informed team input across multiple iterations, from three tested durations to ten with response-adaptive randomization. Scott also discusses the creation of the FACTS software, highlighting its ability to test alternative designs rapidly, present side-by-side comparisons, and conduct counterfactual analyses—revealing what different trial configurations would have produced using the same simulated datasets.</p><p><strong>Key Highlights</strong></p><ul><li>Simulation contrasted as a predictive tool versus an engine for iterative design evaluation.</li><li>Emphasizes design process as team-driven and iterative, not prescriptive.</li><li>Use of concrete example trials enhances communication across multidisciplinary teams.</li><li>FACTS software enables design flexibility, in silico iteration, and comparative scenario analysis.</li><li>ICECAP trial as an instance of simulation-informed design adaptation.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Scott Berry hosts this episode of "In the Interim…", opening with statistical analysis of elite athletes before focusing on the misunderstood role of clinical trial simulation. He distinguishes simulation as a predictive tool from its use as an in-silico process that enables trial design exploration, iteration, and optimization. Clinical trial simulation provides a mechanism for iterative comparison of multiple designs, driven by ongoing team feedback and evolving trial objectives. Scott stresses that rigid simulation plans are “not productive,” since the most effective designs typically emerge when stakeholders view real trial examples and suggest new design options in real time. The ICECAP trial serves as a key illustration, where the final design was shaped by simulation-informed team input across multiple iterations, from three tested durations to ten with response-adaptive randomization. Scott also discusses the creation of the FACTS software, highlighting its ability to test alternative designs rapidly, present side-by-side comparisons, and conduct counterfactual analyses—revealing what different trial configurations would have produced using the same simulated datasets.</p><p><strong>Key Highlights</strong></p><ul><li>Simulation contrasted as a predictive tool versus an engine for iterative design evaluation.</li><li>Emphasizes design process as team-driven and iterative, not prescriptive.</li><li>Use of concrete example trials enhances communication across multidisciplinary teams.</li><li>FACTS software enables design flexibility, in silico iteration, and comparative scenario analysis.</li><li>ICECAP trial as an instance of simulation-informed design adaptation.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </content:encoded>
      <pubDate>Mon, 01 Sep 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/3c1345fd/5b370de6.mp3" length="39997005" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2498</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Dr. Scott Berry hosts this episode of "In the Interim…", opening with statistical analysis of elite athletes before focusing on the misunderstood role of clinical trial simulation. He distinguishes simulation as a predictive tool from its use as an in-silico process that enables trial design exploration, iteration, and optimization. Clinical trial simulation provides a mechanism for iterative comparison of multiple designs, driven by ongoing team feedback and evolving trial objectives. Scott stresses that rigid simulation plans are “not productive,” since the most effective designs typically emerge when stakeholders view real trial examples and suggest new design options in real time. The ICECAP trial serves as a key illustration, where the final design was shaped by simulation-informed team input across multiple iterations, from three tested durations to ten with response-adaptive randomization. Scott also discusses the creation of the FACTS software, highlighting its ability to test alternative designs rapidly, present side-by-side comparisons, and conduct counterfactual analyses—revealing what different trial configurations would have produced using the same simulated datasets.</p><p><strong>Key Highlights</strong></p><ul><li>Simulation contrasted as a predictive tool versus an engine for iterative design evaluation.</li><li>Emphasizes design process as team-driven and iterative, not prescriptive.</li><li>Use of concrete example trials enhances communication across multidisciplinary teams.</li><li>FACTS software enables design flexibility, in silico iteration, and comparative scenario analysis.</li><li>ICECAP trial as an instance of simulation-informed design adaptation.</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </itunes:summary>
      <itunes:keywords>clinical trial simulation, in-silico design, data-driven trials, FACTS software, adaptive clinical trials, trial design, statistical modeling, drug development</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3c1345fd/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3c1345fd/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Discussions on the ICH E20 Draft Guidance</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>Discussions on the ICH E20 Draft Guidance</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">75ed3baf-54d8-4cda-82a7-1e75b9da6af2</guid>
      <link>https://share.transistor.fm/s/f3255fad</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele review the ICH E20 draft guidance on adaptive clinical trial designs, offering a technical yet accessible breakdown for trial sponsors, practitioners, and those interested in clinical development. Drawing on their practical experience in creating and presenting adaptive trial designs to regulators, they discuss the document’s strengths, areas of consensus, and where cautionary or restrictive language appears. Listeners are guided through the evolving regulatory landscape, distinctions between Bayesian and frequentist approaches, and what new harmonization efforts mean for planning adaptive confirmatory trials. The episode presents hands-on examples, such as the Sepsis ACT seamless trial and the ROAR pan-tumor trial, illustrating technical points with real-world context. Key operational topics—blinding, operational bias, adaptive design reports, and clinical trial simulations—are addressed. The discussion includes practical advice on navigating regulatory dialogue, limitations of ICH E20 in early-phase or nontraditional designs, and the necessity of clear justification for adaptive (complex) trial features.</p><p><strong>Key Highlights</strong></p><ul><li>ICH E20 as a global regulatory framework for adaptive designs</li><li>Tone and caution in guidance may shape sponsor interpretation</li><li>Seamless, Bayesian, and enrichment designs in confirmatory trials</li><li>Operational guidance: reporting, simulation, interim analysis, and blinding requirements</li><li>Emphasis on justification and transparent communication with regulators</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele review the ICH E20 draft guidance on adaptive clinical trial designs, offering a technical yet accessible breakdown for trial sponsors, practitioners, and those interested in clinical development. Drawing on their practical experience in creating and presenting adaptive trial designs to regulators, they discuss the document’s strengths, areas of consensus, and where cautionary or restrictive language appears. Listeners are guided through the evolving regulatory landscape, distinctions between Bayesian and frequentist approaches, and what new harmonization efforts mean for planning adaptive confirmatory trials. The episode presents hands-on examples, such as the Sepsis ACT seamless trial and the ROAR pan-tumor trial, illustrating technical points with real-world context. Key operational topics—blinding, operational bias, adaptive design reports, and clinical trial simulations—are addressed. The discussion includes practical advice on navigating regulatory dialogue, limitations of ICH E20 in early-phase or nontraditional designs, and the necessity of clear justification for adaptive (complex) trial features.</p><p><strong>Key Highlights</strong></p><ul><li>ICH E20 as a global regulatory framework for adaptive designs</li><li>Tone and caution in guidance may shape sponsor interpretation</li><li>Seamless, Bayesian, and enrichment designs in confirmatory trials</li><li>Operational guidance: reporting, simulation, interim analysis, and blinding requirements</li><li>Emphasis on justification and transparent communication with regulators</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </content:encoded>
      <pubDate>Mon, 25 Aug 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/f3255fad/d50aeeeb.mp3" length="36787079" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2297</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele review the ICH E20 draft guidance on adaptive clinical trial designs, offering a technical yet accessible breakdown for trial sponsors, practitioners, and those interested in clinical development. Drawing on their practical experience in creating and presenting adaptive trial designs to regulators, they discuss the document’s strengths, areas of consensus, and where cautionary or restrictive language appears. Listeners are guided through the evolving regulatory landscape, distinctions between Bayesian and frequentist approaches, and what new harmonization efforts mean for planning adaptive confirmatory trials. The episode presents hands-on examples, such as the Sepsis ACT seamless trial and the ROAR pan-tumor trial, illustrating technical points with real-world context. Key operational topics—blinding, operational bias, adaptive design reports, and clinical trial simulations—are addressed. The discussion includes practical advice on navigating regulatory dialogue, limitations of ICH E20 in early-phase or nontraditional designs, and the necessity of clear justification for adaptive (complex) trial features.</p><p><strong>Key Highlights</strong></p><ul><li>ICH E20 as a global regulatory framework for adaptive designs</li><li>Tone and caution in guidance may shape sponsor interpretation</li><li>Seamless, Bayesian, and enrichment designs in confirmatory trials</li><li>Operational guidance: reporting, simulation, interim analysis, and blinding requirements</li><li>Emphasis on justification and transparent communication with regulators</li></ul><p>For more, visit: https://www.berryconsultants.com/</p>]]>
      </itunes:summary>
      <itunes:keywords>ICH E20, adaptive trial design, Bayesian methods, global harmonization, clinical trial guidance, regulatory affairs, platform trials, enrichment, seamless trials, type one error</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f3255fad/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/f3255fad/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>A Discussion with Michael Proschan on Response-Adaptive Randomization</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>A Discussion with Michael Proschan on Response-Adaptive Randomization</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">973659a5-cc71-4f20-b595-991f8d59e449</guid>
      <link>https://share.transistor.fm/s/de231ccc</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and NIH’s Dr. Michael Proschan conduct a detailed discussion from opposing viewpoints on response-adaptive randomization (RAR) in clinical trials. The discussion focuses on where they agree on the positives and negatives of RAR, and where they disagree on its scientific use.</p><p><strong>Key Highlights</strong></p><ul><li>Potential issues of using RAR: temporal trends, unblinding, and reduced statistical efficiency in 2-arm trials.</li><li>Potential benefits include improved statistical efficiency in multi-arm trials, depending on the goals (e.g., dose-finding trials).</li><li>Potential unblinding of results in non-blinded trials and the need for operational excellence.</li><li>Ethical and Bayesian perspectives are considered, but emphasis remains empirical.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and NIH’s Dr. Michael Proschan conduct a detailed discussion from opposing viewpoints on response-adaptive randomization (RAR) in clinical trials. The discussion focuses on where they agree on the positives and negatives of RAR, and where they disagree on its scientific use.</p><p><strong>Key Highlights</strong></p><ul><li>Potential issues of using RAR: temporal trends, unblinding, and reduced statistical efficiency in 2-arm trials.</li><li>Potential benefits include improved statistical efficiency in multi-arm trials, depending on the goals (e.g., dose-finding trials).</li><li>Potential unblinding of results in non-blinded trials and the need for operational excellence.</li><li>Ethical and Bayesian perspectives are considered, but emphasis remains empirical.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 18 Aug 2025 06:03:24 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/de231ccc/6bc21202.mp3" length="42994219" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2685</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry and NIH’s Dr. Michael Proschan conduct a detailed discussion from opposing viewpoints on response-adaptive randomization (RAR) in clinical trials. The discussion focuses on where they agree on the positives and negatives of RAR, and where they disagree on its scientific use.</p><p><strong>Key Highlights</strong></p><ul><li>Potential issues of using RAR: temporal trends, unblinding, and reduced statistical efficiency in 2-arm trials.</li><li>Potential benefits include improved statistical efficiency in multi-arm trials, depending on the goals (e.g., dose-finding trials).</li><li>Potential unblinding of results in non-blinded trials and the need for operational excellence.</li><li>Ethical and Bayesian perspectives are considered, but emphasis remains empirical.</li></ul><p>For more, visit: <a href="https://www.berryconsultants.com/">https://www.berryconsultants.com/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>response adaptive randomization, adaptive trials, clinical trial design, platform trials, statistical methods, NIH, Berry Consultants</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/de231ccc/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/de231ccc/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>STEP Statistical Modeling</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>STEP Statistical Modeling</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ae9ee63-2c8f-4e5b-be4a-3386f1ed13bb</guid>
      <link>https://share.transistor.fm/s/8da307b6</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry, Dr. Elizabeth Lorenzi, and Dr. Amy Crawford discuss the STEP platform trial’s statistical methodology for evaluating which acute stroke patients benefit from endovascular therapy (EVT) and which do not. The discussion critiques the inadequacy of traditional clinical trials powered for a single population to show benefit, as the goal of the trial is to identify who benefits, not whether the entire population has a net benefit. The team walks through the development and simulation of a Bayesian change point model, addressing heterogeneous treatment responses across the NIH Stroke Scale. The adaptive platform design leverages scheduled interim analyses to draw timely, data-driven conclusions about patient subgroups, improving trial efficiency and relevance. The episode also previews scaling to two-dimensional modeling, incorporating both stroke severity and time since last known well, and emphasizes ongoing clinical trial simulation and close integration between clinicians and statisticians throughout trial design and execution.</p><p><strong>Key Highlights</strong></p><ul><li>STEP platform master protocol and the NIH StrokeNet collaborative infrastructure</li><li>Clinical rationale for Bayesian change point modeling of the effect of EVT across the patient population</li><li>Shift from single to dual change point models to reflect regions of equivalence</li><li>Development of custom C code and MCMC samplers due to limits of standard tools</li><li>Interim analyses direct adaptive enrollment and define actionable conclusions</li><li>Future extensions to multidimensional change point modeling</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry, Dr. Elizabeth Lorenzi, and Dr. Amy Crawford discuss the STEP platform trial’s statistical methodology for evaluating which acute stroke patients benefit from endovascular therapy (EVT) and which do not. The discussion critiques the inadequacy of traditional clinical trials powered for a single population to show benefit, as the goal of the trial is to identify who benefits, not whether the entire population has a net benefit. The team walks through the development and simulation of a Bayesian change point model, addressing heterogeneous treatment responses across the NIH Stroke Scale. The adaptive platform design leverages scheduled interim analyses to draw timely, data-driven conclusions about patient subgroups, improving trial efficiency and relevance. The episode also previews scaling to two-dimensional modeling, incorporating both stroke severity and time since last known well, and emphasizes ongoing clinical trial simulation and close integration between clinicians and statisticians throughout trial design and execution.</p><p><strong>Key Highlights</strong></p><ul><li>STEP platform master protocol and the NIH StrokeNet collaborative infrastructure</li><li>Clinical rationale for Bayesian change point modeling of the effect of EVT across the patient population</li><li>Shift from single to dual change point models to reflect regions of equivalence</li><li>Development of custom C code and MCMC samplers due to limits of standard tools</li><li>Interim analyses direct adaptive enrollment and define actionable conclusions</li><li>Future extensions to multidimensional change point modeling</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 11 Aug 2025 06:00:00 -0500</pubDate>
      <author>Berry Consultants</author>
      <enclosure url="https://media.transistor.fm/8da307b6/d7957e0c.mp3" length="32410195" type="audio/mpeg"/>
      <itunes:author>Berry Consultants</itunes:author>
      <itunes:duration>2023</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", Dr. Scott Berry, Dr. Elizabeth Lorenzi, and Dr. Amy Crawford discuss the STEP platform trial’s statistical methodology for evaluating which acute stroke patients benefit from endovascular therapy (EVT) and which do not. The discussion critiques the inadequacy of traditional clinical trials powered for a single population to show benefit, as the goal of the trial is to identify who benefits, not whether the entire population has a net benefit. The team walks through the development and simulation of a Bayesian change point model, addressing heterogeneous treatment responses across the NIH Stroke Scale. The adaptive platform design leverages scheduled interim analyses to draw timely, data-driven conclusions about patient subgroups, improving trial efficiency and relevance. The episode also previews scaling to two-dimensional modeling, incorporating both stroke severity and time since last known well, and emphasizes ongoing clinical trial simulation and close integration between clinicians and statisticians throughout trial design and execution.</p><p><strong>Key Highlights</strong></p><ul><li>STEP platform master protocol and the NIH StrokeNet collaborative infrastructure</li><li>Clinical rationale for Bayesian change point modeling of the effect of EVT across the patient population</li><li>Shift from single to dual change point models to reflect regions of equivalence</li><li>Development of custom C code and MCMC samplers due to limits of standard tools</li><li>Interim analyses direct adaptive enrollment and define actionable conclusions</li><li>Future extensions to multidimensional change point modeling</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>adaptive trial design, Bayesian models, stroke trials, STEP platform, endovascular therapy, clinical research, platform trials, Berry Consultants</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8da307b6/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/8da307b6/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Bayesian Approach in Clinical Trials</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Bayesian Approach in Clinical Trials</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aa0a22fb-2dcd-4e48-95f0-30c0166f249a</guid>
      <link>https://share.transistor.fm/s/b89178af</link>
      <description>
        <![CDATA[<p>This episode of "In the Interim…" features Dr. Scott Berry, Dr. Kert Viele, and Dr. Melanie Quintana of Berry Consultants dissecting the technical and operational landscape of Bayesian statistics in clinical trial design. The episode explains what Bayesian statistics is, discusses the impact of informative and non-informative priors, and clarifies when and why Bayesian approaches surpass frequentist analyses—especially in adaptive, platform, and rare disease trial settings. The discussion directly challenges the misconception that Bayesian methods “lower the bar,” presenting evidence that they often require broader data synthesis and can raise evidentiary standards.</p><p>Key regulatory developments at FDA and EMA are reviewed, with attention to updated guidance and increased adoption. Case studies illustrate Bayesian methods in practice, including the prospectively combined phase 2 and 3 analysis for REBYOTA approval; hierarchical modeling in GNE myopathy; shared controls and endpoint integration in the HEALEY ALS Platform Trial; and robust subgroup borrowing in the ROAR basket trial. The team also addresses technical challenges such as multiplicity, subgroup analysis, complexity in endpoint modeling, and appropriate strategies for blending Bayesian and frequentist approaches for maximum regulatory and scientific clarity.</p><p><strong>Key Highlights</strong></p><ul><li>Clear explanation and real-world examples of Bayesian analysis in clinical trials.</li><li>Theoretical and practical distinctions from frequentist methods.</li><li>Practical breakdown of control sharing, endpoint integration, and subgroup borrowing.</li><li>Regulatory position and the increasing acceptance of Bayesian trial designs and analyses.</li><li>Case examples: REBYOTA, GNE myopathy, HEALEY ALS Platform Trial, ROAR basket trial.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode of "In the Interim…" features Dr. Scott Berry, Dr. Kert Viele, and Dr. Melanie Quintana of Berry Consultants dissecting the technical and operational landscape of Bayesian statistics in clinical trial design. The episode explains what Bayesian statistics is, discusses the impact of informative and non-informative priors, and clarifies when and why Bayesian approaches surpass frequentist analyses—especially in adaptive, platform, and rare disease trial settings. The discussion directly challenges the misconception that Bayesian methods “lower the bar,” presenting evidence that they often require broader data synthesis and can raise evidentiary standards.</p><p>Key regulatory developments at FDA and EMA are reviewed, with attention to updated guidance and increased adoption. Case studies illustrate Bayesian methods in practice, including the prospectively combined phase 2 and 3 analysis for REBYOTA approval; hierarchical modeling in GNE myopathy; shared controls and endpoint integration in the HEALEY ALS Platform Trial; and robust subgroup borrowing in the ROAR basket trial. The team also addresses technical challenges such as multiplicity, subgroup analysis, complexity in endpoint modeling, and appropriate strategies for blending Bayesian and frequentist approaches for maximum regulatory and scientific clarity.</p><p><strong>Key Highlights</strong></p><ul><li>Clear explanation and real-world examples of Bayesian analysis in clinical trials.</li><li>Theoretical and practical distinctions from frequentist methods.</li><li>Practical breakdown of control sharing, endpoint integration, and subgroup borrowing.</li><li>Regulatory position and the increasing acceptance of Bayesian trial designs and analyses.</li><li>Case examples: REBYOTA, GNE myopathy, HEALEY ALS Platform Trial, ROAR basket trial.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 04 Aug 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/b89178af/1ea93d44.mp3" length="42009474" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2623</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode of "In the Interim…" features Dr. Scott Berry, Dr. Kert Viele, and Dr. Melanie Quintana of Berry Consultants dissecting the technical and operational landscape of Bayesian statistics in clinical trial design. The episode explains what Bayesian statistics is, discusses the impact of informative and non-informative priors, and clarifies when and why Bayesian approaches surpass frequentist analyses—especially in adaptive, platform, and rare disease trial settings. The discussion directly challenges the misconception that Bayesian methods “lower the bar,” presenting evidence that they often require broader data synthesis and can raise evidentiary standards.</p><p>Key regulatory developments at FDA and EMA are reviewed, with attention to updated guidance and increased adoption. Case studies illustrate Bayesian methods in practice, including the prospectively combined phase 2 and 3 analysis for REBYOTA approval; hierarchical modeling in GNE myopathy; shared controls and endpoint integration in the HEALEY ALS Platform Trial; and robust subgroup borrowing in the ROAR basket trial. The team also addresses technical challenges such as multiplicity, subgroup analysis, complexity in endpoint modeling, and appropriate strategies for blending Bayesian and frequentist approaches for maximum regulatory and scientific clarity.</p><p><strong>Key Highlights</strong></p><ul><li>Clear explanation and real-world examples of Bayesian analysis in clinical trials.</li><li>Theoretical and practical distinctions from frequentist methods.</li><li>Practical breakdown of control sharing, endpoint integration, and subgroup borrowing.</li><li>Regulatory position and the increasing acceptance of Bayesian trial designs and analyses.</li><li>Case examples: REBYOTA, GNE myopathy, HEALEY ALS Platform Trial, ROAR basket trial.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Bayesian clinical trials, adaptive trials, platform trials, regulatory approval, rare disease research, Bayesian statistics, Berry Consultants, Bayesian methods, clinical development</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b89178af/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b89178af/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Time Machine</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>The Time Machine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9574671f-8b92-491c-9719-6e9668e2e4f7</guid>
      <link>https://share.transistor.fm/s/3539fb7a</link>
      <description>
        <![CDATA[<p>Dr. Scott Berry and Dr. Kert Viele discuss the origins and implementation of the “time machine” modeling approach, beginning with sports analytics and progressing to adaptive platform clinical trials. The episode focuses on how techniques for comparing athletes across eras translate into methodology for platform trials.</p><p><strong>Key Highlights</strong></p><ul><li>Sports analytics as foundation: Early work modeling athlete comparisons across eras using bridging methodologies.</li><li>Platform trial application: The time machine model in I-SPY 2 enabled efficient control allocation through overlapping arms over extended trial periods.</li><li>Core modeling principles: Additive treatment effect assumptions and the necessity of sufficient temporal overlap for reliable era comparisons.</li><li>Statistical implementation: Approaches include categorical era adjustment and Bayesian smoothing splines for modeling change over time.</li><li>Limitations and disease specificity: In conditions with rapid clinical or epidemiologic change, such as COVID-19, non-concurrent controls are avoided due to high risk of era-by-treatment interaction.</li><li>Regulatory and methodological distinction: The model leverages within-trial overlapping data collected under a unified protocol, contrasting sharply with external or historical controls.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Scott Berry and Dr. Kert Viele discuss the origins and implementation of the “time machine” modeling approach, beginning with sports analytics and progressing to adaptive platform clinical trials. The episode focuses on how techniques for comparing athletes across eras translate into methodology for platform trials.</p><p><strong>Key Highlights</strong></p><ul><li>Sports analytics as foundation: Early work modeling athlete comparisons across eras using bridging methodologies.</li><li>Platform trial application: The time machine model in I-SPY 2 enabled efficient control allocation through overlapping arms over extended trial periods.</li><li>Core modeling principles: Additive treatment effect assumptions and the necessity of sufficient temporal overlap for reliable era comparisons.</li><li>Statistical implementation: Approaches include categorical era adjustment and Bayesian smoothing splines for modeling change over time.</li><li>Limitations and disease specificity: In conditions with rapid clinical or epidemiologic change, such as COVID-19, non-concurrent controls are avoided due to high risk of era-by-treatment interaction.</li><li>Regulatory and methodological distinction: The model leverages within-trial overlapping data collected under a unified protocol, contrasting sharply with external or historical controls.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 28 Jul 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/3539fb7a/352a3b6d.mp3" length="37603747" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2348</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Dr. Scott Berry and Dr. Kert Viele discuss the origins and implementation of the “time machine” modeling approach, beginning with sports analytics and progressing to adaptive platform clinical trials. The episode focuses on how techniques for comparing athletes across eras translate into methodology for platform trials.</p><p><strong>Key Highlights</strong></p><ul><li>Sports analytics as foundation: Early work modeling athlete comparisons across eras using bridging methodologies.</li><li>Platform trial application: The time machine model in I-SPY 2 enabled efficient control allocation through overlapping arms over extended trial periods.</li><li>Core modeling principles: Additive treatment effect assumptions and the necessity of sufficient temporal overlap for reliable era comparisons.</li><li>Statistical implementation: Approaches include categorical era adjustment and Bayesian smoothing splines for modeling change over time.</li><li>Limitations and disease specificity: In conditions with rapid clinical or epidemiologic change, such as COVID-19, non-concurrent controls are avoided due to high risk of era-by-treatment interaction.</li><li>Regulatory and methodological distinction: The model leverages within-trial overlapping data collected under a unified protocol, contrasting sharply with external or historical controls.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical time machine, platform trials, sports analytics, adaptive clinical trials, historical controls, Bayesian models, overlapping arms</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3539fb7a/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3539fb7a/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Legend of I-SPY 2 - Part B</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>The Legend of I-SPY 2 - Part B</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fd32da37-23d5-4bf7-a5a5-3eb308cb7769</guid>
      <link>https://share.transistor.fm/s/8837ad86</link>
      <description>
        <![CDATA[<p>In this episode, Dr. Don Berry and Dr. Scott Berry provide an in-depth account of I-SPY 2, focusing on the trial’s use of the “time machine” methodology—a Bayesian solution allowing bridging across arms to inform ongoing analyses. The discussion details how predictive probabilities and adaptive randomization shaped pivotal decisions, including the handling of Pertuzumab’s approval and Neratinib’s subtype-specific performance. This episode also documents the technical and operational contributions of Laura Esserman, Anna Barker, Janet Woodcock, Meredith Buxton, and Ashish Sanil, clarifying the roles that enabled the platform’s success and broader impact on subsequent adaptive trials.</p><p><strong>Key Highlights</strong></p><ul><li>Introduction of the “time machine” concept, enabling valid comparison between experimental and control arms even when enrollment periods differ—a pragmatic solution originally utilized in sports examples for evolving platform trials as treatments and control arms change.</li><li>Ongoing trial conduct driven by a Bayesian adaptive algorithm, developed and maintained by Berry Consultants statisticians, which computes predictive probabilities to guide arm graduation, futility, and real-time adjustment of randomization probabilities.</li><li>Neratinib serves as a case study in subtype-specific adaptive randomization: the platform set randomization probability to zero in subtypes without signal, while effective subtypes increased randomization and advanced to graduation.</li><li>I-SPY 2’s methodologies shaped subsequent adaptive platform trials (GBM AGILE, Precision Promise, COVID-19 ACTIV networks), with regulatory acceptance reflected in FDA guidance and Janet Woodcock’s public recognition of adaptive randomization as “adequate and well controlled” for registration studies.</li><li>Specific recognition: Laura Esserman (trial leadership), Anna Barker (funding and strategic input), Janet Woodcock (FDA guidance and adaptive methods support), Meredith Buxton (logistics; GCAR leadership), and Ashish Sanil (Berry Consultants; ongoing algorithm implementation).</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Dr. Don Berry and Dr. Scott Berry provide an in-depth account of I-SPY 2, focusing on the trial’s use of the “time machine” methodology—a Bayesian solution allowing bridging across arms to inform ongoing analyses. The discussion details how predictive probabilities and adaptive randomization shaped pivotal decisions, including the handling of Pertuzumab’s approval and Neratinib’s subtype-specific performance. This episode also documents the technical and operational contributions of Laura Esserman, Anna Barker, Janet Woodcock, Meredith Buxton, and Ashish Sanil, clarifying the roles that enabled the platform’s success and broader impact on subsequent adaptive trials.</p><p><strong>Key Highlights</strong></p><ul><li>Introduction of the “time machine” concept, enabling valid comparison between experimental and control arms even when enrollment periods differ—a pragmatic solution originally utilized in sports examples for evolving platform trials as treatments and control arms change.</li><li>Ongoing trial conduct driven by a Bayesian adaptive algorithm, developed and maintained by Berry Consultants statisticians, which computes predictive probabilities to guide arm graduation, futility, and real-time adjustment of randomization probabilities.</li><li>Neratinib serves as a case study in subtype-specific adaptive randomization: the platform set randomization probability to zero in subtypes without signal, while effective subtypes increased randomization and advanced to graduation.</li><li>I-SPY 2’s methodologies shaped subsequent adaptive platform trials (GBM AGILE, Precision Promise, COVID-19 ACTIV networks), with regulatory acceptance reflected in FDA guidance and Janet Woodcock’s public recognition of adaptive randomization as “adequate and well controlled” for registration studies.</li><li>Specific recognition: Laura Esserman (trial leadership), Anna Barker (funding and strategic input), Janet Woodcock (FDA guidance and adaptive methods support), Meredith Buxton (logistics; GCAR leadership), and Ashish Sanil (Berry Consultants; ongoing algorithm implementation).</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 21 Jul 2025 05:50:44 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/8837ad86/1877d791.mp3" length="24753606" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1545</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Dr. Don Berry and Dr. Scott Berry provide an in-depth account of I-SPY 2, focusing on the trial’s use of the “time machine” methodology—a Bayesian solution allowing bridging across arms to inform ongoing analyses. The discussion details how predictive probabilities and adaptive randomization shaped pivotal decisions, including the handling of Pertuzumab’s approval and Neratinib’s subtype-specific performance. This episode also documents the technical and operational contributions of Laura Esserman, Anna Barker, Janet Woodcock, Meredith Buxton, and Ashish Sanil, clarifying the roles that enabled the platform’s success and broader impact on subsequent adaptive trials.</p><p><strong>Key Highlights</strong></p><ul><li>Introduction of the “time machine” concept, enabling valid comparison between experimental and control arms even when enrollment periods differ—a pragmatic solution originally utilized in sports examples for evolving platform trials as treatments and control arms change.</li><li>Ongoing trial conduct driven by a Bayesian adaptive algorithm, developed and maintained by Berry Consultants statisticians, which computes predictive probabilities to guide arm graduation, futility, and real-time adjustment of randomization probabilities.</li><li>Neratinib serves as a case study in subtype-specific adaptive randomization: the platform set randomization probability to zero in subtypes without signal, while effective subtypes increased randomization and advanced to graduation.</li><li>I-SPY 2’s methodologies shaped subsequent adaptive platform trials (GBM AGILE, Precision Promise, COVID-19 ACTIV networks), with regulatory acceptance reflected in FDA guidance and Janet Woodcock’s public recognition of adaptive randomization as “adequate and well controlled” for registration studies.</li><li>Specific recognition: Laura Esserman (trial leadership), Anna Barker (funding and strategic input), Janet Woodcock (FDA guidance and adaptive methods support), Meredith Buxton (logistics; GCAR leadership), and Ashish Sanil (Berry Consultants; ongoing algorithm implementation).</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>I-SPY 2, adaptive trials, platform trial, predictive probability, Bayesian statistics, clinical research innovation, drug development, Don Berry, Scott Berry</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8837ad86/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/8837ad86/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Legend of I-SPY 2 - Part A</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>The Legend of I-SPY 2 - Part A</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bec07061-cf6b-4c7d-94c4-fcce46220798</guid>
      <link>https://share.transistor.fm/s/0b0304b0</link>
      <description>
        <![CDATA[<p>In Episode 20 of Berry’s "In the Interim..." Podcast, The Legend of I-SPY 2 - Part A, Dr. Don Berry and Dr. Scott Berry discuss the origins and design of the I-SPY trials. Their conversation explains the inefficiency of traditional adjuvant breast cancer trials and details the shift to the neoadjuvant approach, where tumor response can be observed prior to surgery. </p><p>I-SPY 1 served as a proof-of-concept using MRI for probabilistic prediction of pathologic complete response (pCR). I-SPY 2 represents a major advancement in clinical trial science, introducing a multi-arm bandit methodology, integration of biomarker-driven subtypes and signatures, and a structured funding model that transitioned from philanthropy to “pay to play” industry support.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In Episode 20 of Berry’s "In the Interim..." Podcast, The Legend of I-SPY 2 - Part A, Dr. Don Berry and Dr. Scott Berry discuss the origins and design of the I-SPY trials. Their conversation explains the inefficiency of traditional adjuvant breast cancer trials and details the shift to the neoadjuvant approach, where tumor response can be observed prior to surgery. </p><p>I-SPY 1 served as a proof-of-concept using MRI for probabilistic prediction of pathologic complete response (pCR). I-SPY 2 represents a major advancement in clinical trial science, introducing a multi-arm bandit methodology, integration of biomarker-driven subtypes and signatures, and a structured funding model that transitioned from philanthropy to “pay to play” industry support.</p>]]>
      </content:encoded>
      <pubDate>Mon, 14 Jul 2025 05:50:55 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/0b0304b0/4de635ea.mp3" length="38581384" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2409</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In Episode 20 of Berry’s "In the Interim..." Podcast, The Legend of I-SPY 2 - Part A, Dr. Don Berry and Dr. Scott Berry discuss the origins and design of the I-SPY trials. Their conversation explains the inefficiency of traditional adjuvant breast cancer trials and details the shift to the neoadjuvant approach, where tumor response can be observed prior to surgery. </p><p>I-SPY 1 served as a proof-of-concept using MRI for probabilistic prediction of pathologic complete response (pCR). I-SPY 2 represents a major advancement in clinical trial science, introducing a multi-arm bandit methodology, integration of biomarker-driven subtypes and signatures, and a structured funding model that transitioned from philanthropy to “pay to play” industry support.</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0b0304b0/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/0b0304b0/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The STEP Platform with Dr. Eva Mistry and Dr. Jordan Elm</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>The STEP Platform with Dr. Eva Mistry and Dr. Jordan Elm</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d9fd8bc7-b4f1-4e21-bf20-d76cd3d83db8</guid>
      <link>https://share.transistor.fm/s/e332da08</link>
      <description>
        <![CDATA[<p>This episode of "In the Interim..." features an in-depth discussion of the StrokeNet Thrombectomy Endovascular Platform (STEP), a multi-domain, multi-factorial, adaptive platform trial for acute stroke, anchored in the NIH StrokeNet network. Guests Dr. Eva Mistry (University of Cincinnati) and Dr. Jordan Elm (Medical University of South Carolina) join us to explain how STEP enables simultaneous investigation of multiple treatment strategies in patients with acute ischemic stroke. The conversation details the use of a master protocol, the integration of industry partners through the NIH Other Transaction Authority (OTA) mechanism, and innovative statistical designs to efficiently identify improved treatment strategies.</p><p><strong>Key Highlights</strong>:</p><ul><li>STEP utilizes a master protocol within NIH StrokeNet, unifying eligibility, procedures, and data collection across all study domains.</li><li>The platform supports multiple research questions.</li><li>In an initial domain, STEP applies a statistical change-point model to empirically estimate the thresholds where EVT is effective, neutral, or potentially deleterious based on medium vessel occlusions and baseline clinical status.</li><li>Protocols may be adapted in response to new external data, including pausing and revising enrollment in specific subpopulations when emerging science warrants.</li><li>Shared control groups are used wherever applicable, improving trial efficiency by reducing the number of patients allocated to control arms and allowing eligible patients to contribute to multiple domains when protocol and scientific rationale permit.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode of "In the Interim..." features an in-depth discussion of the StrokeNet Thrombectomy Endovascular Platform (STEP), a multi-domain, multi-factorial, adaptive platform trial for acute stroke, anchored in the NIH StrokeNet network. Guests Dr. Eva Mistry (University of Cincinnati) and Dr. Jordan Elm (Medical University of South Carolina) join us to explain how STEP enables simultaneous investigation of multiple treatment strategies in patients with acute ischemic stroke. The conversation details the use of a master protocol, the integration of industry partners through the NIH Other Transaction Authority (OTA) mechanism, and innovative statistical designs to efficiently identify improved treatment strategies.</p><p><strong>Key Highlights</strong>:</p><ul><li>STEP utilizes a master protocol within NIH StrokeNet, unifying eligibility, procedures, and data collection across all study domains.</li><li>The platform supports multiple research questions.</li><li>In an initial domain, STEP applies a statistical change-point model to empirically estimate the thresholds where EVT is effective, neutral, or potentially deleterious based on medium vessel occlusions and baseline clinical status.</li><li>Protocols may be adapted in response to new external data, including pausing and revising enrollment in specific subpopulations when emerging science warrants.</li><li>Shared control groups are used wherever applicable, improving trial efficiency by reducing the number of patients allocated to control arms and allowing eligible patients to contribute to multiple domains when protocol and scientific rationale permit.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 07 Jul 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/e332da08/c4e5fd03.mp3" length="39053687" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2439</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode of "In the Interim..." features an in-depth discussion of the StrokeNet Thrombectomy Endovascular Platform (STEP), a multi-domain, multi-factorial, adaptive platform trial for acute stroke, anchored in the NIH StrokeNet network. Guests Dr. Eva Mistry (University of Cincinnati) and Dr. Jordan Elm (Medical University of South Carolina) join us to explain how STEP enables simultaneous investigation of multiple treatment strategies in patients with acute ischemic stroke. The conversation details the use of a master protocol, the integration of industry partners through the NIH Other Transaction Authority (OTA) mechanism, and innovative statistical designs to efficiently identify improved treatment strategies.</p><p><strong>Key Highlights</strong>:</p><ul><li>STEP utilizes a master protocol within NIH StrokeNet, unifying eligibility, procedures, and data collection across all study domains.</li><li>The platform supports multiple research questions.</li><li>In an initial domain, STEP applies a statistical change-point model to empirically estimate the thresholds where EVT is effective, neutral, or potentially deleterious based on medium vessel occlusions and baseline clinical status.</li><li>Protocols may be adapted in response to new external data, including pausing and revising enrollment in specific subpopulations when emerging science warrants.</li><li>Shared control groups are used wherever applicable, improving trial efficiency by reducing the number of patients allocated to control arms and allowing eligible patients to contribute to multiple domains when protocol and scientific rationale permit.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e332da08/transcript.json" type="application/json"/>
      <podcast:transcript url="https://share.transistor.fm/s/e332da08/transcript.vtt" type="text/vtt" rel="captions"/>
    </item>
    <item>
      <title>A Statistician reads JAMA</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>A Statistician reads JAMA</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2cc64bce-246e-496a-9dda-d1ee8d94ff9a</guid>
      <link>https://share.transistor.fm/s/77430ff8</link>
      <description>
        <![CDATA[<p>Dr. Scott Berry gives a statistician’s review of a randomly selected trial result published in JAMA – the FAIR-HF2 clinical trial – interrogating the frequentist paradigm and its focus on the binary outcome of the primary hypothesis test. He scrutinizes the Hochberg multiplicity adjustment, challenges the prevailing disregard for accumulated scientific evidence, and highlights the limitations of a black-and-white reading of a clinical trial of over 1,000 patients and six years of enrollment. A contrast is drawn with what a Bayesian approach, grounded in practical trial interpretation and evidence integration, would look like. The episode argues that current norms in clinical trial analysis, created by dogmatic statistical views, can obscure meaningful findings, or even mislead, and limit the utility of costly, complex studies.</p><p><strong>Key Highlights</strong></p><ul><li>FAIR-HF2 randomized 1,105 patients with heart failure and iron deficiency to intravenous ferric carboxymaltose or placebo across 70 sites, with three pre-specified co-primary analyses.</li><li>The study relied on the Hochberg procedure to control family-wise error across analyses: (1) time to first cardiovascular death or heart failure hospitalization; (2) total heart failure hospitalizations; (3) time to first event in a highly iron-deficient subgroup.</li><li>Results showed a favorable hazard ratio (0.79) and a p-value below 0.05 for the first primary composite, but statistical significance was nullified under the Hochberg multiplicity criteria as the other endpoints failed their threshold requirements.</li><li>Berry challenges the reduction of trial outcomes to discrete “significant” or “not significant” designations—critiquing the scientific and statistical culture that ignores gradient evidence in favor of only black-and-white outcomes.</li><li>He details the likelihood principle and Bayesian analysis as superior frameworks, quantifying a 98% posterior probability of benefit; he contextualizes findings with prior evidence from the HEART-FID, IRONMAN, and AFFIRM-AHF trials and published meta-analyses—arguing that isolated, negative conclusions defy cumulative data.</li><li>The discussion extends to the inefficiency of fixed trial designs, the missed value of adaptive methodologies, and the wastefulness of requiring full-scale repeat trials, each analyzed in isolation, when evidence already points strongly to a beneficial effect.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Scott Berry offers a statistician’s review of a randomized trial result published in JAMA – the FAIR-HF2 clinical trial. He interrogates the frequentist paradigm and its focus on the binary outcome of the primary hypothesis test, scrutinizes the Hochberg multiplicity adjustment, challenges the prevailing disregard for accumulated scientific evidence, and exposes the limitations of a black-and-white reading of a clinical trial spanning more than 1,000 patients and six years of enrollment. He contrasts this with what a Bayesian approach, grounded in practical trial interpretation and evidence integration, would look like. The episode argues that current norms in clinical trial analysis, shaped by dogmatic statistical views, can obscure or even mislead about meaningful findings and limit the utility of costly, complex studies.</p><p><strong>Key Highlights</strong></p><ul><li>FAIR-HF2 randomized 1,105 patients with heart failure and iron deficiency to intravenous ferric carboxymaltose or placebo across 70 sites, with three pre-specified co-primary analyses.</li><li>The study relied on the Hochberg procedure to control family-wise error across analyses: (1) time to first cardiovascular death or heart failure hospitalization; (2) total heart failure hospitalizations; (3) time to first event in a highly iron-deficient subgroup.</li><li>Results showed a favorable hazard ratio (0.79) and a p-value below 0.05 for the first primary composite, but statistical significance was nullified under the Hochberg multiplicity criteria because the other endpoints failed their thresholds.</li><li>Berry challenges the reduction of trial outcomes to discrete “significant” or “not significant” designations—critiquing a scientific and statistical culture that ignores graded evidence in favor of black-and-white outcomes.</li><li>He details the likelihood principle and Bayesian analysis as superior frameworks, quantifying a 98% posterior probability of benefit, and contextualizes the findings with prior evidence from the HEART-FID, IRONMAN, and AFFIRM-AHF trials and published meta-analyses—arguing that isolated negative conclusions defy the cumulative data.</li><li>The discussion extends to the inefficiency of fixed trial designs, the missed value of adaptive methodologies, and the wastefulness of requiring full-scale repeat trials, each analyzed in isolation, when the evidence already points strongly to a beneficial effect.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 30 Jun 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/77430ff8/fe31dbb0.mp3" length="37525597" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2343</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Dr. Scott Berry offers a statistician’s review of a randomized trial result published in JAMA – the FAIR-HF2 clinical trial. He interrogates the frequentist paradigm and its focus on the binary outcome of the primary hypothesis test, scrutinizes the Hochberg multiplicity adjustment, challenges the prevailing disregard for accumulated scientific evidence, and exposes the limitations of a black-and-white reading of a clinical trial spanning more than 1,000 patients and six years of enrollment. He contrasts this with what a Bayesian approach, grounded in practical trial interpretation and evidence integration, would look like. The episode argues that current norms in clinical trial analysis, shaped by dogmatic statistical views, can obscure or even mislead about meaningful findings and limit the utility of costly, complex studies.</p><p><strong>Key Highlights</strong></p><ul><li>FAIR-HF2 randomized 1,105 patients with heart failure and iron deficiency to intravenous ferric carboxymaltose or placebo across 70 sites, with three pre-specified co-primary analyses.</li><li>The study relied on the Hochberg procedure to control family-wise error across analyses: (1) time to first cardiovascular death or heart failure hospitalization; (2) total heart failure hospitalizations; (3) time to first event in a highly iron-deficient subgroup.</li><li>Results showed a favorable hazard ratio (0.79) and a p-value below 0.05 for the first primary composite, but statistical significance was nullified under the Hochberg multiplicity criteria because the other endpoints failed their thresholds.</li><li>Berry challenges the reduction of trial outcomes to discrete “significant” or “not significant” designations—critiquing a scientific and statistical culture that ignores graded evidence in favor of black-and-white outcomes.</li><li>He details the likelihood principle and Bayesian analysis as superior frameworks, quantifying a 98% posterior probability of benefit, and contextualizes the findings with prior evidence from the HEART-FID, IRONMAN, and AFFIRM-AHF trials and published meta-analyses—arguing that isolated negative conclusions defy the cumulative data.</li><li>The discussion extends to the inefficiency of fixed trial designs, the missed value of adaptive methodologies, and the wastefulness of requiring full-scale repeat trials, each analyzed in isolation, when the evidence already points strongly to a beneficial effect.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/77430ff8/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/77430ff8/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Seamless 2/3 Trial Designs</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Seamless 2/3 Trial Designs</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e51f905a-129a-47f9-a92c-0d6743d82d1c</guid>
      <link>https://share.transistor.fm/s/32dfc78e</link>
      <description>
        <![CDATA[<p>Scott Berry convenes co-authors Kert Viele, Joe Marion, and Lindsay Berry to discuss the statistical and developmental nuances of inferentially seamless phase 2/3 clinical trial designs. The group dissects a simple method for distributing alpha when stage 1 data are included, whether distributing alpha is a good idea at all, and the optimal allocation of sample size when stage 1 data are carried forward, all referencing their recently published work in <em>Pharmaceutical Statistics</em>.</p><p><strong>Key Highlights:</strong></p><ul><li>Systematic definition of seamless phase 2/3 trial designs, contrasting fixed, separate-phase models with integrated, inferentially seamless approaches.</li><li>Detailed explanation of the alpha adjustment required when selecting doses partway through—leveraging group sequential theory, normal approximations, and quadrature for explicit formula derivation; R code and the calculation procedure are made available for practitioners.</li><li>Exploration of the information-fraction curve for the adjusted alpha, emphasizing that the initial adjustment is numerically significant but does not inherently reduce statistical power.</li><li>Findings indicate that including stage 1 data always yields higher power and outperforms a closed testing procedure.</li><li>Discussion of when seamless trials may not be advantageous, covering operational and statistical limitations: insufficient endpoint or regulatory understanding heading into phase 3, differences in manufacturing readiness, the need for public phase 2 results for funding, and proof-of-concept hurdles; the group identifies real scenarios where seamless 2/3 designs are challenging.</li><li>Considerations for operational bias and blinding, with technical commentary on the boundaries of unblinding within company roles.</li><li>Provision of practical R code and explicit analytic guidance for calculating the adjusted alpha in seamless design protocols.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Scott Berry convenes co-authors Kert Viele, Joe Marion, and Lindsay Berry to discuss the statistical and developmental nuances of inferentially seamless phase 2/3 clinical trial designs. The group dissects a simple method for distributing alpha when stage 1 data are included, whether distributing alpha is a good idea at all, and the optimal allocation of sample size when stage 1 data are carried forward, all referencing their recently published work in <em>Pharmaceutical Statistics</em>.</p><p><strong>Key Highlights:</strong></p><ul><li>Systematic definition of seamless phase 2/3 trial designs, contrasting fixed, separate-phase models with integrated, inferentially seamless approaches.</li><li>Detailed explanation of the alpha adjustment required when selecting doses partway through—leveraging group sequential theory, normal approximations, and quadrature for explicit formula derivation; R code and the calculation procedure are made available for practitioners.</li><li>Exploration of the information-fraction curve for the adjusted alpha, emphasizing that the initial adjustment is numerically significant but does not inherently reduce statistical power.</li><li>Findings indicate that including stage 1 data always yields higher power and outperforms a closed testing procedure.</li><li>Discussion of when seamless trials may not be advantageous, covering operational and statistical limitations: insufficient endpoint or regulatory understanding heading into phase 3, differences in manufacturing readiness, the need for public phase 2 results for funding, and proof-of-concept hurdles; the group identifies real scenarios where seamless 2/3 designs are challenging.</li><li>Considerations for operational bias and blinding, with technical commentary on the boundaries of unblinding within company roles.</li><li>Provision of practical R code and explicit analytic guidance for calculating the adjusted alpha in seamless design protocols.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 23 Jun 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/32dfc78e/859e2509.mp3" length="44003548" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2748</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Scott Berry convenes co-authors Kert Viele, Joe Marion, and Lindsay Berry to discuss the statistical and developmental nuances of inferentially seamless phase 2/3 clinical trial designs. The group dissects a simple method for distributing alpha when stage 1 data are included, whether distributing alpha is a good idea at all, and the optimal allocation of sample size when stage 1 data are carried forward, all referencing their recently published work in <em>Pharmaceutical Statistics</em>.</p><p><strong>Key Highlights:</strong></p><ul><li>Systematic definition of seamless phase 2/3 trial designs, contrasting fixed, separate-phase models with integrated, inferentially seamless approaches.</li><li>Detailed explanation of the alpha adjustment required when selecting doses partway through—leveraging group sequential theory, normal approximations, and quadrature for explicit formula derivation; R code and the calculation procedure are made available for practitioners.</li><li>Exploration of the information-fraction curve for the adjusted alpha, emphasizing that the initial adjustment is numerically significant but does not inherently reduce statistical power.</li><li>Findings indicate that including stage 1 data always yields higher power and outperforms a closed testing procedure.</li><li>Discussion of when seamless trials may not be advantageous, covering operational and statistical limitations: insufficient endpoint or regulatory understanding heading into phase 3, differences in manufacturing readiness, the need for public phase 2 results for funding, and proof-of-concept hurdles; the group identifies real scenarios where seamless 2/3 designs are challenging.</li><li>Considerations for operational bias and blinding, with technical commentary on the boundaries of unblinding within company roles.</li><li>Provision of practical R code and explicit analytic guidance for calculating the adjusted alpha in seamless design protocols.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/32dfc78e/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/32dfc78e/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Spending Alpha</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Spending Alpha</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d8d90b38-ea92-4667-9748-da83ef174e73</guid>
      <link>https://share.transistor.fm/s/860e3280</link>
      <description>
        <![CDATA[<p>In this solo episode of "In the Interim...", Scott Berry, President and Senior Statistical Scientist at Berry Consultants, addresses deep-rooted confusion in the field of adaptive clinical trial design surrounding the concept of “spending alpha.” Drawing on practical experience and rigorous statistical foundations, he confronts the prevailing language and myths that conflate interim analysis with loss of type I error. He clarifies that, with planned and transparent allocation of alpha, interim analyses enable greater power, more efficient designs, and more robust clinical trials—without sacrificing statistical validity. This is a precise, fact-driven examination for those demanding technical clarity, not marketing gloss.</p><p><strong>Key Highlights</strong></p><ul><li>Explains the basics of hypothesis testing in superiority trials, highlighting why a one-sided 2.5% alpha is the operational standard despite persistent use of two-sided 5% language in clinical protocols.</li><li>Refutes the widespread belief that reviewing interim data costs available alpha, making clear that statistical error is not “penalized”—it is allocated, with potential efficiencies in average sample size and, in thoughtfully extended designs, gains in operating characteristics such as power.</li><li>Describes real-world examples, including the SEPSIS-ACT (selepressin) trial sponsored by Ferring Pharmaceuticals, which incorporated more than 20 interim analyses while maintaining a pre-specified final alpha of 0.025; underscores the necessity of transparent, prospective design and explicit documentation for regulatory acceptance.</li><li>Distinguishes between interim actions that require no alpha adjustment, such as futility analyses or response-adaptive randomization, and early efficacy analyses, which must be precisely modeled to preserve type I error.</li><li>Challenges terminology like “penalty” and “spending alpha,” asserting that imprecise language fosters misunderstanding and leads to missed opportunities in adaptive trial efficiency.</li><li>Emphasizes the crucial role of prospective, simulation-based planning and clear protocol definition at every interim, anchoring statistical practice in measured evidence, not historical convention.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this solo episode of "In the Interim...", Scott Berry, President and Senior Statistical Scientist at Berry Consultants, addresses deep-rooted confusion in the field of adaptive clinical trial design surrounding the concept of “spending alpha.” Drawing on practical experience and rigorous statistical foundations, he confronts the prevailing language and myths that conflate interim analysis with loss of type I error. He clarifies that, with planned and transparent allocation of alpha, interim analyses enable greater power, more efficient designs, and more robust clinical trials—without sacrificing statistical validity. This is a precise, fact-driven examination for those demanding technical clarity, not marketing gloss.</p><p><strong>Key Highlights</strong></p><ul><li>Explains the basics of hypothesis testing in superiority trials, highlighting why a one-sided 2.5% alpha is the operational standard despite persistent use of two-sided 5% language in clinical protocols.</li><li>Refutes the widespread belief that reviewing interim data costs available alpha, making clear that statistical error is not “penalized”—it is allocated, with potential efficiencies in average sample size and, in thoughtfully extended designs, gains in operating characteristics such as power.</li><li>Describes real-world examples, including the SEPSIS-ACT (selepressin) trial sponsored by Ferring Pharmaceuticals, which incorporated more than 20 interim analyses while maintaining a pre-specified final alpha of 0.025; underscores the necessity of transparent, prospective design and explicit documentation for regulatory acceptance.</li><li>Distinguishes between interim actions that require no alpha adjustment, such as futility analyses or response-adaptive randomization, and early efficacy analyses, which must be precisely modeled to preserve type I error.</li><li>Challenges terminology like “penalty” and “spending alpha,” asserting that imprecise language fosters misunderstanding and leads to missed opportunities in adaptive trial efficiency.</li><li>Emphasizes the crucial role of prospective, simulation-based planning and clear protocol definition at every interim, anchoring statistical practice in measured evidence, not historical convention.</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 09 Jun 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/860e3280/41d91052.mp3" length="36497825" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2279</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this solo episode of "In the Interim...", Scott Berry, President and Senior Statistical Scientist at Berry Consultants, addresses deep-rooted confusion in the field of adaptive clinical trial design surrounding the concept of “spending alpha.” Drawing on practical experience and rigorous statistical foundations, he confronts the prevailing language and myths that conflate interim analysis with loss of type I error. He clarifies that, with planned and transparent allocation of alpha, interim analyses enable greater power, more efficient designs, and more robust clinical trials—without sacrificing statistical validity. This is a precise, fact-driven examination for those demanding technical clarity, not marketing gloss.</p><p><strong>Key Highlights</strong></p><ul><li>Explains the basics of hypothesis testing in superiority trials, highlighting why a one-sided 2.5% alpha is the operational standard despite persistent use of two-sided 5% language in clinical protocols.</li><li>Refutes the widespread belief that reviewing interim data costs available alpha, making clear that statistical error is not “penalized”—it is allocated, with potential efficiencies in average sample size and, in thoughtfully extended designs, gains in operating characteristics such as power.</li><li>Describes real-world examples, including the SEPSIS-ACT (selepressin) trial sponsored by Ferring Pharmaceuticals, which incorporated more than 20 interim analyses while maintaining a pre-specified final alpha of 0.025; underscores the necessity of transparent, prospective design and explicit documentation for regulatory acceptance.</li><li>Distinguishes between interim actions that require no alpha adjustment, such as futility analyses or response-adaptive randomization, and early efficacy analyses, which must be precisely modeled to preserve type I error.</li><li>Challenges terminology like “penalty” and “spending alpha,” asserting that imprecise language fosters misunderstanding and leads to missed opportunities in adaptive trial efficiency.</li><li>Emphasizes the crucial role of prospective, simulation-based planning and clear protocol definition at every interim, anchoring statistical practice in measured evidence, not historical convention.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/860e3280/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/860e3280/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Prof Craig Ritchie: Looking Back at EPAD, moving forward in Alzheimer's Disease</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Prof Craig Ritchie: Looking Back at EPAD, moving forward in Alzheimer's Disease</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ac9f1f5-59c5-41f0-874f-66cb4edf2ae7</guid>
      <link>https://share.transistor.fm/s/82d32ced</link>
      <description>
        <![CDATA[<p>Scott Berry, Founder of Berry Consultants, interviews Professor Craig Ritchie—specialist in brain health and neurodegenerative diseases, Chief Investigator of EPAD (European Prevention of Alzheimer’s Dementia), and CEO of Scottish Brain Sciences—for a broad discussion of platform trial methodology in Alzheimer’s disease research and a look toward the future of drug development. The conversation dissects the origins and ambitions of the EPAD initiative, the conception and scientific function of the readiness cohort, and the pragmatic obstacles to deploying innovative trial models within rigid institutional frameworks. Professor Ritchie details why the EPAD platform trial never launched an interventional arm, explores the fallout and industry shifts following COVID-19, and maps how Scottish Brain Sciences is directly applying these lessons—establishing the IONA readiness cohort to drive integration between clinical research and clinical practice.</p><p>Key Highlights:<br>• Systematic review of EPAD’s objectives, specifically the platform trial and the development of a readiness cohort to streamline patient recruitment<br>• Detailed account of the practical barriers that prevented EPAD from launching interventional arms, including pharmaceutical sponsor reluctance, inflexible IMI funding mechanisms, and the inherent risk aversion surrounding novel platform structures<br>• Discussion of participant contribution to research design and delivery—an early demonstration of the patient involvement models now broadly recognized as best practice<br>• Analysis of COVID-19's dual impact—derailing EPAD's momentum while catalyzing a change in industry and regulatory acceptance of platform trials in drug development<br>• Tracing the origins and operationalization of the IONA readiness cohort at Scottish Brain Sciences, including direct integration of recruitment, biobanking, and engagement systems to address the translational gap in dementia medicine<br>• Evidence-based critique of the persistent use of conventional clinical trial formats in Alzheimer’s disease, dissecting the operational, financial, and data limitations that stall progress</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Scott Berry, Founder of Berry Consultants, interviews Professor Craig Ritchie—specialist in brain health and neurodegenerative diseases, Chief Investigator of EPAD (European Prevention of Alzheimer’s Dementia), and CEO of Scottish Brain Sciences—for a broad discussion of platform trial methodology in Alzheimer’s disease research and a look toward the future of drug development. The conversation dissects the origins and ambitions of the EPAD initiative, the conception and scientific function of the readiness cohort, and the pragmatic obstacles to deploying innovative trial models within rigid institutional frameworks. Professor Ritchie details why the EPAD platform trial never launched an interventional arm, explores the fallout and industry shifts following COVID-19, and maps how Scottish Brain Sciences is directly applying these lessons—establishing the IONA readiness cohort to drive integration between clinical research and clinical practice.</p><p>Key Highlights:<br>• Systematic review of EPAD’s objectives, specifically the platform trial and the development of a readiness cohort to streamline patient recruitment<br>• Detailed account of the practical barriers that prevented EPAD from launching interventional arms, including pharmaceutical sponsor reluctance, inflexible IMI funding mechanisms, and the inherent risk aversion surrounding novel platform structures<br>• Discussion of participant contribution to research design and delivery—an early demonstration of the patient involvement models now broadly recognized as best practice<br>• Analysis of COVID-19's dual impact—derailing EPAD's momentum while catalyzing a change in industry and regulatory acceptance of platform trials in drug development<br>• Tracing the origins and operationalization of the IONA readiness cohort at Scottish Brain Sciences, including direct integration of recruitment, biobanking, and engagement systems to address the translational gap in dementia medicine<br>• Evidence-based critique of the persistent use of conventional clinical trial formats in Alzheimer’s disease, dissecting the operational, financial, and data limitations that stall progress</p>]]>
      </content:encoded>
      <pubDate>Mon, 02 Jun 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/82d32ced/d0602c33.mp3" length="35830908" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2237</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Scott Berry, Founder of Berry Consultants, interviews Professor Craig Ritchie—specialist in brain health and neurodegenerative diseases, Chief Investigator of EPAD (European Prevention of Alzheimer’s Dementia), and CEO of Scottish Brain Sciences—for a broad discussion of platform trial methodology in Alzheimer’s disease research and a look toward the future of drug development. The conversation dissects the origins and ambitions of the EPAD initiative, the conception and scientific function of the readiness cohort, and the pragmatic obstacles to deploying innovative trial models within rigid institutional frameworks. Professor Ritchie details why the EPAD platform trial never launched an interventional arm, explores the fallout and industry shifts following COVID-19, and maps how Scottish Brain Sciences is directly applying these lessons—establishing the IONA readiness cohort to drive integration between clinical research and clinical practice.</p><p>Key Highlights:<br>• Systematic review of EPAD’s objectives, specifically the platform trial and the development of a readiness cohort to streamline patient recruitment<br>• Detailed account of the practical barriers that prevented EPAD from launching interventional arms, including pharmaceutical sponsor reluctance, inflexible IMI funding mechanisms, and the inherent risk aversion surrounding novel platform structures<br>• Discussion of participant contribution to research design and delivery—an early demonstration of the patient involvement models now broadly recognized as best practice<br>• Analysis of COVID-19's dual impact—derailing EPAD's momentum while catalyzing a change in industry and regulatory acceptance of platform trials in drug development<br>• Tracing the origins and operationalization of the IONA readiness cohort at Scottish Brain Sciences, including direct integration of recruitment, biobanking, and engagement systems to address the translational gap in dementia medicine<br>• Evidence-based critique of the persistent use of conventional clinical trial formats in Alzheimer’s disease, dissecting the operational, financial, and data limitations that stall progress</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/82d32ced/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/82d32ced/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Drug Developers' Lessons from Sports: Regression-to-the-Mean</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Drug Developers' Lessons from Sports: Regression-to-the-Mean</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0f61581b-9652-46b7-86ae-c5ff3001e8c2</guid>
      <link>https://share.transistor.fm/s/83922b13</link>
      <description>
        <![CDATA[<p>In this engaging episode of "In the Interim...", host Dr. Scott Berry is joined by Dr. Nick Berry to explore the intriguing statistical parallels between sports and drug development, focusing on the concept of "regression-to-the-mean." Presenting examples that seem clear in sports, they discuss how these insights can illuminate the challenges faced in clinical trials and in scientific inference for medical decision making. Whether you're a statistician, drug developer, or sports enthusiast, this episode offers valuable perspectives on data interpretation and statistical phenomena.</p><p>Key Highlights:<br>• Discussion of how lessons from sports can benefit drug developers, emphasizing the concept of regression-to-the-mean.<br>• Personal anecdotes from Scott and Nick's experiences, illustrating statistical learning through sports.<br>• Examination of the regression-to-the-mean phenomenon through examples from baseball and golf.<br>• Exploration of how misunderstanding regression-to-the-mean can lead to poor decision-making in clinical trials.<br>• Insights into placebo effects and how they are often confused with natural statistical phenomena.<br>• How regression-to-the-mean shapes expectations in financial markets and personal finance decision-making.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this engaging episode of "In the Interim...", host Dr. Scott Berry is joined by Dr. Nick Berry to explore the intriguing statistical parallels between sports and drug development, focusing on the concept of "regression-to-the-mean." Presenting examples that seem clear in sports, they discuss how these insights can illuminate the challenges faced in clinical trials and in scientific inference for medical decision making. Whether you're a statistician, drug developer, or sports enthusiast, this episode offers valuable perspectives on data interpretation and statistical phenomena.</p><p>Key Highlights:<br>• Discussion of how lessons from sports can benefit drug developers, emphasizing the concept of regression-to-the-mean.<br>• Personal anecdotes from Scott and Nick's experiences, illustrating statistical learning through sports.<br>• Examination of the regression-to-the-mean phenomenon through examples from baseball and golf.<br>• Exploration of how misunderstanding regression-to-the-mean can lead to poor decision-making in clinical trials.<br>• Insights into placebo effects and how they are often confused with natural statistical phenomena.<br>• How regression-to-the-mean shapes expectations in financial markets and personal finance decision-making.</p>]]>
      </content:encoded>
      <pubDate>Mon, 26 May 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/83922b13/46efa256.mp3" length="39547719" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2469</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this engaging episode of "In the Interim...", host Dr. Scott Berry is joined by Dr. Nick Berry to explore the intriguing statistical parallels between sports and drug development, focusing on the concept of "regression-to-the-mean." Presenting examples that seem clear in sports, they discuss how these insights can illuminate the challenges faced in clinical trials and scientific inferences in medical decision-making. Whether you're a statistician, drug developer, or sports enthusiast, this episode offers valuable perspectives on data interpretation and statistical phenomena.</p><p>Key Highlights:<br>• Discussion on how lessons from sports can benefit drug developers, emphasizing the concept of regression-to-the-mean.<br>• Personal anecdotes from Scott and Nick's experiences, illustrating statistical learning through sports.<br>• Examination of the regression-to-the-mean phenomenon through examples from baseball and golf.<br>• Exploration of how misunderstanding regression-to-the-mean can lead to poor decision-making in clinical trials.<br>• Insights into placebo effects and how they are often confused with natural statistical phenomena.<br>• How regression-to-the-mean impacts expectations in financial markets and personal finance decision-making.</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/83922b13/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/83922b13/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>DSMBs in Adaptive Trials with Roger Lewis</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>DSMBs in Adaptive Trials with Roger Lewis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4c02fee7-d351-4881-add6-a1b20f46cb88</guid>
      <link>https://share.transistor.fm/s/56986d2a</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry is true to the name of the podcast, as he discusses the unblinded world of adaptive clinical trials alongside Dr. Roger Lewis, a renowned expert in both statistical science and clinical medicine. Together, they explore the critical role of Data Safety Monitoring Boards (DSMBs) in safeguarding trial integrity and participant safety specifically for adaptive trials. The discussion navigates the complexities and challenges faced by DSMBs, particularly in adaptive trial contexts, offering valuable insights for anyone involved in clinical trial science.</p><p>Key Highlights<br>• Overview of the fundamental role and responsibilities of DSMBs in clinical trials.<br>• Discussion on how DSMBs ensure scientific integrity and participant safety in adaptive trials.<br>• Differences in DSMB involvement between traditional and adaptive trial designs.<br>• The evolving skillset required for DSMB members in the context of complex, adaptive trials.<br>• Exploration of the critical collaboration between DSMBs and Statistical Analysis Committees.</p><p>Quotes<br>• "The DSMB is tasked with balancing efficacy and safety at a very fundamental level." — Roger Lewis<br>• "Adaptive trials expand the role of the DSMB to ensure trials are conducted as intended." — Roger Lewis<br>• "The DSMB needs to review efficacy and safety to appropriately balance them." — Roger Lewis</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry is true to the name of the podcast, as he discusses the unblinded world of adaptive clinical trials alongside Dr. Roger Lewis, a renowned expert in both statistical science and clinical medicine. Together, they explore the critical role of Data Safety Monitoring Boards (DSMBs) in safeguarding trial integrity and participant safety specifically for adaptive trials. The discussion navigates the complexities and challenges faced by DSMBs, particularly in adaptive trial contexts, offering valuable insights for anyone involved in clinical trial science.</p><p>Key Highlights<br>• Overview of the fundamental role and responsibilities of DSMBs in clinical trials.<br>• Discussion on how DSMBs ensure scientific integrity and participant safety in adaptive trials.<br>• Differences in DSMB involvement between traditional and adaptive trial designs.<br>• The evolving skillset required for DSMB members in the context of complex, adaptive trials.<br>• Exploration of the critical collaboration between DSMBs and Statistical Analysis Committees.</p><p>Quotes<br>• "The DSMB is tasked with balancing efficacy and safety at a very fundamental level." — Roger Lewis<br>• "Adaptive trials expand the role of the DSMB to ensure trials are conducted as intended." — Roger Lewis<br>• "The DSMB needs to review efficacy and safety to appropriately balance them." — Roger Lewis</p>]]>
      </content:encoded>
      <pubDate>Mon, 19 May 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/56986d2a/e605f3b8.mp3" length="36134649" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2256</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim…", host Dr. Scott Berry is true to the name of the podcast, as he discusses the unblinded world of adaptive clinical trials alongside Dr. Roger Lewis, a renowned expert in both statistical science and clinical medicine. Together, they explore the critical role of Data Safety Monitoring Boards (DSMBs) in safeguarding trial integrity and participant safety specifically for adaptive trials. The discussion navigates the complexities and challenges faced by DSMBs, particularly in adaptive trial contexts, offering valuable insights for anyone involved in clinical trial science.</p><p>Key Highlights<br>• Overview of the fundamental role and responsibilities of DSMBs in clinical trials.<br>• Discussion on how DSMBs ensure scientific integrity and participant safety in adaptive trials.<br>• Differences in DSMB involvement between traditional and adaptive trial designs.<br>• The evolving skillset required for DSMB members in the context of complex, adaptive trials.<br>• Exploration of the critical collaboration between DSMBs and Statistical Analysis Committees.</p><p>Quotes<br>• "The DSMB is tasked with balancing efficacy and safety at a very fundamental level." — Roger Lewis<br>• "Adaptive trials expand the role of the DSMB to ensure trials are conducted as intended." — Roger Lewis<br>• "The DSMB needs to review efficacy and safety to appropriately balance them." — Roger Lewis</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/56986d2a/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/56986d2a/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Platform Trial in Psychiatry with Dr. Husseini Manji</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Platform Trial in Psychiatry with Dr. Husseini Manji</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f25da3ba-0b51-4191-93da-57ce4a1f9bc9</guid>
      <link>https://share.transistor.fm/s/bf5728f5</link>
      <description>
        <![CDATA[<p>In the latest episode of "In the Interim…", Dr. Scott Berry and Dr. Mike Krams sit down with Dr. Husseini Manji to explore the potential of platform trials in advancing precision medicine within psychiatry. Listen as we discuss how an adaptive platform trial could transform drug development, paving the way for breakthroughs in understanding and treating psychiatric disorders.</p><p><strong>Key Highlights</strong>:</p><ul><li>Overview of the burden of serious mental illness and the pressing need for innovative treatment approaches.</li><li>Discussion on precision psychiatry and the potential of a platform trial to address the heterogeneity of psychiatric disorders.</li><li>Insights into the advantages of biomarker-based adaptive trials in improving drug development success rates.</li><li>Examination of potential sponsorship models for platform trials, emphasizing patient and industry collaboration.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Mental illnesses represent a significant global challenge with a staggering unmet need." – Husseini Manji</li><li>"There's a real excitement about precision psychiatry—moving away from a one-size-fits-all approach." – Husseini Manji</li><li>"The patient perspective is crucial for driving significant advances in psychiatric treatment." – Mike Krams</li><li>"We believe that precision medicine biomarker-based adaptive trials could be game-changing in this space." – Husseini Manji</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the latest episode of "In the Interim…", Dr. Scott Berry and Dr. Mike Krams sit down with Dr. Husseini Manji to explore the potential of platform trials in advancing precision medicine within psychiatry. Listen as we discuss how an adaptive platform trial could transform drug development, paving the way for breakthroughs in understanding and treating psychiatric disorders.</p><p><strong>Key Highlights</strong>:</p><ul><li>Overview of the burden of serious mental illness and the pressing need for innovative treatment approaches.</li><li>Discussion on precision psychiatry and the potential of a platform trial to address the heterogeneity of psychiatric disorders.</li><li>Insights into the advantages of biomarker-based adaptive trials in improving drug development success rates.</li><li>Examination of potential sponsorship models for platform trials, emphasizing patient and industry collaboration.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Mental illnesses represent a significant global challenge with a staggering unmet need." – Husseini Manji</li><li>"There's a real excitement about precision psychiatry—moving away from a one-size-fits-all approach." – Husseini Manji</li><li>"The patient perspective is crucial for driving significant advances in psychiatric treatment." – Mike Krams</li><li>"We believe that precision medicine biomarker-based adaptive trials could be game-changing in this space." – Husseini Manji</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 12 May 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/bf5728f5/4fce6cea.mp3" length="37674836" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2352</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the latest episode of "In the Interim…", Dr. Scott Berry and Dr. Mike Krams sit down with Dr. Husseini Manji to explore the potential of platform trials in advancing precision medicine within psychiatry. Listen as we discuss how an adaptive platform trial could transform drug development, paving the way for breakthroughs in understanding and treating psychiatric disorders.</p><p><strong>Key Highlights</strong>:</p><ul><li>Overview of the burden of serious mental illness and the pressing need for innovative treatment approaches.</li><li>Discussion on precision psychiatry and the potential of a platform trial to address the heterogeneity of psychiatric disorders.</li><li>Insights into the advantages of biomarker-based adaptive trials in improving drug development success rates.</li><li>Examination of potential sponsorship models for platform trials, emphasizing patient and industry collaboration.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Mental illnesses represent a significant global challenge with a staggering unmet need." – Husseini Manji</li><li>"There's a real excitement about precision psychiatry—moving away from a one-size-fits-all approach." – Husseini Manji</li><li>"The patient perspective is crucial for driving significant advances in psychiatric treatment." – Mike Krams</li><li>"We believe that precision medicine biomarker-based adaptive trials could be game-changing in this space." – Husseini Manji</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bf5728f5/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/bf5728f5/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Implementing Adaptive Trials</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Implementing Adaptive Trials</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7b36ddc0-0b4b-49dc-a8ab-601804f6058b</guid>
      <link>https://share.transistor.fm/s/ec17dbef</link>
      <description>
        <![CDATA[<p>In Episode 11 of "In the Interim…", we discuss the nuances of implementing adaptive clinical trials with Dr. Anna McGlothlin and Dr. Michelle Detry from Berry Consultants. Both Anna and Michelle, seasoned Directors and Senior Statistical Scientists, shed light on the critical role their team plays in innovative adaptive clinical trials. They describe the frequent challenges and highlight the importance of high-quality trial implementation to ensure accurate and reliable outcomes, making this episode a must-listen for anyone involved in clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Insight into the statistical implementation of adaptive clinical trials.</li><li>Logistics spanning data handling, running the statistical model, and interacting with Data and Safety Monitoring Boards (DSMBs).</li><li>Preparatory steps required before an adaptive analysis, ensuring the pre-specified design is adhered to and carried out as planned.</li><li>The importance of understanding data in real time and dealing with interim data idiosyncrasies.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"We want the adaptive part of the trial to be invisible to sites—analyses might happen in the background without interference." – Scott Berry</li><li>"Our goal is five business days from when we receive the data to when we send the result to the DSMB." – Michelle Detry</li><li>"We always want to make sure that we have time, not just to hit a button and run an analysis and spit out a table, but to think and make sure that the results we’re producing make sense." – Anna McGlothlin</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In Episode 11 of "In the Interim…", we discuss the nuances of implementing adaptive clinical trials with Dr. Anna McGlothlin and Dr. Michelle Detry from Berry Consultants. Both Anna and Michelle, seasoned Directors and Senior Statistical Scientists, shed light on the critical role their team plays in innovative adaptive clinical trials. They describe the frequent challenges and highlight the importance of high-quality trial implementation to ensure accurate and reliable outcomes, making this episode a must-listen for anyone involved in clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Insight into the statistical implementation of adaptive clinical trials.</li><li>Logistics spanning data handling, running the statistical model, and interacting with Data and Safety Monitoring Boards (DSMBs).</li><li>Preparatory steps required before an adaptive analysis, ensuring the pre-specified design is adhered to and carried out as planned.</li><li>The importance of understanding data in real time and dealing with interim data idiosyncrasies.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"We want the adaptive part of the trial to be invisible to sites—analyses might happen in the background without interference." – Scott Berry</li><li>"Our goal is five business days from when we receive the data to when we send the result to the DSMB." – Michelle Detry</li><li>"We always want to make sure that we have time, not just to hit a button and run an analysis and spit out a table, but to think and make sure that the results we’re producing make sense." – Anna McGlothlin</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 05 May 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/ec17dbef/830bb82b.mp3" length="39796373" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2485</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In Episode 11 of "In the Interim…", we discuss the nuances of implementing adaptive clinical trials with Dr. Anna McGlothlin and Dr. Michelle Detry from Berry Consultants. Both Anna and Michelle, seasoned Directors and Senior Statistical Scientists, shed light on the critical role their team plays in innovative adaptive clinical trials. They describe the frequent challenges and highlight the importance of high-quality trial implementation to ensure accurate and reliable outcomes, making this episode a must-listen for anyone involved in clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Insight into the statistical implementation of adaptive clinical trials.</li><li>Logistics spanning data handling, running the statistical model, and interacting with Data and Safety Monitoring Boards (DSMBs).</li><li>Preparatory steps required before an adaptive analysis, ensuring the pre-specified design is adhered to and carried out as planned.</li><li>The importance of understanding data in real time and dealing with interim data idiosyncrasies.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"We want the adaptive part of the trial to be invisible to sites—analyses might happen in the background without interference." – Scott Berry</li><li>"Our goal is five business days from when we receive the data to when we send the result to the DSMB." – Michelle Detry</li><li>"We always want to make sure that we have time, not just to hit a button and run an analysis and spit out a table, but to think and make sure that the results we’re producing make sense." – Anna McGlothlin</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ec17dbef/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ec17dbef/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Revisiting Seamless 2/3 Trial for GLP-1 Agonist</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Revisiting Seamless 2/3 Trial for GLP-1 Agonist</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f74722ec-2f0c-4eb1-9e8a-40f5c22e1d19</guid>
      <link>https://share.transistor.fm/s/0ef0f6e6</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim..." we revisit the groundbreaking seamless phase 2/3 clinical trial for the GLP-1 agonist, dulaglutide—better known as Trulicity. We discuss the intricacies of the adaptive trial design and the unique features that helped expedite development by 12-18 months. Listeners will gain insight into how Bayesian algorithms and innovative statistical methods were pivotal in navigating a complex trial design, benefiting Eli Lilly's pipeline and changing the landscape of diabetes treatment.</p><p><strong>Key Highlights</strong>:</p><ul><li>Outline of the trial design and the barriers faced during its inception in 2007-2008.</li><li>Explanation of the Clinical Utility Index and its role in adaptive randomization.</li><li>The DSMB's role and interaction with Bayesian decision-making models.</li><li>Simulation-based design to optimize development efficiencies.</li><li>Insights into the predictive power of the trial on weight loss outcomes in subsequent trials.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"The trial was run entirely by Bayesian algorithms." – Scott Berry</li><li>"They believed this utility function was absolutely the right way to go forward." – Scott Berry</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim..." we revisit the groundbreaking seamless phase 2/3 clinical trial for the GLP-1 agonist, dulaglutide—better known as Trulicity. We discuss the intricacies of the adaptive trial design and the unique features that helped expedite development by 12-18 months. Listeners will gain insight into how Bayesian algorithms and innovative statistical methods were pivotal in navigating a complex trial design, benefiting Eli Lilly's pipeline and changing the landscape of diabetes treatment.</p><p><strong>Key Highlights</strong>:</p><ul><li>Outline of the trial design and the barriers faced during its inception in 2007-2008.</li><li>Explanation of the Clinical Utility Index and its role in adaptive randomization.</li><li>The DSMB's role and interaction with Bayesian decision-making models.</li><li>Simulation-based design to optimize development efficiencies.</li><li>Insights into the predictive power of the trial on weight loss outcomes in subsequent trials.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"The trial was run entirely by Bayesian algorithms." – Scott Berry</li><li>"They believed this utility function was absolutely the right way to go forward." – Scott Berry</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 28 Apr 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/0ef0f6e6/2a99178d.mp3" length="40954138" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2557</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim..." we revisit the groundbreaking seamless phase 2/3 clinical trial for the GLP-1 agonist, dulaglutide—better known as Trulicity. We discuss the intricacies of the adaptive trial design and the unique features that helped expedite development by 12-18 months. Listeners will gain insight into how Bayesian algorithms and innovative statistical methods were pivotal in navigating a complex trial design, benefiting Eli Lilly's pipeline and changing the landscape of diabetes treatment.</p><p><strong>Key Highlights</strong>:</p><ul><li>Outline of the trial design and the barriers faced during its inception in 2007-2008.</li><li>Explanation of the Clinical Utility Index and its role in adaptive randomization.</li><li>The DSMB's role and interaction with Bayesian decision-making models.</li><li>Simulation-based design to optimize development efficiencies.</li><li>Insights into the predictive power of the trial on weight loss outcomes in subsequent trials.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"The trial was run entirely by Bayesian algorithms." – Scott Berry</li><li>"They believed this utility function was absolutely the right way to go forward." – Scott Berry</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/sMT7m6cLBxpBe68Y93f4thTn4HeRQul45USdMF7yR40/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZTBl/ZjBlMmFkZjU3NjYx/OTI0MmYzY2E0NWQ0/OTIyMC5wbmc.jpg">Don Berry</podcast:person>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/0ef0f6e6/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/0ef0f6e6/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>I-SPY 2 to GBM AGILE and Beyond</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>I-SPY 2 to GBM AGILE and Beyond</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">13e03ea4-54c0-4e1a-98d9-a346c01bc341</guid>
      <link>https://share.transistor.fm/s/441087aa</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim...," we sit down with Dr. Meredith Buxton to explore the evolution of platform trials from I-SPY 2 to GBM AGILE and beyond. With a rich history in innovative trial design, Meredith shares the journey from pioneering adaptive clinical trials in breast cancer with I-SPY 2 to her current role at the Global Coalition for Adaptive Research (GCAR). This conversation offers insights into accelerating clinical trial timelines, innovative operational frameworks, and their applications across multiple medical domains, making it a must-listen for anyone involved in clinical development and platform trials.</p><p>Key Highlights:</p><p>• Meredith Buxton discusses the origins and groundbreaking operations of the I-SPY platform in breast cancer.<br>• Exploration of how the I-SPY 2 model inspired subsequent platform trials in diverse areas such as glioblastoma and COVID-19.<br>• GCAR's role as a non-profit entity to foster adaptive trial designs and Meredith’s influential contributions to its formation and success.<br>• Examination of the operational complexities and regulatory considerations essential for modern platform trials.<br>• Insights into Meredith’s vision for the future of drug development and the ongoing necessity for innovation in trial design.</p><p>Quotes:</p><p>• “The ideas of this are groundbreaking in many ways.” – Scott Berry<br>• "The I-SPY2 model could be replicated in other spaces." – Meredith Buxton</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim...," we sit down with Dr. Meredith Buxton to explore the evolution of platform trials from I-SPY 2 to GBM AGILE and beyond. With a rich history in innovative trial design, Meredith shares the journey from pioneering adaptive clinical trials in breast cancer with I-SPY 2 to her current role at the Global Coalition for Adaptive Research (GCAR). This conversation offers insights into accelerating clinical trial timelines, innovative operational frameworks, and their applications across multiple medical domains, making it a must-listen for anyone involved in clinical development and platform trials.</p><p>Key Highlights:</p><p>• Meredith Buxton discusses the origins and groundbreaking operations of the I-SPY platform in breast cancer.<br>• Exploration of how the I-SPY 2 model inspired subsequent platform trials in diverse areas such as glioblastoma and COVID-19.<br>• GCAR's role as a non-profit entity to foster adaptive trial designs and Meredith’s influential contributions to its formation and success.<br>• Examination of the operational complexities and regulatory considerations essential for modern platform trials.<br>• Insights into Meredith’s vision for the future of drug development and the ongoing necessity for innovation in trial design.</p><p>Quotes:</p><p>• “The ideas of this are groundbreaking in many ways.” – Scott Berry<br>• "The I-SPY2 model could be replicated in other spaces." – Meredith Buxton</p>]]>
      </content:encoded>
      <pubDate>Mon, 21 Apr 2025 06:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/441087aa/c8a80894.mp3" length="32138944" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2006</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim...," we sit down with Dr. Meredith Buxton to explore the evolution of platform trials from I-SPY 2 to GBM AGILE and beyond. With a rich history in innovative trial design, Meredith shares the journey from pioneering adaptive clinical trials in breast cancer with I-SPY 2 to her current role at the Global Coalition for Adaptive Research (GCAR). This conversation offers insights into accelerating clinical trial timelines, innovative operational frameworks, and their applications across multiple medical domains, making it a must-listen for anyone involved in clinical development and platform trials.</p><p>Key Highlights:</p><p>• Meredith Buxton discusses the origins and groundbreaking operations of the I-SPY platform in breast cancer.<br>• Exploration of how the I-SPY 2 model inspired subsequent platform trials in diverse areas such as glioblastoma and COVID-19.<br>• GCAR's role as a non-profit entity to foster adaptive trial designs and Meredith’s influential contributions to its formation and success.<br>• Examination of the operational complexities and regulatory considerations essential for modern platform trials.<br>• Insights into Meredith’s vision for the future of drug development and the ongoing necessity for innovation in trial design.</p><p>Quotes:</p><p>• “The ideas of this are groundbreaking in many ways.” – Scott Berry<br>• "The I-SPY2 model could be replicated in other spaces." – Meredith Buxton</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/441087aa/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/441087aa/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>External Data in Clinical Trials</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>External Data in Clinical Trials</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">100ad58a-3b39-4abd-bd61-d9d40774c0e0</guid>
      <link>https://share.transistor.fm/s/620729e9</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim...," Scott Berry and Kert Viele navigate the nuanced debate surrounding the integration of external data in clinical trials. Discover the implications, potential benefits, and pitfalls of leveraging historical and real-world evidence in the analysis of clinical trials.</p><p><strong>Key Highlights</strong>:</p><p>• Exploration of how external data can influence clinical trial analyses and the inherent risks versus rewards.<br>• Examination of the frequentist versus Bayesian perspectives on data integration.<br>• Discussion of real-world cases where external data has been used.<br>• Debate on the conservative nature of current scientific approaches and how they may hinder progress.<br>• Insight into the future of clinical trials harnessing external data – a step towards better medical science.</p><p><strong>Quotes</strong>:</p><ul><li>"If prior data isn't generally leading us in the right direction, we ought to reconsider the basis of scientific inquiry." – Kert Viele</li><li>"The industry's hesitance to use existing data slows innovation and limits our ability to bring effective treatments to market." – Scott Berry</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim...," Scott Berry and Kert Viele navigate the nuanced debate surrounding the integration of external data in clinical trials. Discover the implications and potential benefits and pitfalls of leveraging historical and real-world evidence in the analysis of clinical trials. </p><p><strong>Key Highlights</strong>:</p><p>• Exploration of how external data can influence clinical trial analyses and the inherent risks versus rewards.<br>• Examination of the frequentist versus Bayesian perspectives on data integration.<br>• Discussion of real-world cases where external data has been used.<br>• Debate on the conservative nature of current scientific approaches and how they may hinder progress.<br>• Insight into the future of clinical trials harnessing external data – a step towards better medical science.</p><p><strong>Quotes</strong>:</p><ul><li>"If prior data isn't generally leading us in the right direction, we ought to reconsider the basis of scientific inquiry." – Kert Viele</li><li>"The industry's hesitance to use existing data slows innovation and limits our ability to bring effective treatments to market." – Scott Berry</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 14 Apr 2025 03:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/620729e9/cbb61dbf.mp3" length="27474521" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1715</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim...," Scott Berry and Kert Viele navigate the nuanced debate surrounding the integration of external data in clinical trials. Discover the implications and potential benefits and pitfalls of leveraging historical and real-world evidence in the analysis of clinical trials. </p><p><strong>Key Highlights</strong>:</p><p>• Exploration of how external data can influence clinical trial analyses and the inherent risks versus rewards.<br>• Examination of the frequentist versus Bayesian perspectives on data integration.<br>• Discussion of real-world cases where external data has been used.<br>• Debate on the conservative nature of current scientific approaches and how they may hinder progress.<br>• Insight into the future of clinical trials harnessing external data – a step towards better medical science.</p><p><strong>Quotes</strong>:</p><ul><li>"If prior data isn't generally leading us in the right direction, we ought to reconsider the basis of scientific inquiry." – Kert Viele</li><li>"The industry's hesitance to use existing data slows innovation and limits our ability to bring effective treatments to market." – Scott Berry</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/620729e9/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/620729e9/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Remembering Jimmie Savage</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>Remembering Jimmie Savage</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">48b3d591-dcd9-419f-84fc-6ee5de9e253f</guid>
      <link>https://share.transistor.fm/s/9a04de86</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim," Don Berry shares the life and work of Jimmie Savage, his advisor and a legendary figure in Bayesian statistics. Hosted by Scott Berry, the discussion reveals the personal and professional experiences that shaped Savage's groundbreaking contributions. Discover the intricacies of Savage's influence on statistical thought and his profound legacy, from his tragic childhood to a lasting effect on Bayesian statistics and scientific thought.</p><p><strong>Key Highlights</strong>:</p><p>• Don Berry shares the personal story of Jimmie Savage's troubled childhood and how it influenced his work and personality.<br>• Insights into Savage's pioneering role as the father of modern Bayesian statistics.<br>• Discussion on Savage's varied interests and collaborations with figures like Milton Friedman and John von Neumann.<br>• Don Berry recounts his academic experiences alongside Savage and his own journey into clinical trial design.<br>• Exploration of Savage's legacy through his students and his axiomatic approach to subjective probability.</p><p><strong>Quotes</strong>:</p><p>• "I think he's the father of modern Bayesian statistics. How can you argue about that?" – Don Berry<br>• "The world around you when you're with Savage is tingling with intellect." – Don Berry<br>• "We probably wouldn't exist... if it were not for him." – Don Berry</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim," Don Berry shares the life and work of Jimmie Savage, his advisor and a legendary figure in Bayesian statistics. Hosted by Scott Berry, the discussion reveals the personal and professional experiences that shaped Savage's groundbreaking contributions. Discover the intricacies of Savage's influence on statistical thought and his profound legacy, from his tragic childhood to a lasting effect on Bayesian statistics and scientific thought.</p><p><strong>Key Highlights</strong>:</p><p>• Don Berry shares the personal story of Jimmie Savage's troubled childhood and how it influenced his work and personality.<br>• Insights into Savage's pioneering role as the father of modern Bayesian statistics.<br>• Discussion on Savage's varied interests and collaborations with figures like Milton Friedman and John von Neumann.<br>• Don Berry recounts his academic experiences alongside Savage and his own journey into clinical trial design.<br>• Exploration of Savage's legacy through his students and his axiomatic approach to subjective probability.</p><p><strong>Quotes</strong>:</p><p>• "I think he's the father of modern Bayesian statistics. How can you argue about that?" – Don Berry<br>• "The world around you when you're with Savage is tingling with intellect." – Don Berry<br>• "We probably wouldn't exist... if it were not for him." – Don Berry</p>]]>
      </content:encoded>
      <pubDate>Mon, 07 Apr 2025 05:43:12 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/9a04de86/4c4c513b.mp3" length="37463738" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>2339</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim," Don Berry shares the life and work of Jimmie Savage, his advisor and a legendary figure in Bayesian statistics. Hosted by Scott Berry, the discussion reveals the personal and professional experiences that shaped Savage's groundbreaking contributions. Discover the intricacies of Savage's influence on statistical thought and his profound legacy, from his tragic childhood to a lasting effect on Bayesian statistics and scientific thought.</p><p><strong>Key Highlights</strong>:</p><p>• Don Berry shares the personal story of Jimmie Savage's troubled childhood and how it influenced his work and personality.<br>• Insights into Savage's pioneering role as the father of modern Bayesian statistics.<br>• Discussion on Savage's varied interests and collaborations with figures like Milton Friedman and John von Neumann.<br>• Don Berry recounts his academic experiences alongside Savage and his own journey into clinical trial design.<br>• Exploration of Savage's legacy through his students and his axiomatic approach to subjective probability.</p><p><strong>Quotes</strong>:</p><p>• "I think he's the father of modern Bayesian statistics. How can you argue about that?" – Don Berry<br>• "The world around you when you're with Savage is tingling with intellect." – Don Berry<br>• "We probably wouldn't exist... if it were not for him." – Don Berry</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/sMT7m6cLBxpBe68Y93f4thTn4HeRQul45USdMF7yR40/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZTBl/ZjBlMmFkZjU3NjYx/OTI0MmYzY2E0NWQ0/OTIyMC5wbmc.jpg">Don Berry</podcast:person>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9a04de86/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9a04de86/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>The Art and Slog of Innovating</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>The Art and Slog of Innovating</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">30438439-30fd-4a2e-8bf7-b4576360b0f4</guid>
      <link>https://share.transistor.fm/s/d4f7bc1e</link>
      <description>
        <![CDATA[<p>In this compelling episode of "In the Interim," Dr. Mike Krams, a seasoned expert in clinical trials and drug development, joins us to discuss the art and slog of innovation in pharmaceutical companies. With over 30 years in the field, Dr. Krams shares insights on leveraging Bayesian statistics and innovative designs to transform development approaches. The conversation explores disruptive approaches to drug development, the importance of having champions for change, and the future of innovation in clinical trials. Mike highlights the necessity of integrating strategic decision-making with statistical expertise to enhance the efficiency and effectiveness of clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Discussion on the ASTIN Stroke Trial, a groundbreaking experience with Bayesian methodology in drug trials.</li><li>Examination of how adaptive designs can lead to more efficient learning processes in clinical research.</li><li>Exploration of the cultural and strategic challenges of bringing innovative trial designs to conservative pharmaceutical environments.</li><li>Insight into the vital role of having internal champions to advocate for change and innovation.</li><li>The importance of integrating strategic thinking with statistical expertise to drive innovation forward.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Respect is earned, but very good communication skills are a necessary condition for implementing innovation." – Mike Krams</li><li>"Innovation for innovation's sake is not the goal; it's about making better decisions for future patients." – Mike Krams</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this compelling episode of "In the Interim," Dr. Mike Krams, a seasoned expert in clinical trials and drug development, joins us to discuss the art and slog of innovation in pharmaceutical companies. With over 30 years in the field, Dr. Krams shares insights on leveraging Bayesian statistics and innovative designs to transform development approaches. The conversation explores disruptive approaches to drug development, the importance of having champions for change, and the future of innovation in clinical trials. Mike highlights the necessity of integrating strategic decision-making with statistical expertise to enhance the efficiency and effectiveness of clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Discussion on the ASTIN Stroke Trial, a groundbreaking experience with Bayesian methodology in drug trials.</li><li>Examination of how adaptive designs can lead to more efficient learning processes in clinical research.</li><li>Exploration of the cultural and strategic challenges of bringing innovative trial designs to conservative pharmaceutical environments.</li><li>Insight into the vital role of having internal champions to advocate for change and innovation.</li><li>The importance of integrating strategic thinking with statistical expertise to drive innovation forward.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Respect is earned, but very good communication skills are a necessary condition for implementing innovation." – Mike Krams</li><li>"Innovation for innovation's sake is not the goal; it's about making better decisions for future patients." – Mike Krams</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 31 Mar 2025 03:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/d4f7bc1e/240149c2.mp3" length="26836295" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1675</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this compelling episode of "In the Interim," Dr. Mike Krams, a seasoned expert in clinical trials and drug development, joins us to discuss the art and slog of innovation in pharmaceutical companies. With over 30 years in the field, Dr. Krams shares insights on leveraging Bayesian statistics and innovative designs to transform development approaches. The conversation explores disruptive approaches to drug development, the importance of having champions for change, and the future of innovation in clinical trials. Mike highlights the necessity of integrating strategic decision-making with statistical expertise to enhance the efficiency and effectiveness of clinical trials.</p><p><strong>Key Highlights</strong>:</p><ul><li>Discussion on the ASTIN Stroke Trial, a groundbreaking experience with Bayesian methodology in drug trials.</li><li>Examination of how adaptive designs can lead to more efficient learning processes in clinical research.</li><li>Exploration of the cultural and strategic challenges of bringing innovative trial designs to conservative pharmaceutical environments.</li><li>Insight into the vital role of having internal champions to advocate for change and innovation.</li><li>The importance of integrating strategic thinking with statistical expertise to drive innovation forward.</li></ul><p><strong>Quotes</strong>:</p><ul><li>"Respect is earned, but very good communication skills are a necessary condition for implementing innovation." – Mike Krams</li><li>"Innovation for innovation's sake is not the goal; it's about making better decisions for future patients." – Mike Krams</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d4f7bc1e/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/d4f7bc1e/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>Religion, Politics, and Ordinal Outcomes</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Religion, Politics, and Ordinal Outcomes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f5505bc8-65fe-4403-a308-73050032e7c5</guid>
      <link>https://share.transistor.fm/s/e0f354ae</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim," Dr. Scott Berry discusses the vital topic of ordinal outcomes in clinical trials—a subject as controversial as politics and religion at the dinner table. Using historical examples like James Lind's 1747 scurvy trial and Austin Bradford Hill’s pioneering randomized trial, the episode explores the complexities and ongoing debates about analyzing ordinal endpoints. Berry challenges conventional analysis methods and advocates for more refined, explicit approaches, delivering valuable insights for statisticians, clinicians, and anyone involved in clinical trial designs.</p><p>Key Highlights</p><p>• Examination of the historical context of ordinal outcomes, starting with James Lind's 1747 scurvy trial.<br>• Discussion of the first randomized human clinical trial by Austin Bradford Hill and its implications for ordinal endpoint analysis.<br>• Exploration of the Modified Rankin Score as a current example of ordinal outcomes in stroke trials.<br>• Critique of conventional methods like dichotomization and proportional odds models for analyzing ordinal data.<br>• Argument for adopting utility-based approaches in clinical trial analysis for meaningful outcomes.</p><p>Quotes</p><p>• "Almost every endpoint is ordinal. So you can't escape this." – Dr. Scott Berry<br>• "My claim is nobody has that weight. But yet, it's very commonly done." – Dr. Scott Berry<br>• "Hiding behind ad hoc ways to do this, I think just leads us to bad places." – Dr. Scott Berry</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim," Dr. Scott Berry discusses the vital topic of ordinal outcomes in clinical trials—a subject as controversial as politics and religion at the dinner table. Using historical examples like James Lind's 1747 scurvy trial and Austin Bradford Hill’s pioneering randomized trial, the episode explores the complexities and ongoing debates about analyzing ordinal endpoints. Berry challenges conventional analysis methods and advocates for more refined, explicit approaches, delivering valuable insights for statisticians, clinicians, and anyone involved in clinical trial designs.</p><p>Key Highlights</p><p>• Examination of the historical context of ordinal outcomes, starting with James Lind's 1747 scurvy trial.<br>• Discussion of the first randomized human clinical trial by Austin Bradford Hill and its implications for ordinal endpoint analysis.<br>• Exploration of the Modified Rankin Score as a current example of ordinal outcomes in stroke trials.<br>• Critique of conventional methods like dichotomization and proportional odds models for analyzing ordinal data.<br>• Argument for adopting utility-based approaches in clinical trial analysis for meaningful outcomes.</p><p>Quotes</p><p>• "Almost every endpoint is ordinal. So you can't escape this." – Dr. Scott Berry<br>• "My claim is nobody has that weight. But yet, it's very commonly done." – Dr. Scott Berry<br>• "Hiding behind ad hoc ways to do this, I think just leads us to bad places." – Dr. Scott Berry</p>]]>
      </content:encoded>
      <pubDate>Mon, 24 Mar 2025 03:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/e0f354ae/ee140835.mp3" length="29127975" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1818</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim," Dr. Scott Berry discusses the vital topic of ordinal outcomes in clinical trials—a subject as controversial as politics and religion at the dinner table. Using historical examples like James Lind's 1747 scurvy trial and Austin Bradford Hill’s pioneering randomized trial, the episode explores the complexities and ongoing debates about analyzing ordinal endpoints. Berry challenges conventional analysis methods and advocates for more refined, explicit approaches, delivering valuable insights for statisticians, clinicians, and anyone involved in clinical trial designs.</p><p>Key Highlights</p><p>• Examination of the historical context of ordinal outcomes, starting with James Lind's 1747 scurvy trial.<br>• Discussion of the first randomized human clinical trial by Austin Bradford Hill and its implications for ordinal endpoint analysis.<br>• Exploration of the Modified Rankin Score as a current example of ordinal outcomes in stroke trials.<br>• Critique of conventional methods like dichotomization and proportional odds models for analyzing ordinal data.<br>• Argument for adopting utility-based approaches in clinical trial analysis for meaningful outcomes.</p><p>Quotes</p><p>• "Almost every endpoint is ordinal. So you can't escape this." – Dr. Scott Berry<br>• "My claim is nobody has that weight. But yet, it's very commonly done." – Dr. Scott Berry<br>• "Hiding behind ad hoc ways to do this, I think just leads us to bad places." – Dr. Scott Berry</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e0f354ae/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e0f354ae/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>HEALEY ALS Platform Trial with Dr. Merit Cudkowicz and Dr. Melanie Quintana</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>HEALEY ALS Platform Trial with Dr. Merit Cudkowicz and Dr. Melanie Quintana</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b8ff4b49-1e87-413c-8d58-d96ffc926624</guid>
      <link>https://share.transistor.fm/s/c33a28f9</link>
      <description>
        <![CDATA[<p>In this episode of the podcast, we sit down with Dr. Merit Cudkowicz and Dr. Melanie Quintana to discuss the inception and execution of the HEALEY ALS Platform Trial, a revolutionary approach designed for efficiency and impactful data collection. With insights from both medical and statistical perspectives, this episode offers a comprehensive understanding of the trial's structure and outcomes, shedding light on its potential to reshape neuro-therapeutics research.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Merit Cudkowicz discusses the motivation behind adopting master platform trials for ALS and the collaboration that brought it to life.</li><li>Dr. Melanie Quintana explains the statistical design of the trial, emphasizing the sharing of control groups and Bayesian methods for efficiency.</li><li>Insights into the FDA's enthusiastic support and the iterative process to align on innovative statistical approaches.</li><li>The dual roles of the trial: significant patient engagement and industry collaboration as facilitating factors for successful trial implementation.</li><li>Discussion on the future adaptation of trial designs based on collected data and emerging biomarkers.</li></ul><p><strong>Quotes</strong></p><ul><li>"There was such an energy about your group... everybody had really done their homework." – Dr. Melanie Quintana</li><li>"ALS is a very complex disorder, and we actually did learn a lot." – Dr. Merit Cudkowicz</li><li>"We're adapting with learnings." – Dr. Merit Cudkowicz</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of the podcast, we sit down with Dr. Merit Cudkowicz and Dr. Melanie Quintana to discuss the inception and execution of the HEALEY ALS Platform Trial, a revolutionary approach designed for efficiency and impactful data collection. With insights from both medical and statistical perspectives, this episode offers a comprehensive understanding of the trial's structure and outcomes, shedding light on its potential to reshape neuro-therapeutics research.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Merit Cudkowicz discusses the motivation behind adopting master platform trials for ALS and the collaboration that brought it to life.</li><li>Dr. Melanie Quintana explains the statistical design of the trial, emphasizing the sharing of control groups and Bayesian methods for efficiency.</li><li>Insights into the FDA's enthusiastic support and the iterative process to align on innovative statistical approaches.</li><li>The dual roles of the trial: significant patient engagement and industry collaboration as facilitating factors for successful trial implementation.</li><li>Discussion on the future adaptation of trial designs based on collected data and emerging biomarkers.</li></ul><p><strong>Quotes</strong></p><ul><li>"There was such an energy about your group... everybody had really done their homework." – Dr. Melanie Quintana</li><li>"ALS is a very complex disorder, and we actually did learn a lot." – Dr. Merit Cudkowicz</li><li>"We're adapting with learnings." – Dr. Merit Cudkowicz</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 17 Mar 2025 04:00:00 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/c33a28f9/7273713f.mp3" length="26715132" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1667</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of the podcast, we sit down with Dr. Merit Cudkowicz and Dr. Melanie Quintana to discuss the inception and execution of the HEALEY ALS Platform Trial, a revolutionary approach designed for efficiency and impactful data collection. With insights from both medical and statistical perspectives, this episode offers a comprehensive understanding of the trial's structure and outcomes, shedding light on its potential to reshape neuro-therapeutics research.</p><p><strong>Key Highlights</strong></p><ul><li>Dr. Merit Cudkowicz discusses the motivation behind adopting master platform trials for ALS and the collaboration that brought it to life.</li><li>Dr. Melanie Quintana explains the statistical design of the trial, emphasizing the sharing of control groups and Bayesian methods for efficiency.</li><li>Insights into the FDA's enthusiastic support and the iterative process to align on innovative statistical approaches.</li><li>The dual roles of the trial: significant patient engagement and industry collaboration as facilitating factors for successful trial implementation.</li><li>Discussion on the future adaptation of trial designs based on collected data and emerging biomarkers.</li></ul><p><strong>Quotes</strong></p><ul><li>"There was such an energy about your group... everybody had really done their homework." – Dr. Melanie Quintana</li><li>"ALS is a very complex disorder, and we actually did learn a lot." – Dr. Merit Cudkowicz</li><li>"We're adapting with learnings." – Dr. Merit Cudkowicz</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c33a28f9/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/c33a28f9/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>FACTS 7.1 Release with Tom Parke</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>FACTS 7.1 Release with Tom Parke</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4b5b96d9-00fd-41ce-b5b1-b1f66e9713a2</guid>
      <link>https://share.transistor.fm/s/41fba886</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim," Berry's Director of Software, Tom Parke, takes us into the fascinating realm of clinical trial simulation. With Tom joining from the UK, we discuss the intricacies and updates surrounding FACTS, sophisticated clinical trial simulation software. Learn about its role in designing adaptive trials and its latest enhancements with the release of FACTS 7.1. Discover the balance between expanding features and maintaining user simplicity, ensuring that both Berry Consultants and external users can innovate effectively.</p><p>Key Highlights:<br>• Introduction of FACTS 7.1, emphasizing enhancements in code quality and simulation capabilities.<br>• Discussion on the history and evolution of clinical trial simulators at Berry Consultants.<br>• Exploration of FACTS' new features, such as Bayesian predictive probabilities and phase one dose escalation improvements.<br>• Insight into the challenges of creating user-friendly software with extensive features for trial simulation.<br>• Plans for future developments, including wizards and enhanced design comparison tools.</p><p>Quotes:<br>• "FACTS turns trial design and statistics into a game—where you can explore and try different options." – Tom Parke<br>• "You're creating software that allows exploring designs you can't calculate an answer to." – Tom Parke<br>• "It's all about making sure the designers have the right tools to efficiently explore different trial designs." – Scott Berry</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim," Berry's Director of Software, Tom Parke, takes us into the fascinating realm of clinical trial simulation. With Tom joining from the UK, we discuss the intricacies and updates surrounding FACTS, sophisticated clinical trial simulation software. Learn about its role in designing adaptive trials and its latest enhancements with the release of FACTS 7.1. Discover the balance between expanding features and maintaining user simplicity, ensuring that both Berry Consultants and external users can innovate effectively.</p><p>Key Highlights:<br>• Introduction of FACTS 7.1, emphasizing enhancements in code quality and simulation capabilities.<br>• Discussion on the history and evolution of clinical trial simulators at Berry Consultants.<br>• Exploration of FACTS' new features, such as Bayesian predictive probabilities and phase one dose escalation improvements.<br>• Insight into the challenges of creating user-friendly software with extensive features for trial simulation.<br>• Plans for future developments, including wizards and enhanced design comparison tools.</p><p>Quotes:<br>• "FACTS turns trial design and statistics into a game—where you can explore and try different options." – Tom Parke<br>• "You're creating software that allows exploring designs you can't calculate an answer to." – Tom Parke<br>• "It's all about making sure the designers have the right tools to efficiently explore different trial designs." – Scott Berry</p>]]>
      </content:encoded>
      <pubDate>Mon, 10 Mar 2025 08:04:58 -0500</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/41fba886/c067ec36.mp3" length="26480196" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1653</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim," Berry's Director of Software, Tom Parke, takes us into the fascinating realm of clinical trial simulation. With Tom joining from the UK, we discuss the intricacies and updates surrounding FACTS, sophisticated clinical trial simulation software. Learn about its significance in designing adaptive trials and its latest enhancements with the release of FACTS 7.1. Discover the balance between expanding features and maintaining user simplicity, ensuring that both Berry Consultants and external users can innovate effectively.</p><p>Key Highlights:<br>• Introduction of FACTS 7.1, emphasizing enhancements in code quality and simulation capabilities.<br>• Discussion on the history and evolution of clinical trial simulators at Berry Consultants.<br>• Exploration of FACTS' new features, such as Bayesian predictive probabilities and phase one dose escalation improvements.<br>• Insight into the challenges of creating user-friendly software with extensive features for trial simulation.<br>• Plans for future developments, including wizards and enhanced design comparison tools.</p><p>Quotes:<br>• "FACTS turns trial design and statistics into a game—where you can explore and try different options." – Tom Parke<br>• "You're creating software that allows exploring designs you can't calculate an answer to." – Tom Parke<br>• "It's all about making sure the designers have the right tools to efficiently explore different trial designs." – Scott Berry</p>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/41fba886/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/41fba886/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>When should you use adaptive design clinical trials?</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>When should you use adaptive design clinical trials?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e6ed7a7d-6d0a-488e-94d8-be336cb2ddf1</guid>
      <link>https://share.transistor.fm/s/f3b6068f</link>
      <description>
        <![CDATA[<p>In this episode of "In the Interim," we consider the nuances of adaptive design clinical trials with distinguished guests <a href="https://www.berryconsultants.com/team-members/scott-berry">Dr. Scott Berry</a> and <a href="https://www.berryconsultants.com/team-members/kert-viele-phd">Dr. Kert Viele</a> from <a href="https://www.berryconsultants.com/">Berry Consultants</a>. The conversation centers on the vital question: when should these adaptive designs be implemented? Listeners will gain invaluable insights into the mechanics of adaptive trials, the Bayesian approach, and scenarios where these designs prove most effective. Whether you're involved in clinical research or simply intrigued by the evolution of clinical trials, this episode enriches your understanding with expert perspectives and practical examples.</p><p><strong>Key Highlights:</strong></p><ul><li>Dr. Scott Berry and Dr. Kert Viele discuss the core principles and benefits of adaptive design clinical trials.</li><li>A distinction is drawn between adaptive and fixed trials, showcasing the flexibility and efficiency of adaptive methods.</li><li>The speakers explore common adaptations, including sample size modifications and response adaptive randomization.</li><li>Strategies to handle anticipated regret and buyer's remorse in trial design are thoroughly examined.</li><li>The episode provides practical advice on identifying suitable scenarios for adaptive trials, emphasizing the importance of timely information.</li></ul><p><strong>Quotes:</strong></p><ul><li>"The promise of an adaptive trial is creating prospective changes based on the accumulating data." – Dr. Scott Berry</li><li>"If I knew enough to perfectly design my trial, I wouldn't need to run my trial because I already know what the answers are." – Dr. Kert Viele</li><li>"Anticipated regret is one of the great answers to when you should adapt." – Dr. Scott Berry</li><li>"You always get a net gain from looking at the data." – Dr. Kert Viele</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "In the Interim," we consider the nuances of adaptive design clinical trials with distinguished guests <a href="https://www.berryconsultants.com/team-members/scott-berry">Dr. Scott Berry</a> and <a href="https://www.berryconsultants.com/team-members/kert-viele-phd">Dr. Kert Viele</a> from <a href="https://www.berryconsultants.com/">Berry Consultants</a>. The conversation centers on the vital question: when should these adaptive designs be implemented? Listeners will gain invaluable insights into the mechanics of adaptive trials, the Bayesian approach, and scenarios where these designs prove most effective. Whether you're involved in clinical research or simply intrigued by the evolution of clinical trials, this episode enriches your understanding with expert perspectives and practical examples.</p><p><strong>Key Highlights:</strong></p><ul><li>Dr. Scott Berry and Dr. Kert Viele discuss the core principles and benefits of adaptive design clinical trials.</li><li>A distinction is drawn between adaptive and fixed trials, showcasing the flexibility and efficiency of adaptive methods.</li><li>The speakers explore common adaptations, including sample size modifications and response adaptive randomization.</li><li>Strategies to handle anticipated regret and buyer's remorse in trial design are thoroughly examined.</li><li>The episode provides practical advice on identifying suitable scenarios for adaptive trials, emphasizing the importance of timely information.</li></ul><p><strong>Quotes:</strong></p><ul><li>"The promise of an adaptive trial is creating prospective changes based on the accumulating data." – Dr. Scott Berry</li><li>"If I knew enough to perfectly design my trial, I wouldn't need to run my trial because I already know what the answers are." – Dr. Kert Viele</li><li>"Anticipated regret is one of the great answers to when you should adapt." – Dr. Scott Berry</li><li>"You always get a net gain from looking at the data." – Dr. Kert Viele</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 03 Mar 2025 07:15:51 -0600</pubDate>
      <author>Berry</author>
      <enclosure url="https://media.transistor.fm/f3b6068f/acc11ef3.mp3" length="29780752" type="audio/mpeg"/>
      <itunes:author>Berry</itunes:author>
      <itunes:duration>1859</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "In the Interim," we consider the nuances of adaptive design clinical trials with distinguished guests <a href="https://www.berryconsultants.com/team-members/scott-berry">Dr. Scott Berry</a> and <a href="https://www.berryconsultants.com/team-members/kert-viele-phd">Dr. Kert Viele</a> from <a href="https://www.berryconsultants.com/">Berry Consultants</a>. The conversation centers on the vital question: when should these adaptive designs be implemented? Listeners will gain invaluable insights into the mechanics of adaptive trials, the Bayesian approach, and scenarios where these designs prove most effective. Whether you're involved in clinical research or simply intrigued by the evolution of clinical trials, this episode enriches your understanding with expert perspectives and practical examples.</p><p><strong>Key Highlights:</strong></p><ul><li>Dr. Scott Berry and Dr. Kert Viele discuss the core principles and benefits of adaptive design clinical trials.</li><li>A distinction is drawn between adaptive and fixed trials, showcasing the flexibility and efficiency of adaptive methods.</li><li>The speakers explore common adaptations, including sample size modifications and response adaptive randomization.</li><li>Strategies to handle anticipated regret and buyer's remorse in trial design are thoroughly examined.</li><li>The episode provides practical advice on identifying suitable scenarios for adaptive trials, emphasizing the importance of timely information.</li></ul><p><strong>Quotes:</strong></p><ul><li>"The promise of an adaptive trial is creating prospective changes based on the accumulating data." – Dr. Scott Berry</li><li>"If I knew enough to perfectly design my trial, I wouldn't need to run my trial because I already know what the answers are." – Dr. Kert Viele</li><li>"Anticipated regret is one of the great answers to when you should adapt." – Dr. Scott Berry</li><li>"You always get a net gain from looking at the data." – Dr. Kert Viele</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
    </item>
    <item>
      <title>The Story of Berry Consultants</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>The Story of Berry Consultants</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a1c15723-643b-49ad-9fea-666099006cd0</guid>
      <link>https://share.transistor.fm/s/4cf46f91</link>
      <description>
        <![CDATA[<p>In the inaugural episode of Berry's "In the Interim...," we sit down with the founders of Berry Consultants, Dr. Don Berry and Dr. Scott Berry. Celebrating their 25th anniversary as a company, they explore the pioneering journey of their firm, known for transforming the landscape of clinical trials with their adaptive and Bayesian methodologies. With stories from their early days to innovative projects on the horizon, this episode provides a fascinating look into how Berry Consultants is redefining clinical research and impacting global health.</p><p>Key Highlights</p><ul><li>The founding story of Berry Consultants and the inspiration behind their unique approach to clinical trials.</li><li>Challenges faced in pioneering adaptive trials and overcoming regulatory hurdles.</li><li>Key collaborations and the influence of FDA approvals in their adaptive trial designs.</li><li>The revolutionary impact of platform trials developed by Berry Consultants during the COVID pandemic.</li><li>Future innovations and aspirations for the next 25 years in clinical trial design.</li></ul><p>Quotes</p><ul><li>"We really were providing the service that they didn’t know they needed." – Dr. Don Berry</li><li>"Our methodology allows the client to be the biggest advocate for the direction they're going." – Dr. Scott Berry</li><li>"We want to change the world, one person at a time." – Dr. Don Berry</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the inaugural episode of Berry's "In the Interim...," we sit down with the founders of Berry Consultants, Dr. Don Berry and Dr. Scott Berry. Celebrating their 25th anniversary as a company, they explore the pioneering journey of their firm, known for transforming the landscape of clinical trials with their adaptive and Bayesian methodologies. With stories from their early days to innovative projects on the horizon, this episode provides a fascinating look into how Berry Consultants is redefining clinical research and impacting global health.</p><p>Key Highlights</p><ul><li>The founding story of Berry Consultants and the inspiration behind their unique approach to clinical trials.</li><li>Challenges faced in pioneering adaptive trials and overcoming regulatory hurdles.</li><li>Key collaborations and the influence of FDA approvals in their adaptive trial designs.</li><li>The revolutionary impact of platform trials developed by Berry Consultants during the COVID pandemic.</li><li>Future innovations and aspirations for the next 25 years in clinical trial design.</li></ul><p>Quotes</p><ul><li>"We really were providing the service that they didn’t know they needed." – Dr. Don Berry</li><li>"Our methodology allows the client to be the biggest advocate for the direction they're going." – Dr. Scott Berry</li><li>"We want to change the world, one person at a time." – Dr. Don Berry</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 18 Feb 2025 14:50:38 -0600</pubDate>
      <author>Berry Consultants</author>
      <enclosure url="https://media.transistor.fm/4cf46f91/8a1e98fa.mp3" length="35005307" type="audio/mpeg"/>
      <itunes:author>Berry Consultants</itunes:author>
      <itunes:duration>2186</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the inaugural episode of Berry's "In the Interim...," we sit down with the founders of Berry Consultants, Dr. Don Berry and Dr. Scott Berry. Celebrating their 25th anniversary as a company, they explore the pioneering journey of their firm, known for transforming the landscape of clinical trials with their adaptive and Bayesian methodologies. With stories from their early days to innovative projects on the horizon, this episode provides a fascinating look into how Berry Consultants is redefining clinical research and impacting global health.</p><p>Key Highlights</p><ul><li>The founding story of Berry Consultants and the inspiration behind their unique approach to clinical trials.</li><li>Challenges faced in pioneering adaptive trials and overcoming regulatory hurdles.</li><li>Key collaborations and the influence of FDA approvals in their adaptive trial designs.</li><li>The revolutionary impact of platform trials developed by Berry Consultants during the COVID pandemic.</li><li>Future innovations and aspirations for the next 25 years in clinical trial design.</li></ul><p>Quotes</p><ul><li>"We really were providing the service that they didn’t know they needed." – Dr. Don Berry</li><li>"Our methodology allows the client to be the biggest advocate for the direction they're going." – Dr. Scott Berry</li><li>"We want to change the world, one person at a time." – Dr. Don Berry</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>statistical science, clinical trials</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/sMT7m6cLBxpBe68Y93f4thTn4HeRQul45USdMF7yR40/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZTBl/ZjBlMmFkZjU3NjYx/OTI0MmYzY2E0NWQ0/OTIyMC5wbmc.jpg">Don Berry</podcast:person>
      <podcast:person role="Host" href="https://www.berryconsultants.com/" img="https://img.transistorcdn.com/D3ZU3jufor08z5PmBAIexD7RKnvumbHpcogC2R-EUbg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmRl/OTVkMzNmNTNkYjQz/MjI2MmZjMzk0Y2I0/NmE2MC5wbmc.jpg">Scott Berry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/4cf46f91/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/4cf46f91/transcript.json" type="application/json"/>
    </item>
  </channel>
</rss>
