The FDA Says It’s “Open to Bayesian Statistics.”
What that actually means... and why the messaging matters
On January 13, 2026, Dr. Marty Makary, Commissioner of the United States Food and Drug Administration (FDA), posted a video on X (formerly Twitter) announcing that the FDA “is open to Bayesian statistics” and will release new guidance encouraging their use in clinical trial design and analysis.
What Makary said:
“The FDA is open to Bayesian statistics. We are putting out new guidance to encourage the use of Bayesian statistics in clinical trial design and the readout of results. Now, if you’re not familiar with Bayesian statistics, it is a leap forward beyond the frequentist model of analyzing data. And it has many potential uses. For example, it can help in clinical trial design, it can help identify the optimal dose of a drug, and it can be used to extrapolate to pediatric populations, which, as you know, are often a forgotten population when drugs are developed. And, for example, if you have a small clinical Phase II trial, that data can be informative to the Bayesian analysis of a Phase III clinical trial. So it is a very big step in the statistical mathematical community. We want companies and sponsors to benefit from the power of Bayesian statistics. So it’s an exciting day, and more to come.” - Dr. Marty Makary, FDA Commissioner
At first glance, this sounds like a major scientific breakthrough. Or, at the very least, a dramatic regulatory shift.
Makary described Bayesian statistics as a “leap forward” and a “very big step,” framing the announcement as an effort to help companies design better trials, optimize dosing, include pediatric populations, and move drugs through development more efficiently.
But Bayesian statistics isn’t a new approach. And it was already allowed. Bayesian methods have appeared in drug development for years. The FDA first released guidance on the use of Bayesian statistics in medical device trials in 2010.
What is this announcement really doing? Who is it for? What are the risks beneath the optimism?
Although the video was posted publicly, the language makes clear that it wasn’t written for the public. It was written for biostatisticians, regulatory affairs teams, pharmaceutical executives, academic trial designers…
Terms like Bayesian statistics, frequentist models, Phase II and Phase III trials, and extrapolation to pediatric populations are generally meaningful only to people already knowledgeable about drug development. For everyone else, the message is opaque, technical, and… boring.
Intentionally so.
The FDA often communicates this way on methodological issues, talking over rather than to the public, partly because the public can’t directly act on them. But there’s another strategic advantage: dense, jargon-heavy messaging substantially reduces the risk of political soundbites, social media outrage cycles, and journalistic oversimplification. Plain-language explanations invite controversy; the messaging used here discourages it. And it’s paired with hype (“leap forward”, “a very big step”, “exciting day”) and moral framing (this will help children) that leaves non-experts thinking “I don’t know what any of this means, but it sounds great!”.
What the FDA is (and isn’t) announcing
“The FDA is open to Bayesian statistics. We are putting out new guidance to encourage the use of Bayesian statistics in clinical trial design and the readout of results.”
What’s new here is not the methods but the direct encouragement to use these methods and detailed guidance for doing so.
Guidance matters because it tells investigators how to use a method in a way that regulators are likely to accept, reducing regulatory risk. It says, “You can bring us Bayesian designs, and we won’t treat them as suspicious.” That type of change and messaging is often sufficient to change industry behavior.
“Now, if you’re not familiar with Bayesian statistics, it is a leap forward beyond the frequentist model of analyzing data.”
In simple terms, the frequentist model assumes you start from zero and asks
“If this treatment actually does nothing, how surprising would these results be?”
The Bayesian approach starts from the fact that we already know some things and instead asks:
“Given what we already know and what this new data shows, how likely is it that this treatment actually works?”
The frequentist method tries to avoid assumptions by focusing on how strange the data looks under a no-effect scenario.
The Bayesian approach openly combines prior knowledge with new evidence to estimate what is most likely true now.
It’s the difference between judging a movie by one review versus looking at that review alongside the trailer, the director’s past work, and what other people are saying, then updating your opinion as more reviews come in. While the Bayesian approach does align better with how scientists think, recognizing that knowledge accumulates and doesn’t ‘reset’ at every stage of investigation, the flexibility the approach introduces is also where the risks increase.
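The contrast between the two questions can be sketched with a toy example. Everything here is hypothetical (a made-up trial of 20 patients with a 50% null response rate), using a simple conjugate beta-binomial update; it illustrates the two ways of asking, not any specific FDA method.

```python
import math

# Hypothetical trial: 14 responders out of 20 patients; null response rate 50%.
successes, n, p0 = 14, 20, 0.5

# Frequentist question: if the drug does nothing, how surprising is this result?
# One-sided exact binomial p-value: P(X >= 14 | p = 0.5)
p_value = sum(
    math.comb(n, k) * p0**k * (1 - p0) ** (n - k)
    for k in range(successes, n + 1)
)

# Bayesian question: given a prior belief and this data, what's the likely response rate?
# Beta(a, b) prior + binomial data -> Beta(a + successes, b + failures) posterior.
a_prior, b_prior = 1, 1                      # flat prior: no strong belief either way
a_post = a_prior + successes
b_post = b_prior + (n - successes)
posterior_mean = a_post / (a_post + b_post)  # updated estimate of the response rate

print(f"p-value if the drug does nothing: {p_value:.3f}")        # ~0.058
print(f"posterior mean response rate:     {posterior_mean:.3f}")  # ~0.682
```

Note that the two numbers answer different questions: the p-value measures surprise under a no-effect assumption, while the posterior mean is a direct estimate of the response rate that already folds in the prior.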
The hidden risks
Bayesian methods can incorporate bias because the analyses depend on assumptions that are made before the new data are collected.
These assumptions are called priors. Priors can be overly optimistic (especially if based on weak Phase II data). They can also be selected in ways that subtly favor success. Two teams can analyze the same data and reach different conclusions depending on priors. So, the flexibility of the Bayesian approach can make results look stronger without anyone falsifying data. Bad actors don’t have to cheat, just frame.
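The “two teams, same data” problem is easy to demonstrate with invented numbers: both teams see the same 52/100 responders, but a skeptical prior and an optimistic prior pull the posterior in opposite directions. The priors below are hypothetical choices for illustration only.

```python
# Same hypothetical trial data, two different priors.
trial_successes, trial_n = 52, 100   # 52% observed response rate

def posterior_mean(a_prior, b_prior, successes, n):
    """Beta-binomial conjugate update; returns the posterior mean response rate."""
    a = a_prior + successes
    b = b_prior + (n - successes)
    return a / (a + b)

# Team A: skeptical prior centered near 40% -- Beta(8, 12)
skeptical = posterior_mean(8, 12, trial_successes, trial_n)
# Team B: optimistic prior from a rosy early readout, centered near 70% -- Beta(14, 6)
optimistic = posterior_mean(14, 6, trial_successes, trial_n)

print(f"skeptical posterior mean:  {skeptical:.3f}")   # pulled below the raw 52%
print(f"optimistic posterior mean: {optimistic:.3f}")  # pulled above the raw 52%
```

Neither team touched the data; they only chose different starting beliefs, and the conclusions moved accordingly.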
Priors can also be misleading. If a prior draws on data from past trials, real-world evidence, or related drugs, that data may be outdated and fail to reflect new standards of care, different populations, or updated disease definitions. Poorly matched, biased, or stale data can contaminate new conclusions, and it might not be obvious that it’s happening.
Bayesian models also tend to be complex, incorporating hierarchical structures, simulations, and sensitivity analyses that many clinicians and reviewers (and certainly the public) cannot independently interrogate. This creates a subtle trust-the-model problem, where we’re looking at outputs and being expected to trust what a model is giving us without really understanding why it’s giving that output or how it got there.
Trial design
Makary lists several applications of Bayesian statistics, each with benefits and tradeoffs.
it can help in clinical trial design
Bayesian statistics can make it possible to design trials that are smaller and adaptive. You can adjust mid-trial according to accumulating evidence, potentially making trials cheaper, shorter, and potentially less risky as patients may be less likely to be exposed to ineffective doses.
But adaptive trials can blur the line between testing a hypothesis and optimizing outcomes. You might get conclusions like “this drug works somewhere under some conditions” rather than a clean yes or no, which complicates labeling and real-world use.
Drug dosing
it can help identify the optimal dose of a drug
Bayesian methods allow investigators to continuously update beliefs about dose–response relationships, which is especially useful when dose effects aren’t clean or linear. This matters because bad dosing can result in clinical trial failure for drugs that might otherwise work, and traditional trial designs often finalize doses too early.
BUT. There’s a tradeoff: bringing in priors from early studies can produce a trial whose conclusions are correct within the context of the trial itself but wrong for the drug overall. Bayesian methods can smooth out differences in the dose-response relationship across subpopulations and make data appear more precise than they are, essentially hiding uncertainty rather than resolving it. And optimizing for shorter trials can miss rare adverse events, long-term side effects, cumulative toxicity, and population-specific differences in dose response.
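The smoothing effect can be illustrated with a deliberately crude sketch. The dose data and the fixed shrinkage weight below are invented; a real hierarchical Bayesian model would estimate the pooling strength from the data, but the direction of the effect is the same: per-dose estimates get pulled toward the overall mean, shrinking the apparent differences between doses.

```python
# Hypothetical responders/patients at three dose levels.
doses = {"low": (4, 20), "mid": (12, 20), "high": (18, 20)}

# No pooling: raw response rate at each dose.
raw = {d: s / n for d, (s, n) in doses.items()}

# Partial pooling toward the overall mean -- a crude stand-in for a
# hierarchical Bayesian model that "borrows strength" across doses.
grand = sum(s for s, _ in doses.values()) / sum(n for _, n in doses.values())
w = 0.5  # shrinkage weight; invented here, estimated from data in a real model
pooled = {d: w * grand + (1 - w) * r for d, r in raw.items()}

for d in doses:
    print(f"{d:>4}: raw {raw[d]:.2f} -> pooled {pooled[d]:.2f}")
```

The spread between the low and high doses is smaller after pooling than in the raw data, which is exactly the kind of flattening that can hide a real subpopulation or dose difference.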
Pediatric patient application
it can be used to extrapolate to pediatric populations, which, as you know, are often a forgotten population when drugs are developed
This is ethically powerful and politically defensible. This example was clearly chosen because of the moral framing around it… who wouldn’t want to support something that helps children? Children are often excluded from trials. Bayesian methods do enable the use of adult data as a prior to inform decisions around treating children.
But children are not just small adults. They have different metabolism, immune responses, risks (e.g., developmental risks)... Over-reliance on adult priors risks underestimating developmental harms and missing age-specific effects.
Phase II/III trials
And, for example, if you have a small clinical Phase II trial, that data can be informative to the Bayesian analysis of a Phase III clinical trial.
This shift is downplayed in the messaging, but it’s important. Phase II and Phase III trial data are usually treated as (mostly) independent. With a Bayesian approach, earlier trials become formal inputs, not just informal context. It does make sense from a scientific perspective to use the knowledge we have in future studies.
However, the tradeoff is that weaknesses, biases, and chance findings from small Phase II trials can be amplified rather than corrected, especially when early studies are underpowered or overly optimistic. Once embedded as priors, these early data can artificially drive subsequent analyses toward success that doesn’t really exist in the real world. The Phase III is then no longer independent confirmation. So it increases efficiency, but also increases the risk that uncertainty is buried rather than resolved.
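A toy illustration of that borrowing effect, with invented numbers: a small, lucky Phase II is used wholesale as the prior for a mediocre Phase III. Real designs usually down-weight the borrowed data (e.g., via power priors), but the direction of the pull is the same.

```python
# Hypothetical numbers: a small, lucky Phase II and a mediocre Phase III.
ph2_succ, ph2_n = 10, 14   # ~71% response in a tiny Phase II
ph3_succ, ph3_n = 30, 75   # 40% response in the larger Phase III

# Phase III analyzed alone, flat Beta(1, 1) prior -> posterior mean response rate.
alone = (1 + ph3_succ) / (2 + ph3_n)

# Phase III with the Phase II data borrowed wholesale as the prior.
borrowed = (1 + ph2_succ + ph3_succ) / (2 + ph2_n + ph3_n)

print(f"Phase III alone:        {alone:.3f}")     # ~0.40
print(f"with Phase II as prior: {borrowed:.3f}")  # pulled upward by the lucky Phase II
```

Fourteen lucky patients shift the headline estimate of a 75-patient trial upward by several percentage points, and nothing in the final number flags that this happened.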
If used well, Bayesian statistics could result in faster and smarter trials. If not, it could introduce bias, reduced transparency, reduced trust… and result in the already-strong players in the pharmaceutical market continuing to ‘win’ more often. Which brings me to…
Who wins? (Big Pharma… mostly)
We want companies and sponsors to benefit from the power of Bayesian statistics.
This says, “Don’t be afraid to bring us Bayesian designs.” Encouraging innovation; reducing regulatory fear.
But Bayesian trials are expensive. They’re hard to design correctly.
Large pharma benefits the most, with their in-house statisticians, regulatory experience, and ability to run multiple sensitivity scenarios. Smaller biotech companies and academic trials may struggle without statistical expertise. So the approach favors larger organizations, which can reduce the diversity of innovation and consolidate power among the already-dominant players.
The political play
This guidance and messaging fits into the larger narrative that Makary has been constructing since his appointment, positioning him as a contrarian pushing back against ‘medical groupthink’ and a slow, bureaucratic FDA. It’s part of a pattern, which includes one-trial approval and plausible mechanism pathway announcements.
Makary frames these changes as positive by using language like ‘common sense,’ ‘flexibility,’ and ‘modernization’; rhetoric that is highly effective from a political comms perspective as it preemptively characterizes the opposition and any calls for caution as ‘anti-progress’ or part of the bureaucracy he is opposing.
The danger is decoupling the narrative from reality.
The administration can claim victory based on announced policies and guidelines regardless of how they perform in practice. They could feasibly say that they modernized the FDA to bring cures to patients faster and point to the Bayesian statistics guidance as evidence… and the public, hearing the positive framing of a ‘leap forward’, etc., may view this as a significant accomplishment. But Bayesian methods have existed for years. Their impact depends entirely on how they’re implemented. That nuance disappears in the headline.
So political credit could be claimed for advancing science through deregulation even if the changes introduce risks or do not yield the promised benefits. Positive media coverage generated by an initial, simplified announcement can become political reality without real-world impact.
So it’s important to look at the detailed, often more complex, reality underneath bold announcements. The consistent theme of Makary’s FDA reforms is speed and flexibility. That’s not inherently negative, but do they hold up under scrutiny? There’s substantial risk of lowering the evidentiary standards for drug approval.
There’s also the issue that while flexibility is being promoted for therapeutics, leaked internal discussions suggest stricter evidentiary standards for vaccines. So is that ‘common sense’ approach being applied consistently, or is it being applied selectively according to political views?
The politicization of science has become a massive risk. When major scientific policy shifts are aligned with specific political agendas (deregulation, in this case), it creates the perception (and, likely the reality) that science at the FDA is being politicized, which reduces public trust in the agency’s scientific integrity. Former FDA leaders have explicitly warned about this.
Where science ends and messaging begins
The FDA’s guidance on Bayesian statistics isn’t pseudoscience. It’s a genuine approach. It could lead to better evidence and faster patient access to treatment… if it is applied carefully.
But the messaging around this matters.
The announcement made by the FDA is more focused on advancing a political narrative of deregulation and efficiency than on the actual science itself. Bayesian statistics are powerful tools, but whether this ‘very big step’ improves actual patient outcomes or just accelerates approvals remains to be seen.
When science becomes political, it’s easy for political credit to be claimed without any accountability for actual outcomes.
And that’s the part that’s worth watching.

