In environments defined by high-stakes complexity, certainty is not just rare — it is often the most dangerous thing in the room. A landmark evaluation of clinical AI deployment published by Stanford and Harvard researchers in early 2026 revealed a precise and troubling vulnerability in the systems that have been most aggressively adopted in diagnostic medicine.
When faced with missing information or high ambiguity, exactly the conditions that define the most consequential decisions, AI systems did not acknowledge the limits of their knowledge. They committed, confidently, to a single path. And they committed to wrong answers at a rate that significantly exceeded the error rate of experienced clinicians working with the same incomplete data.
The clinical context is not an edge case that can be quarantined within healthcare. It is a revealing test environment for a failure mode that operates across every domain where decisions must be made under genuine uncertainty. Corporate strategy — where multiple constraint owners hold information that has not been fully disclosed, where the relevant data set is always incomplete, where the most important variables are often the ones no one has named yet — is precisely this kind of environment.
The organizations that are deploying AI as a strategic decision-support tool without understanding this limitation are not making their strategy process more rigorous. They are adding a layer of confident-sounding output to a process that still carries all of its prior uncertainties, now wrapped in the authority of algorithmic certainty.
What machines cannot do with ambiguity
Nassim Nicholas Taleb's work on Black Swan events established that the most consequential disruptions to complex systems are precisely the ones that lie outside the domain of historical data — events that a model trained on the past cannot anticipate because they have no precedent in the training set.
The organizational equivalent of the Black Swan is the constraint that has not been disclosed, the assumption that everyone has been treating as fixed but which is actually contested, the stakeholder position that exists but has not been expressed in any forum where it could be captured as data. These are not edge cases. They are the norm in complex multi-stakeholder strategic environments.
Human deliberation, conducted in the right conditions, has the capacity to surface this hidden information. A well-designed multi-stakeholder session creates the conditions under which a participant who knows something critical — a constraint, a risk, a conflicting priority — is more likely to name it than to keep it private. The structured surfacing of that hidden information is not a supplement to good analysis. It is the analysis. The information that cannot be extracted from any data set because it exists only as a private judgment in someone's mind is often the most strategically significant information in the room.
The function of friction in robust decisions
The discomfort that characterizes genuine multi-stakeholder deliberation — the challenge to an assumption, the dissenting voice that refuses to let a risk be glossed over, the constraint owner who names a condition that makes an apparently simple solution far more complicated — is not a failure of process design.
It is the process design working correctly. Paul Schoemaker and Philip Tetlock's research into strategic risk assessment established that human imagination and what they called moral friction, the willingness to engage genuinely with uncomfortable scenarios rather than dismissing them as too unlikely to warrant serious attention, are the irreplaceable elements in genuine risk identification. These capacities cannot be replicated by an algorithm trained to produce outputs efficiently.
The strategies that hold under pressure, the ones that survive their first significant contact with a reality that departs from the plan's assumptions, are the strategies whose assumptions have been genuinely stress-tested by informed humans who held different views about which risks were real. The pre-mortem, the devil's advocate, the structured red team: these techniques exist precisely because their function cannot be performed by a process in which everyone agrees with the analysis.
They exist to create the human friction that identifies the failure modes before execution encounters them. AI can surface the option space and stress-test against historical patterns. It cannot perform the moral imagination required to ask: what is the failure scenario that our assumptions have made invisible to us?
Designing for productive uncertainty
The organizations managing complex strategy most effectively in high-ambiguity environments are not the ones with the most sophisticated analytical infrastructure. They are the ones that have designed their decision processes to acknowledge and work with uncertainty rather than to paper over it with confident outputs. Scenario-based strategy — designing for coherence across a range of possible futures rather than optimizing for the single most likely one — is not a concession to analytical weakness. It is an accurate response to the structure of complex environments where the most important variables cannot be known in advance.
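The distinction between optimizing for the single most likely future and designing for coherence across several can be made concrete with a toy decision-theory comparison. In the sketch below, the options, scenarios, payoffs, and probabilities are entirely hypothetical, and minimax regret stands in as one conventional proxy for robustness; it illustrates the idea rather than prescribing a method.

```python
# Toy comparison: optimizing for the single most likely future (expected value)
# versus designing for coherence across futures (minimax regret).
# All options, scenarios, payoffs, and probabilities below are hypothetical.

payoffs = {
    "expand_aggressively": {"boom": 12, "stagnation": -6, "regulation_shock": -8},
    "expand_cautiously":   {"boom": 6,  "stagnation": 1,  "regulation_shock": -2},
    "hold_and_hedge":      {"boom": 3,  "stagnation": 2,  "regulation_shock": 1},
}
likelihoods = {"boom": 0.6, "stagnation": 0.3, "regulation_shock": 0.1}

def expected_value(option):
    # Probability-weighted average payoff: rewards betting on the likely future.
    return sum(payoffs[option][s] * p for s, p in likelihoods.items())

def max_regret(option):
    # Regret in a scenario = best achievable payoff there minus this option's payoff.
    # An option is robust when its worst regret across all scenarios is small.
    return max(
        max(payoffs[o][s] for o in payoffs) - payoffs[option][s]
        for s in likelihoods
    )

best_by_ev  = max(payoffs, key=expected_value)  # "expand_aggressively" (EV = 4.6)
most_robust = min(payoffs, key=max_regret)      # "expand_cautiously" (worst regret = 6)
print(f"Most-likely-future pick: {best_by_ev}; coherent-across-futures pick: {most_robust}")
```

The arithmetic is the trivial part. Everything that matters sits in the table itself, and the table can only be filled by people willing to name scenarios like the regulation shock, the constraint that has not been disclosed, and the assumption that turns out to be quietly negotiable.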
That design is human work. It requires the people who understand the system's constraints to be in the room, naming the conditions under which each strategic option would succeed or fail. It requires a facilitation structure that makes it safe to name the uncomfortable scenario rather than the comfortable one. It requires the friction of genuinely different views about which risks are real.
The algorithm cannot provide this. The algorithm can tell you what the data shows. The data does not include the constraint that has not yet been disclosed, the risk that has not yet been named, or the assumption whose incorrectness will only become visible in execution. The human deliberation has to go there. It is the only process that can.
Frequently Asked Questions
Why is AI unreliable in high-ambiguity strategic environments?
AI systems are trained to produce outputs — to resolve ambiguity rather than hold it. In high-stakes strategic environments, the missing information is often the most important information: the constraint that has not been surfaced, the stakeholder position that has not been disclosed, the assumption everyone has treated as fixed but which is actually negotiable. Human deliberation has the capacity to surface that missing information. Algorithmic processing does not.
What role does human friction play in producing robust strategic decisions?
Human friction, the productive discomfort of having one's assumptions challenged and of being required to defend a position against an informed dissenting view, is the mechanism through which brittle strategies become robust ones. Strategies that have not been through genuine deliberative friction are strategies whose failure modes have not been discovered in the room; those failure modes are discovered in execution instead.
How should organizations structure decisions when data is incomplete?
The most robust approach is scenario-based: explicitly designing the strategy to be coherent across multiple possible futures rather than optimized for the single most likely one. This requires surfacing the assumptions whose failure would most significantly damage the strategy, stress-testing those assumptions against the range of realistic outcomes, and building explicit contingency commitments before the primary plan launches. This is human work that requires the imagination and deliberative friction that no algorithm can replicate.