When a strategic initiative stalls or collapses, the diagnosis is almost always the same: weak analysis, poor execution, insufficient buy-in. The organization commissions more research. It builds a better deck. It runs another offsite. The initiative fails again, and the same diagnosis follows. The research suggests something different is happening — not a failure of analytical capability, but a failure of deliberation architecture.

Process quality explains six times more variance in decision outcomes than the quantity or quality of analysis. Moving from a bottom-quartile to a top-quartile decision process improved ROI by 6.9 percentage points. (Dan Lovallo and Olivier Sibony, "The Case for Behavioral Strategy," McKinsey Quarterly, 2010)

This is a finding worth sitting with. It does not say analysis is irrelevant. It says that superb analysis, fed into an unstructured deliberation, is routinely overridden — by the loudest voice, the most senior person, or the most emotionally compelling narrative in the room. The analysis that was commissioned to inform the decision rarely survives contact with the cognitive and social dynamics of an unmanaged group.

The five families of bias that corrupt deliberation

Lovallo and Sibony classify the mechanisms of this failure into five families. Pattern-recognition biases — including confirmation bias — cause groups to notice evidence that supports the emerging consensus and discount evidence that challenges it. Action-oriented biases, particularly overconfidence and the planning fallacy, compress timelines and inflate confidence in outcomes. Stability biases anchor groups to last year's decisions and make a loss feel roughly twice as painful as an equivalent gain. Interest biases — silo thinking, misaligned incentives — ensure that participants protect their own organizational territory even when the strategic logic points elsewhere. And social biases cause individuals to defer to authority and conform to apparent consensus rather than surface the dissenting view they have privately held since the data was presented.

Each of these is active in any conventional strategy session. They do not cancel each other out. They compound. And no amount of additional analysis interrupts them, because they are not operating on the analytical content of the session — they are operating on the social and cognitive dynamics that surround it.

What a better process actually looks like

The McKinsey research is direct: only 28% of executives believe the quality of strategic decisions in their organizations is generally good. Sixty percent believe bad decisions are about as frequent as good ones. This is not a talent problem. Most of these organizations have exceptional analytical capacity. What they lack is structural intervention against the biases that prevent that capacity from translating into sound decisions.

The interventions are not complicated. A pre-mortem before the session — in which participants write down every reason the initiative could fail before deliberation begins — surfaces privately held risk assessments without requiring anyone to personally challenge authority. A formal red-team role gives dissent a structural channel rather than a social cost. Forced prioritization at the close of every session — where participants must rank options or allocate finite resources — counters the stability biases that let every existing program survive and every new priority win endorsement without displacing anything.


None of these interventions require more time than the sessions already consume. They require a different design. The question is whether the person running the session is optimizing for comfort — the smooth flow of agreement — or for the quality of the decision that emerges.

The diagnostic question most leaders never ask

Before your next high-stakes strategy session, the most valuable question is not "Do we have the right analysis?" It is: "What is the deliberation architecture of this session, and what structural mechanisms exist to surface dissent, force genuine trade-offs, and prevent the biases we know are present from determining the outcome?"

If the honest answer is "none," you already know why the last three strategic initiatives produced less than they should have. And you know what to change before the next one.