The corporate adoption of generative AI for strategic planning rests on a seductive premise: that the quality of a strategy is primarily a function of the quality of the analysis behind it. If that premise is correct, then tools that can synthesize vast bodies of evidence, generate structured options, and produce polished strategic documents faster than any human team should produce better strategies. The premise is wrong, and the research on why it is wrong has direct implications for every organization that is currently delegating its strategic thinking to a machine.
Research from Harvard Business School — published by Fabrizio Dell'Acqua and colleagues in 2023 and subsequently replicated across organizational settings — examined what generative AI actually does for different categories of workers. It found that the technology dramatically boosts the output of practitioners with existing domain expertise: people who can evaluate what the machine produces, identify its errors, and direct it toward better questions.
For workers without that foundational context, the performance picture is more complicated: AI can mask the absence of expertise by producing plausible-sounding outputs that the user cannot critically evaluate. They cannot distinguish a good AI output from a merely convincing one. They cannot identify what the machine has missed. They accept what they receive.
The belief problem that analysis cannot solve
Translate this into organizational strategy and the implications are stark. The strategy-execution gap is not primarily a document quality problem. It is a belief problem. The research on what actually predicts successful strategy execution is consistent across sectors and decades: teams that strongly believe in the direction they are executing are 3.4 times more likely to report successful implementation of strategic priorities.
Organizations with high levels of strategic belief — the condition in which people understand why the choices were made, what alternatives were rejected, and how the logic connects to the constraints they face in the field — are 2.5 times more likely to outperform industry peers.
That belief cannot be communicated downward from a document, however well-crafted. It is built in the process of building the strategy — in the deliberation over trade-offs, the arguments about which risks are acceptable, the moments when a key stakeholder's constraint forces the revision of an assumption that everyone else had been treating as fixed.
The people who participated in those conversations carry a different relationship to the resulting plan than the people who received it as a finished product. They defend it adaptively when reality diverges from the plan's assumptions. They know why the choices were made and can therefore make good local judgments when circumstances change. People who received the plan as a document cannot do this. They can comply. They cannot own.
The IKEA effect at organizational scale
Michael Norton, Daniel Mochon, and Dan Ariely documented in 2011 what they called the IKEA effect: the finding that people place disproportionately high value on products they have assembled themselves, relative to identical products they received finished. This effect is robust, replicates across contexts, and is driven not by the finished product's quality but by the labor invested in producing it. The act of building something creates a psychological relationship to the outcome that merely receiving the thing cannot replicate.
This dynamic operates at organizational scale. The executive team that spent three days wrestling with the question of which markets to exit, which capabilities to build, and which external dependencies to resolve before launch has built their strategy in the IKEA sense. They understand the logic behind each element because they argued about it. They can explain the reasoning to their own teams.
When the first major obstacle appears, they have the shared context to navigate around it without abandoning the architecture. The executive team that received a consultant-produced strategy document, or an AI-generated strategic framework, has received a flat-pack — but without having assembled it. The pieces are all there. The structural knowledge required to put them together under pressure is not.
What AI cannot replace
None of this argues against the use of AI as an input into strategic planning. The technology is genuinely useful for synthesizing research, generating an initial option space, stress-testing assumptions against available data, and producing documentation. These are real contributions to the analytical substrate of strategy. The error is treating analytical quality as the binding constraint when the binding constraint is almost always human: the absence of shared ownership, tested trade-offs, and committed accountability among the people who must execute.
The organizations most at risk from the current AI moment in strategy are not those that use the technology poorly. They are the ones that use it well — that produce sophisticated, coherent, beautifully documented strategic frameworks — and then discover that the document's quality has no bearing on the commitment of the people who must implement it. The strategy-execution gap does not close because the strategy got smarter. It closes when the people responsible for execution were present when the hard choices were made.
Frequently Asked Questions
Why does using AI to generate strategy create an execution problem?
Strategy-execution success depends not on the quality of the document but on the depth of belief the implementing team holds in the direction. When AI or external consultants generate the strategy, the people responsible for execution encounter it as a finished product rather than a decision they participated in building. They can understand it intellectually while remaining strategically indifferent to it. That indifference is the execution problem.
What is the expertise distance and why does it matter for strategy?
The expertise distance is the gap between the knowledge embedded in a strategic plan and the knowledge held by the people who must implement it. When plans are built by external experts or algorithms, that gap is large. Implementers lack the contextual understanding of why specific choices were made and what alternatives were rejected — understanding that is essential for adapting the plan intelligently when reality departs from assumptions.
What does research show about the relationship between involvement and execution commitment?
Teams that strongly believe in the strategy they are executing are 3.4 times more likely to report successful execution of strategic priorities. That belief cannot be communicated or mandated — it is built through the process of wrestling with trade-offs in real time. The IKEA effect, documented by Norton, Mochon, and Ariely, demonstrates that the act of labor produces a disproportionate valuation of the outcome. This dynamic operates in organizational strategy as directly as it does in flat-pack furniture.