MMG's perspectives on complexity, stakeholder alignment, and the structural conditions that separate organizations that execute from those that produce strategies.
Reading the Research
April 2026
What separates a strategy that holds under pressure from one that collapses during execution? The gap is almost never the quality of the plan. It is the quality of the human system built to carry it — the teams, the trust, the leadership behaviors, and the shared ownership that determine whether a strategy survives contact with reality. Four articles from the May/June 2026 issue of Harvard Business Review converge on the same finding: the organizations that perform through difficulty are not the ones with better plans. They are the ones that built the conditions for honest thinking, shared ownership, and continuous adaptation before the difficulty arrived. Superteams experiment more and surface obstacles early. Humble leaders hold teams together through disruption. Upskilling ripples upward to free managerial bandwidth. Structure converts fear into focus.
The Diagnostic
Research
Why do well-resourced strategies so often fail during execution, even when the analysis is sound? Because the analysis was the wrong tool for the problem. Most leadership teams apply expert-analytical methods to challenges that are fundamentally complex and multi-stakeholder — and the research on this misclassification is unambiguous. MMG's complexity diagnostic maps your challenges across two dimensions: how much the people who must act hold competing incentives, and how well the available solutions are actually understood. The combination tells you what kind of challenge you are facing — and, more importantly, what kind of intervention can actually move it. Deploying the wrong intervention for a year costs far more than the few minutes it takes to classify the problem correctly first.
The Problem
Research
Why do between 67% and 90% of organizational strategies fail during execution — and what does that failure rate actually tell us? It tells us the problem is almost never a bad idea. It is a misdiagnosis. When leaders apply expert-analytical methods to challenges that are fundamentally complex and multi-stakeholder, failure becomes the statistically predictable outcome — not evidence of poor execution, but the logical result of the wrong tool applied to the wrong type of problem. The HealthCare.gov collapse consumed $2.1 billion before it produced a working system. The NHS's £12.4 billion IT programme was abandoned. Both were competently managed complicated-domain solutions applied to complex-domain problems. The methodology was the failure.
The People
Research
Why do strategies built by external experts so often fail to get implemented, even when the analysis is rigorous? Because the primary predictor of implementation is not the quality of the analysis — it is whether the people responsible for execution were in the room when the decisions were made. A meta-analysis of 351,919 individuals confirms that psychological ownership, the cognitive state where implementers feel the strategy is genuinely theirs, is one of the strongest known predictors of task performance. Cognitively diverse teams also solve complex problems up to three times faster than homogeneous expert groups. The question is not whether to involve people. It is which people, when, and under what conditions — because diversity without psychological safety produces noise rather than strategy.
The Process
Research
If we have the right people in the room and strong analytical work, why do so many strategy sessions still produce decisions that don't hold? Because process quality explains far more of decision outcomes than analytical quality — by a factor of six, according to McKinsey's study of 1,048 major business decisions. Unstructured group deliberation is reliably captured by the loudest voice, the most senior person, or the most emotionally compelling narrative, regardless of what the data says. Structured interventions — pre-mortems, red-teaming, forced trade-offs — counteract those biases by design. Organizations that embed structured decision architectures improve project success rates by up to 40%. The process is not the scaffolding around the real work. It is the mechanism that converts a room full of people into a decision that survives contact with execution.
Mind Meeting Group
The Diagnostic · Research
Most leaders reach for the same tool for every challenge. Strategic retreats. Expert reports. Task forces. When these interventions fail to produce change, the instinct is to conclude that the strategy wasn't good enough, or the team wasn't committed enough, or the execution wasn't tight enough. The research suggests a different explanation: the tool was wrong for the problem. Not the quality of the tool — the category of it.
The complexity diagnostic built by Mind Meeting Group maps your challenges across two dimensions drawn from complexity science, and the model is more than a sorting device. It is a guide to choosing the right kind of intervention before you spend a year applying the wrong one.
The first dimension is Social Complexity — the degree to which the people who must act on a challenge hold competing incentives, different mandates, or independent veto power over outcomes. When a challenge requires genuine action from multiple distinct groups whose interests are not aligned, no amount of internal analytical rigour will produce execution. The coordination problem is the problem.
The second dimension is Solution Knowability — the degree to which the path forward is already understood. Some challenges have clear best practices and established precedent. Subject matter experts generally agree on the technical approach. The main challenge is simply doing the known thing well. Others have no such clarity: the evidence is genuinely ambiguous, prior approaches have failed or returned only temporary results, and the field has not converged on an answer.
Together, these two axes produce four distinct zones — and each zone requires a fundamentally different type of response.
When social complexity is low and the solution is well understood, you have a Clear challenge. Standard project management applies. Assign accountability, resource the work, execute.
When social complexity is low but the solution requires expert analysis to surface — the evidence is complex but convergent — you have a Complicated challenge. This is the terrain of traditional consulting. Deploy specialists, conduct rigorous analysis, implement the findings.
When social complexity is high and the solution is not yet known — multiple actors with conflicting incentives, and no clear path forward — you have a Complex challenge. This is what MMG calls the Village Problem: the kind of challenge where everyone agrees it matters and nobody agrees on what to do, because the actors who need to move are the same actors whose competing interests created the problem. Expert analysis doesn't solve this. Convening does — structured, multi-stakeholder co-creation that produces decisions the people responsible for execution helped build.
When the social situation is contested but the problem itself is not yet defined clearly enough to act on, you have a Contested or Liminal challenge. Before you can convene the right people around a solution, someone needs to define the problem with enough precision to commission an intervention. That is a prior step — and skipping it is one of the most common reasons that well-resourced, well-intentioned strategy processes stall before they start.
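Read as a decision procedure, the four-zone logic above can be sketched in a few lines of code. This is a minimal illustration only: the 1-to-5 scales, the threshold of 3, and the `problem_defined` flag are assumptions made for the sketch, not MMG's actual scoring rubric.

```python
def classify_challenge(social_complexity: int,
                       solution_knowability: int,
                       problem_defined: bool = True,
                       threshold: int = 3) -> str:
    """Map a challenge onto the four zones of the two-axis model.

    social_complexity: 1-5, how much the actors who must act hold
        competing incentives or independent veto power.
    solution_knowability: 1-5, how well the path forward is already
        understood. (Scales and threshold are illustrative assumptions.)
    """
    socially_complex = social_complexity >= threshold
    solution_known = solution_knowability >= threshold
    if not socially_complex:
        # Coordination is manageable; the question is whether the path
        # is already known (execute) or must be surfaced by experts.
        return "Clear" if solution_known else "Complicated"
    if not problem_defined:
        # Contested social situation, but the problem is not yet defined
        # well enough to commission any intervention.
        return "Contested"
    # Competing actors and no converged solution: convene, don't analyse.
    return "Complex"
```

Rating a whole portfolio of challenges this way yields the map the diagnostic describes: each zone pointing to a different class of intervention rather than a single default tool.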
The two-axis model draws on two established frameworks in the complexity science literature.
The Cynefin Framework, developed by Dave Snowden and published in a landmark 2007 Harvard Business Review paper, classifies organizational challenges into distinct ontological domains — each with its own cause-and-effect logic and its own prescribed response. Cynefin's core insight is that leaders systematically misclassify complex challenges as complicated ones, applying analytical tools to problems that are fundamentally non-linear and socially constituted. The result is not just a failed initiative. Snowden describes the worst-case trajectory as "the cliff": when rigid best practices are applied to a complex adaptive system, the system's instability eventually tips into chaos.
The Stacey Agreement-Certainty Matrix, developed by organizational theorist Ralph Stacey in the late 1990s, maps challenges along two axes that correspond directly to MMG's diagnostic dimensions: Degree of Certainty (how well the solution is understood) and Degree of Agreement (how aligned the actors are on goals and trade-offs). Stacey's contribution is to make the coordination dimension explicit as a primary determinant of problem type — not just a complication that overlays a technical problem, but a structural feature that changes the category of the problem entirely.
MMG's diagnostic tool synthesizes these frameworks into a Cartesian mapping that is actionable for practitioners. The two-axis reduction has been validated by research on so-called Wicked Problems — challenges characterized by stakeholder fragmentation and epistemic uncertainty — which confirms that social complexity and solution knowability account for the large majority of variance in whether a conventional consulting engagement will succeed or fail.
We already know what our challenges are. What does rating them tell us that we don't already know?
No physician prescribes a treatment before completing a diagnosis — and in medicine, prescription without diagnosis is considered malpractice. The same logic applies here. Most leadership teams have a working list of priorities, but listing challenges is not the same as classifying them. Rating each challenge against the two dimensions produces something a priority list cannot: a map of what kind of problem each challenge actually is, which determines what kind of intervention it can respond to. The diagnostic takes five to eight minutes. Deploying the wrong intervention for a year costs considerably more than that — in budget, in credibility, and in the window of opportunity that closes while the wrong tool is being applied. The goal is not to surface new information. It is to reduce the risk of an expensive misclassification before it becomes a sunk cost.
Why does it matter whether the solution is "known" or not? Isn't that just a question of doing more analysis?
This is precisely the misclassification the model is designed to surface. When a challenge sits in the Complex zone — high social complexity, low solution knowability — more analysis does not produce clarity. The reason there is no clear solution is not that the research is insufficient. It is that the solution will only emerge from the interaction of the actors who must execute it. No amount of additional desk research produces the alignment that is actually missing.
Our organization has both internal teams and external stakeholders. Which challenges belong in this diagnostic?
Any challenge where the people who must act on it are not all inside your decision-making authority. If executing the solution requires other organizations, other departments with independent mandates, regulators, payers, community groups, or any actor you cannot instruct — that challenge has social complexity by definition, and it belongs in the diagnostic.
We've tried convening stakeholders before and it didn't work. How is this different?
The research on stakeholder convening is consistent on this point: the failure mode is not convening itself. It is convening without process architecture. Bringing diverse actors into a room without structured mechanisms for surfacing dissent, forcing genuine trade-offs, and producing decisions — rather than agreement lists — replicates the same social biases that prevent action in the first place. Structure is what converts a room full of well-intentioned people with competing interests into a group that can make decisions that hold.
How do I know if our challenge is Complex or just Complicated with a lot of stakeholders?
Ask whether the solution could be implemented by fiat if you had sufficient authority. If a sufficiently powerful decision-maker could mandate the path forward and the problem would be solved — even if politically difficult — the problem is Complicated. If mandating a solution would produce surface compliance but not the genuine behavioural change required for the outcome to materialize, the problem is Complex. The key indicator is whether the actors whose behaviour needs to change are also the actors whose buy-in is structurally required for any solution to work.
What happens after the diagnostic?
The diagnostic maps your challenge portfolio against the complexity model and identifies the type of intervention each challenge demands. For some, that means better project management. For others, it means expert analysis and traditional consulting. For those in the Complex zone, it points toward a structured multi-stakeholder process — and for those in the Contested zone, it points toward a prior step: defining the problem clearly enough to commission the right intervention at all. The goal is not to tell you what to do. It is to ensure that whatever you do next is calibrated to the actual nature of the challenge — not the nature of the intervention your organization is most comfortable deploying.
Mark McCarvill is the Founder and Principal Facilitator of Mind Meeting Group, a Vancouver-based consulting firm specializing in complex, multi-stakeholder strategy. He has facilitated over 100 decision-grade workshops across life sciences, federal government, not-for-profit, and commercial sectors, aligning more than 3,000 leaders and stakeholders. MMG's methodology is grounded in complexity science, organizational behaviour research, and fifteen years of practice in high-stakes strategic alignment.
Mind Meeting Group
The Problem · Research
Most leaders who commission a strategic review believe their organization has a strategy problem. The research suggests otherwise. The problem is usually a classification problem — a failure to correctly identify what kind of challenge is actually being faced before deciding how to respond to it.
Failure rates of 67% to 90%, the range consistently reported in strategy execution research, are alarming enough on their own. But the more important question is: why do strategies fail? The answer that keeps emerging across sectors, decades, and organizational types is not incompetence, insufficient data, or lack of leadership commitment. It is the systematic application of the wrong methodology to the wrong type of problem.
The Cynefin Framework, developed by Dave Snowden and published in a landmark 2007 Harvard Business Review paper, classifies organizational challenges into distinct domains. In the Complicated domain, cause-and-effect relationships are knowable — they may be hidden or require expertise to uncover, but expert analysis can find them. Traditional consulting excels here. In the Complex domain, cause-and-effect relationships are only visible in retrospect; the environment is non-linear and shifts in response to interventions. Expert analysis fails here not because it is poorly executed, but because the system cannot be modelled in advance.
The research confirms that this boundary is where most strategy failures occur — not because organizations are making poor decisions, but because they are making Complicated-domain decisions for Complex-domain problems. The Cynefin literature describes this as "the cliff": when leaders treat a complex, dynamic situation as a simple, linear one, they apply rigid standard operating procedures that eventually push the system into chaos from which recovery is exceptionally difficult.
The 2013 collapse of the HealthCare.gov launch is one of the most thoroughly documented cases of problem misclassification in recent government history. The U.S. Centers for Medicare and Medicaid Services treated the national health insurance marketplace — involving 55 separate contractors, dynamic federal policy, and divergent state-level requirements — as a complicated IT project with a fixed technical solution. Project costs surged from $292 million to $2.1 billion. On launch day, only six people successfully enrolled. The system's complexity — its polycentric governance structure, its multiple actors with independent veto power — was never accounted for in the methodology.
The UK's National Programme for IT tells the same story. A £12.4 billion effort to digitize the entire NHS was delivered as a top-down centralized procurement. It failed because it ignored the social complexity of local hospitals and clinics: doctors and nurses rejected systems that disrupted their workflows, not because the technology was wrong, but because the implementers had never been part of building it. A post-mortem published in the BMJ concluded that the programme's failure was rooted in treating a complex socio-technical transformation as a manageable engineering problem.
The pharmaceutical industry produces perhaps the clearest cases. Sarepta Therapeutics' Exondys 51 — the first FDA-approved treatment for Duchenne muscular dystrophy — faced a near-immediate commercial crisis despite its approval. Sarepta had solved the complicated problem: clinical data, regulatory strategy, orphan drug designation. What they had not solved was the complex one. Payers, including major insurers and pharmacy benefit managers, had never been convened around the drug's value evidence. Coverage denials were widespread at launch, and patient access remained severely constrained for years. The FDA approval was real. The execution ecosystem had never been built.
The Strategy Institute's 2024 analysis identified what it calls "The Vicious Cycle of Misdiagnosis": when an initiative fails, leaders typically attribute the failure to the strategy itself — the analysis wasn't deep enough, the recommendations weren't specific enough — rather than to the methodology. They commission another report. They hire another analytical firm. The diagnosis remains wrong, and the result is the same.
Breaking the cycle requires a prior question: before asking "what is the solution?" organizations need to ask "what kind of problem is this?" That classification decision — Complicated or Complex, Analytical or Village Problem — is not a preliminary formality. It is the single most consequential strategic choice a leadership team will make.
At Mind Meeting Group, the first thing we do is not propose a solution. It is to help leaders see the problem type clearly, and to understand why the intervention their instincts are reaching for will not close the gap between strategy and execution.
Mind Meeting Group
The People · Research
For decades, the dominant model of organizational strategy has worked like this: hire external experts, give them access to data and senior interviews, wait for a deliverable. The logic seems reasonable — specialists should produce better analysis than generalists. But the research on strategy execution has quietly dismantled this model, finding that the primary predictor of whether a strategy gets implemented is not the quality of the analysis. It is whether the people responsible for implementation were in the room when the decisions were made.
When an external team delivers a completed strategy, implementers encounter it as a document. They understand it intellectually. But they did not participate in the trade-off decisions, the rejection of competing options, or the moment the group committed to a direction. That absence matters. Psychological ownership research — confirmed across hundreds of thousands of subjects in multiple countries — shows that the mechanism connecting involvement to execution is not motivation in the generic sense. It is the binding of professional identity to the outcome. Strategies co-created by implementers are defended, adapted, and sustained. Strategies delivered to implementers are archived.
This dynamic operates independently of sector. It appears in pharmaceutical brand teams navigating market access, in federal agency leadership managing modernization programmes, in not-for-profit boards designing service model transitions. The mechanism is the same: people who build the strategy own it.
The ownership argument for co-creation is reinforced by a separate body of research on cognitive diversity — the variance in how individuals perceive, process, and act on new, uncertain information. Reynolds and Lewis's analysis of over 100 executive teams across twelve years, published in Harvard Business Review in 2017, produced a striking result: cognitively diverse teams solved complex problems up to three times faster than homogeneous groups of domain experts.
The mechanism is not that diverse teams are smarter in any aggregate sense. It is that when a homogeneous group encounters a roadblock in a complex environment, every member applies the same failed analytical tools. A cognitively diverse group reframes the problem and applies different heuristics. The variety in the room is the resource.
This has a direct implication for how organizations approach high-stakes strategy sessions. Convening only the internal team — or only the senior leadership tier — is not a neutral design choice. It is a decision to reduce the cognitive resources available to solve the problem, while simultaneously producing a strategy that the wider organization will need to be convinced to execute.
There is an important caveat in the diversity research: the benefits of cognitive diversity are not automatic. The same studies that document the speed and quality advantage of diverse teams also identify a failure mode — unmanaged cognitive diversity produces coordination failure, interpersonal conflict, and communication breakdown. The relationship between diversity and performance is an inverted U-shape. More diversity helps, until the conditions for integration break down.
Google's Project Aristotle — a landmark longitudinal study of internal team performance — identified the moderating condition: psychological safety. The degree to which members felt safe to take interpersonal risks, surface dissenting views, and admit uncertainty without fear of penalty accounted for 43% of the variance in team performance. Teams scoring high on psychological safety completed projects 19% faster and demonstrated 31% more innovation than teams with lower safety scores.
Psychological safety is not the goal of a well-designed strategic session. It is the prerequisite. Without it, cognitive diversity produces noise. With it, diversity produces strategy. At Mind Meeting Group, building that condition — through deliberate session design, facilitation architecture, and structured participation mechanisms — is not incidental to the process. It is the process.
Merck's launch of Keytruda, the oncology therapy that became one of the most commercially successful drugs in history, illustrates what happens when an organization correctly identifies the human infrastructure required for a complex challenge. Keytruda's success depended on the widespread adoption of a novel PD-L1 biomarker test — a workflow that pathologists, medical societies, and payers had never used before. Rather than treating this as an analytical problem to be solved by internal teams, Merck convened those actors before launch. They co-created testing workflows and reimbursement pathways. The strategy was built in the room with the people who would have to execute it.
AstraZeneca's launch of Farxiga for heart failure followed the same pattern. Rather than treating payer and physician adoption as a downstream execution problem, AstraZeneca convened cardiology societies, heart failure specialists, and payer medical directors before the launch strategy was finalized. The indication was novel — SGLT2 inhibitors had not previously been associated with heart failure outcomes — which meant the entire prescribing and coverage infrastructure needed to be built simultaneously. By bringing those actors into the strategic process early, AstraZeneca achieved reimbursement and formulary positioning that significantly outpaced comparable launches in the same class.
What separates launches like these from those that stall at the payer's door is not analytical capability. It is the architecture of who was in the room, and when.
Mind Meeting Group
The Process · Research
There is a comfortable assumption embedded in how most organizations approach high-stakes strategy: if you gather enough data and put the right experts in front of it, good decisions will follow. The research does not support this. In a McKinsey Quarterly study of 1,048 major business decisions — spanning M&A, capital expenditure, and new product launches — researchers Dan Lovallo and Olivier Sibony found that process quality explained far more of the variance in decision outcomes than the quantity or quality of analysis.
The finding carries an important nuance. It does not mean analysis is irrelevant — almost no decisions made through a rigorous process were backed by poor analysis. The reason: a well-structured process is precisely what surfaces and corrects weak analysis. The reverse is not true. Brilliant analysis, fed into an unstructured group deliberation, is routinely overridden by the loudest voice, the most senior person, or the most emotionally compelling narrative.
In the same McKinsey survey, only 28% of executives said the quality of strategic decisions in their companies was generally good. Sixty percent believed bad decisions were about as frequent as good ones. The culprit is not a shortage of smart people or rigorous data. It is the cognitive architecture of unstructured group deliberation itself — the systematic biases that inflate confidence, suppress dissent, anchor teams to the status quo, and cause individuals to defer to authority rather than evidence.
Lovallo and Sibony classify these into five families: pattern-recognition biases (including confirmation bias), action-oriented biases (overconfidence, planning fallacy), stability biases (anchoring, loss aversion, sunk-cost fallacy), interest biases (silo thinking, misaligned incentives), and social biases (conformity, authority deference). Each family is active in any conventional strategy session. Without structural mechanisms to surface and counteract them, they do not cancel out — they compound.
This is precisely why putting diverse, well-intentioned, intelligent people in a room and asking them to deliberate freely is not sufficient. It is often exactly the condition in which social biases are most powerful: when participants know the views of the most senior person in the room, discussion becomes performative rather than generative. An absence of dissent is not a sign of alignment — it is a warning sign of social bias at work.
One of the most empirically validated structural interventions is the pre-mortem, championed by research psychologist Gary Klein and explicitly cited in the McKinsey behavioral strategy framework as a tool to counter action-oriented biases. The mechanism is straightforward: before committing to a strategy, the group imagines it is twelve months in the future and the initiative has failed. They generate causes of that failure.
The technique works because of prospective hindsight — the mind generates far richer causal explanations when anchored to an imagined outcome than when assessing a plan abstractly. The pre-mortem surfaces risks that participants were privately aware of but felt socially unsafe to raise. A 2024 experimental study published in IEEE Engineering Management Review confirmed that the pre-mortem significantly mitigates planning fallacy, overconfidence, and groupthink, producing substantially more complete risk identification than standard review processes.
McKinsey's behavioral strategy framework identifies depersonalising debate as a core mechanism for countering social bias. The principle: genuine dissent requires not just permission but structure. When disagreement is expressed through a formal role — a designated devil's advocate, a war game opponent, a red team — it is no longer a personal challenge to authority. It is a process requirement.
The Alan Turing Institute's 2020 analysis of structured decision-making under uncertainty identified red-teaming as a critical mechanism for improving strategy robustness in precisely this way. It gives participants with dissenting views a legitimate structural channel, rather than requiring them to break social norms to surface disagreement. In multi-stakeholder environments with veto players — the Village Problem context — this matters doubly. Stakeholders who are not given a structured mechanism to raise concerns will exercise their veto later, through passive non-compliance or selective execution.
Most strategic planning produces what might be called polite alignment: broad agreement on principles, zero agreement on what to stop doing or what to do first. McKinsey's research on stability biases explains why: anchoring to last year's budget allocations, loss aversion around existing programmes, and sunk-cost reasoning conspire to prevent genuine prioritisation. Organisations end up endorsing all options and choosing none — which is indistinguishable from inaction in the field.
The solution is explicit forced prioritisation: a structural requirement that participants rank, vote on, or allocate finite resources across competing options. The Collective Impact framework, evaluated in the Stanford Social Innovation Review, found that when diverse stakeholders were convened without this structural mechanism, they consistently produced shared concern lists rather than actionable commitments. Initiatives that embedded structured shared measurement and explicit priority-setting — like the StrivePartnership coalition in Cincinnati — produced measurable improvements in graduation rates and literacy. The same stakeholders, meeting informally for years, had not moved the needle.
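As a concrete example of such a forcing mechanism, a simple rank aggregation like a Borda count converts every participant's individual ranking into one explicit priority order, with no "endorse everything" escape hatch. This is a sketch of one generic voting mechanism, not MMG's facilitation protocol; the function name is ours.

```python
from collections import defaultdict

def forced_priority(rankings: list[list[str]]) -> list[str]:
    """Aggregate participants' rankings into a single priority order.

    Uses a Borda count: on a ballot of n options, the option in
    position p (0-indexed) scores n - p points. Illustrative only;
    real sessions layer structured discussion and trade-off framing
    around any vote-counting step.
    """
    scores: defaultdict[str, int] = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - position
    # Highest total first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda option: (-scores[option], option))
```

The point is not the arithmetic. It is that the structure leaves the room with an ordered list rather than a shared concern list, which is the failure mode the Collective Impact research documents.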
IBM's enterprise transformation illustrates what it looks like when process architecture is treated as a strategic asset rather than administrative overhead. By implementing Technology Business Management (TBM) — a structured framework that forced IT, Finance, and business units to adopt a shared diagnostic language and align around common metrics — IBM achieved $3.5 billion in productivity gains. The success was not primarily technological. It was the structured resolution of incentive misalignment across historically siloed functions, using a shared process that made trade-offs visible and decisions legible across the organisation.
McKinsey's behavioral strategy research points to exactly this: the most effective debiasing interventions are not one-off techniques applied in a single meeting. They are embedded in formal operating procedures — recurring processes that teams practice with sufficient regularity to build the mutual trust and shared language that allow rigorous debate to occur without damaging relationships.
The word "structure" often triggers concern in high-stakes strategy conversations: that it will constrain thinking, narrow the solution space, or elevate process compliance over strategic insight. The McKinsey research directly addresses this objection. The best decision processes do not suppress judgment — they create the conditions for judgment to be heard. They prevent the most politically convenient option from becoming the strategy by default. They ensure that the analysis that was commissioned actually influences the decision it was commissioned to inform.
At Mind Meeting Group, process architecture is not the scaffolding around the real work. It is the mechanism that converts a room full of well-intentioned, analytically capable people into a group that can make decisions that survive contact with execution reality. The room provides the raw material. The process is what makes it into strategy.
Mark McCarvill is the Founder and Principal Facilitator of Mind Meeting Group, a Vancouver-based consulting firm specializing in complex, multi-stakeholder strategy. He has facilitated over 100 decision-grade workshops across life sciences, federal government, not-for-profit, and commercial sectors, aligning more than 3,000 leaders and stakeholders. MMG's methodology is grounded in complexity science, organizational behaviour research, and fifteen years of practice in high-stakes strategic alignment.
If the ideas above map to a challenge you are carrying, the next step is a direct conversation.
The May/June 2026 issue of Harvard Business Review contains four articles that, taken separately, each offer useful advice for leaders navigating an unusually demanding moment. Read together, they make a single, more important argument: the gap between a strategy that looks good on paper and one that actually holds under pressure is almost never the quality of the plan. It is the quality of the human system built to carry it.
Ron Friedman's research on superteams — drawn from a survey of more than 6,000 knowledge workers across industries — identifies three traits that separate the highest-performing teams from everyone else: they get more done, their members actively make one another better, and they keep improving over time. It is that third trait, continuous improvement, that proves hardest to manufacture and easiest to lose.
These are not personality traits. They are structural conditions — built deliberately by leaders who treat growth as a daily practice, not an annual event.
Eric Solomon and Anup Srivastava write about a phenomenon most senior leaders recognize but few name directly: strategy now feels less like running a marathon on a clear day and more like sprinting through fog while the track shifts beneath your feet. Three forces are colliding simultaneously — policy volatility that arrives via social media before legal teams can assess it, AI infiltrating every workflow faster than most organizations can develop doctrine for it, and geopolitical fragmentation breaking apart the integrated global operating model most large organizations were built on.
The organization gets busier and simultaneously more aimless. The leaders who navigate this period well won't be the ones who feel less fear. They'll be the ones who convert it into focus before it converts them into firefighters.
Research published in the Journal of Political Economy followed a government agency that assigned frontline staff to a 120-hour upskilling program. Four to six months later, trained workers' performance had risen roughly 10%. But the most meaningful impact appeared one level above the trainees. Using email metadata to map communication patterns, researchers found that trained employees had sharply reduced the volume of messages they sent upward — meaning managers spent far less time on clarifications and escalations that had previously consumed their days. The training's biggest beneficiary wasn't the person who attended it.
The implication matters for how organizations justify development programs. Measuring training ROI purely through the trainee's individual productivity misses the most valuable effect: the bandwidth freed at the managerial level, where even small gains in attention generate outsized strategic value.
Two studies published in the Journal of Applied Psychology examined which leadership behaviors best supported employees through a disruptive return to work. Across both studies — one with 658 employees, a second with 1,607 at a large pharmaceutical company — the result was consistent: employees who worked for humble leaders were significantly less likely to quit and significantly more likely to perform well in the months following disruption, while employees with less humble leaders quit at higher rates.
What humble leaders did differently was not simply express more empathy. They acknowledged the distress the disruption had caused and then collaborated with their teams to find approaches that fit individual needs. Named distress plus shared ownership of the response — that combination produced the organizational connection that kept people from leaving.
Each article addresses a different domain: team performance, crisis leadership, workforce development, leadership style. But the mechanism they describe is the same. The organizations and leaders that perform through difficulty are not the ones that had better plans. They are the ones that built the conditions for honest thinking, shared ownership, and continuous adaptation — before the difficulty arrived.
The leaders who bring their village into the room to cocreate strategy aren't doing it despite the pressure they're under. They're doing it because of it. They get a better plan, a more capable team, and people who are already bought in before the work even starts. The research is consistent on this point: the strategy and the development happen at the same time, or neither happens at all.