
AI in Business: Three Decisions Only the CEO Can Make

Why most AI strategies fail to create value, and what fixing that demands from leadership
Half of all CEOs believe their job is on the line if AI doesn't deliver concrete results in 2026, according to BCG's latest leadership survey. AI budgets are doubling (from 0.8% to 1.7% of revenues on average), proofs of concept are multiplying, but measurable returns remain marginal. The problem is not technological or financial. It is decisional. Three critical trade-offs determine whether an AI deployment generates value or burns resources without impact. And none of them can be delegated to the CTO, the CDO, or an external vendor. They fall to the CEO, because they commit the organization's strategy, identity, and governance.
This article details these three decisions, the most common mistakes made by leadership teams, and the concrete frameworks to resolve them.
First Decision: Where to Deploy AI (and More Importantly, Where Not To)
The Trap of Blanket Deployment
The most expensive mistake is spreading AI across every business process. It is the organizational equivalent of installing safety netting across an entire mountain: technically possible, financially ruinous, and operationally counterproductive. The cost of maintenance, supervision, and error correction quickly exceeds projected gains. Forrester confirms this in its 2026 analysis: AI remains stuck in "efficiency mode" in most organizations. Technology leaders, incentivized to optimize costs, deliver incremental productivity gains without real transformation. More concerning still, business partners cannot articulate what they want from AI beyond saving money.
A CEO who delegates the "where" question to IT mechanically gets a strategy centered on efficiency, not value.
The Decision Framework: Surgical Deployment
The strategic question is not "how do I deploy AI across my entire process?" but "what specific problem can AI solve where it will reduce errors and amplify my distinctive value?" The most useful analogy is a targeted safety net. A net placed at the right point protects a critical passage without weighing down the entire infrastructure. A net placed everywhere turns the mountain into a permanent construction site.
In practice, the CEO must identify friction points in the value chain: where human error is costly, where repetition destroys competence, where processing time creates competitive disadvantage. In an energy distribution network, the solution is not automating the entire customer service operation. It is accelerating the regulatory information retrieval that blocks first-contact resolution. In an accounting firm, it is not replacing the client relationship with a machine. It is reinforcing the compliance verification process that generates manual errors.
The decision criterion is straightforward: AI should be deployed only where it enables a competent employee to do better what they already know how to do. Not where it compensates for a missing skill. A tool that multiplies an existing competence produces value. A tool that compensates for incompetence produces noise, sometimes dangerous.
Concrete Implications for the Executive Committee
Before approving any AI project, the CEO should require three answers: what existing business competence will this deployment amplify? What specific performance indicator will be impacted? And what is the total cost of human supervision for the deployed system? If the third answer exceeds the expected gain, the project does not hold.
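The three-question gate above amounts to a simple go/no-go check. The sketch below makes it concrete; the field names, figures, and threshold logic are illustrative assumptions, not prescriptions from any survey or framework cited in this article:

```python
from dataclasses import dataclass

@dataclass
class AIProjectProposal:
    """Hypothetical intake form for an executive AI project review."""
    amplified_competence: str    # which existing business competence is amplified
    target_kpi: str              # which specific performance indicator will move
    expected_annual_gain: float  # projected value of the deployment
    supervision_cost: float      # total annual cost of human supervision

def passes_ceo_gate(p: AIProjectProposal) -> bool:
    """A project holds only if all three answers exist and the cost of
    human supervision does not exceed the expected gain."""
    has_answers = bool(p.amplified_competence) and bool(p.target_kpi)
    return has_answers and p.supervision_cost < p.expected_annual_gain

# Example: a deployment whose supervision overhead swallows the projected gain.
proposal = AIProjectProposal(
    amplified_competence="regulatory information retrieval",
    target_kpi="first-contact resolution rate",
    expected_annual_gain=400_000,
    supervision_cost=550_000,
)
print(passes_ceo_gate(proposal))  # False: the project does not hold
```

The point of the exercise is not the arithmetic but the discipline: a proposal that cannot fill in all three fields should not reach the comparison step at all.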
Second Decision: What AI Must Protect (The Organization's DNA)
The Invisible Threat: Homogenization
Every company possesses implicit "social virtues": the behaviors, attitudes, and standards that define its market identity. The absolute rigor of an audit firm. The empathetic listening of a healthcare network. The raw responsiveness of a tech startup. These virtues appear in no org chart, but they determine customer loyalty and internal cohesion.
Poorly deployed AI homogenizes these virtues. It produces standardized responses, uniform processes, smoothed interactions. The result: the organization gradually loses what distinguishes it from competitors. And the market's response is not standardized. It is brutal and economic. Customers do not forgive a brand for becoming interchangeable.
The Decision Framework: Identity Amplification
The right question for the CEO is not "how will AI make my organization more efficient?" but "how will AI make my organization more itself?" If the virtue of an accountant is clockwork precision, AI should enable that accountant to achieve a level of accuracy impossible by hand. If the virtue of a sales rep is relational intuition, AI should provide meeting preparation so comprehensive that intuition operates on better-mapped terrain.
This shift in perspective changes the entire tool and use-case selection framework. A tool that standardizes customer interactions in a sector where personalization is the cardinal virtue is not an efficiency gain: it is delayed value destruction. Organizations that centralize customer feedback with semantic analysis tools do so not to replace human listening, but to give teams a more complete picture of what customers actually express, so that decisions remain aligned with the brand promise.
Concrete Implications for the Executive Committee
Before any AI deployment, the CEO must formalize the organization's three to five "distinctive virtues." These are not values printed on a wall; they are the concrete behaviors that generate customer preference. Every AI project must then be evaluated on an additional axis: does this deployment amplify, preserve, or erode this virtue? If the answer is "erode," even marginally, the project must be reconfigured or abandoned.
Third Decision: How to Govern AI (The Ethical and Operational Framework)
The Error of a Charter Without Governance
Most organizations that have formalized an AI policy stopped at drafting a usage charter. This is the equivalent of writing traffic laws without creating a driver's license, without installing speed cameras, and without establishing penalties. The charter exists, nobody reads it, and practices diverge from one department to the next.
The issue runs deeper than compliance. AI amplifies responsibility; it does not dilute it. The more information a leader has access to (and AI provides it massively), the greater their accountability for decisions made using that information. "I didn't know" ceases to be a defensible position when the tool was available and the data accessible.
The Decision Framework: Tiered Governance
The driver's license analogy is operationally the most productive. No one would hand a Ferrari to a driver who just passed their test. Similarly, access to AI tools should be progressive, conditional on verified competence levels.
Three concrete tiers emerge in mature organizations. The first tier corresponds to operational users: they use AI within a predefined framework, with technical guardrails (locked prompts, filtered outputs, systematic human validation). The second tier corresponds to "fixers," employees capable of diagnosing an AI error, adjusting parameters, and training first-tier users. The third tier corresponds to "architects," those who design AI workflows, define use cases, and evaluate systemic risks.
This structure does not create a prestige hierarchy. It creates clarity. A first-tier operator is not inferior to an architect: they possess a business competence that AI amplifies. The architect possesses a technical competence that the business directs. Governance ensures that each person uses the tool at the level of what they master.
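The tiered model described above is, in practice, an access policy: each tier maps to a scope of permitted actions. The tier names follow the article; the specific permissions below are illustrative assumptions, not a standard taxonomy:

```python
from enum import Enum

class Tier(Enum):
    """The three access tiers: operators, fixers, architects."""
    OPERATOR = 1   # uses AI within a predefined framework, with guardrails
    FIXER = 2      # diagnoses AI errors, adjusts parameters, trains operators
    ARCHITECT = 3  # designs workflows, defines use cases, evaluates risk

# Illustrative permission sets per tier (hypothetical; cumulative by design).
PERMISSIONS = {
    Tier.OPERATOR: {"run_locked_prompts"},
    Tier.FIXER: {"run_locked_prompts", "edit_prompts", "tune_parameters"},
    Tier.ARCHITECT: {"run_locked_prompts", "edit_prompts", "tune_parameters",
                     "design_workflows", "approve_use_cases"},
}

def can(tier: Tier, action: str) -> bool:
    """Governance check: each person acts only within the scope they master."""
    return action in PERMISSIONS[tier]

print(can(Tier.OPERATOR, "edit_prompts"))  # False: outside the operator scope
print(can(Tier.FIXER, "tune_parameters"))  # True: within the fixer scope
```

Whether this lives in an IAM policy, an internal tool, or a one-page document matters less than the fact that the mapping exists, is explicit, and is enforced before access is granted.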
The "Council of Wisdom": A Discernment Body
Beyond access tiers, organizations that succeed with AI deployment create a cross-functional discernment body. Not a technical committee, not a compliance group, but a space where experienced profiles, aware of the tool's power, evaluate uses against criteria that go beyond immediate ROI: impact on internal culture, reputational risk, alignment with long-term strategy.
This body operates on a simple principle: just because something can be done does not mean it should be done. Medicine codified this principle decades ago: a surgeon can technically perform virtually any procedure, but professional ethics sets limits that the mere criterion of feasibility cannot justify. AI in business deserves the same framework.
Concrete Implications for the Executive Committee
The CEO must structure three elements in parallel: a tiered access system (who uses what, and within what scope), a competence development program (how an employee progresses from one tier to the next), and a strategic discernment body (which evaluates edge cases and new uses before deployment). All three must be operational before the AI budget is committed at scale.
The Final Responsibility Remains Human
The corporate decision-making process follows three stages that moral philosophy formalized centuries ago: deliberation (gathering information), decision (making the call), and action (executing). AI excels at the first stage. It gathers, cross-references, and synthesizes information at a speed and scale no human can match. But the second stage, the decision itself, remains irreducibly human. To decide is to assume responsibility for the act. It is to answer the question "why did you make this choice?" with reasons that commit a person, not an algorithm.
The CEO who automates deliberation gains time. The one who automates the decision loses legitimacy. The difference between the two is not technical: it is existential. And it is precisely because it is existential that these three trade-offs cannot be delegated.