If you follow the GenAI conversation closely, it can feel like whiplash – like there is no agreement. One day it’s “AI is rewriting the economy,” the next it’s “AI is all hype and risk.” It feels like we’re now in a “Great Divergence” – not just differing opinions, but two parallel realities shaped by incentives and where you sit in the organisation (something I have seen in my own work – some organisations are embracing AI and realizing the benefits, while others are flatly not interested).
What is it?
In 2026, GenAI adoption is being framed through two conflicting (and often talking-past-each-other) narratives:
1) The executive/vendor narrative (pro-adoption, urgency-driven)
- The core message is essentially: “If you aren’t using AI, you are dying.”
- It’s reinforced by rapid cost declines and benchmark gains, plus macro stats and competitive pressure.
- The blind spot: it often glosses over the “last mile” implementation friction, hidden operational costs (“technical inflation”), and reputational risk from hallucinations.
2) The operational/practitioner narrative (skeptical, friction-driven)
- The core message: AI is powerful, but it’s “a power tool without a safety guard.”
- Teams live with integration realities: probabilistic, non-deterministic models embedded into deterministic legacy environments.
- They see “productivity paradox” dynamics and “pilot purgatory,” plus “shadow AI” and low-quality “AI slop.”
Overlay that with a broader “vibe shift” – fatigue with hype, and declining trust driven by degraded digital experiences (spam, slop, frustrating bots) – and it’s fair to ask: are we in the Gartner “Trough of Disillusionment”?
What does it mean from a business perspective?
When you set aside the hype and the fear, the practical takeaway is that GenAI is creating real opportunities – but only for organizations willing to manage the operational, data, and risk realities that come with it.
- You will hear confident claims on both sides – and both will sound “data-backed.” The divergence is structural (incentives, vantage point, accountability), not just opinion.
- Individual efficiency gains won’t automatically translate into organizational throughput. The bottleneck often shifts (e.g., code review load rises as code generation becomes easier).
- The real barrier is frequently not the model. It’s integration, “AI-ready” data, and organisational change management.
- Security and confidentiality risk is, or should be, a board-level issue, not a policy footnote. Strict bans often fail, creating “shadow AI” behavior – including employees putting sensitive information into public tools.
- Trust is fragile, and small failure rates create large reputational impact at scale. This is especially true in customer-facing autonomous systems.
- Regulation is becoming a forcing function. As it becomes more prevalent, organisations will simply have to adapt – plan with regulation in mind.
- The “next hype frontier” (Agentic AI) is arriving before most organisations have solved the basics. We saw agent hype at the end of 2024, but the supporting software just wasn’t ready then; the sector is maturing now, and in certain applications that hype may start to meet reality before most organisations have truly got on with baseline AI adoption.
What do I do with it?
The goal isn’t just to “adopt AI” broadly; it’s to take a small number of risk-adjusted, high-value steps that build capability and trust while avoiding expensive missteps. Adopt a pragmatic, risk-aware, maturity-driven approach – treat AI like high-performance industrial machinery: powerful, but requiring skilled operators and safety protocols.
- Pick “low-regret” starter bets first (and avoid high-regret autonomy). Examples of good starters: unit tests, documentation, code explanation; marketing drafts; agent-assist for customer support; summarizing contracts (internal process applications).
- Design for “guarded workflows,” not “unleashed AI.” Prioritize constrained, domain-specific workflows and strong operational discipline.
- Create a sanctioned sandbox to reduce shadow AI while managing risk. Give teams a safe environment to experiment without dragging every test through a long procurement cycle (and keep production data out of it).
- Measure value in outcomes, not activity. Shift away from vanity metrics (like volume) toward business throughput and shipped value, because review burden and downstream quality issues can erase gains.
- Treat data and integration as first-class workstreams. If your “AI-ready data” and integration layer aren’t improving, your AI program will repeatedly stall regardless of how good the models get.
The AI adoption conversation is loud, polarized, and often unhelpful – but it’s also revealing. The winners won’t be the loudest cheerleaders or the most entrenched skeptics. They’ll be the organisations that build disciplined, risk-aware, value- and maturity-driven adoption: start with low-regret use cases, instrument outcomes, reduce shadow AI, and operationalize governance and integration.
Further Reading
Optimistic, with Exceptions: Leaders’ Views on Generative AI in 2025 – Russell Reynolds Associates
