GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.
So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
What is it?
This is a practical way to choose the right amount of human-in-the-loop (HITL) oversight when you automate a workflow with GenAI.
It combines two ideas:
The Three Zones of AI Engagement
- Acceleration Zone: low-risk, repeatable, reversible work → minimise friction, maximise flow
- Deliberation Zone: high-impact, irreversible work → preserve friction, require reflection
- Exploration Zone: ambiguous, experimental work → variable friction, guided learning
A Risk × Friction decision tool
Using a simple risk rating (Probability × Impact), you classify a use case as low / medium / high risk, then match it to an appropriate friction level:
- Low friction: minimal HITL (spot checks, logging, basic guardrails)
- Medium friction: targeted HITL (review-by-exception, approvals for commitments)
- High friction: full HITL (human pre-approval before outputs create obligations)
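The mapping above can be sketched as a small helper. This is a minimal sketch, not the spreadsheet's actual bands: the 1–5 scales and the score thresholds below are illustrative assumptions.

```python
# Minimal sketch of the Risk x Friction mapping.
# Scales (1-5) and thresholds are illustrative assumptions,
# not the assessment spreadsheet's actual bands.

def friction_level(probability: int, impact: int) -> str:
    """Map a Probability x Impact score (each 1-5) to a friction level."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must be 1-5")

    # High-impact work defaults to high friction regardless of
    # how unlikely an error seems (the "impact override").
    if impact >= 4:
        return "high"    # full HITL: human pre-approval

    score = probability * impact
    if score <= 6:
        return "low"     # minimal HITL: spot checks, logging, guardrails
    if score <= 12:
        return "medium"  # targeted HITL: review-by-exception, approvals
    return "high"
```

The key design choice is that impact alone can force high friction: a low-probability error in a contract or regulated communication still becomes a binding commitment.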
There’s also a gating checklist to stop the most common mistake: quietly placing a use case in a lower-friction box before you’ve earned the right to do so.
The core principle is simple: friction isn’t bureaucracy – it’s a resilience mechanism. In high-impact workflows, friction slows the system down long enough for judgement, context, and ethical review to re-enter the flow.
What does it mean from a business perspective?
Overall it helps manage risk.
- GenAI doesn’t just “increase productivity”: It can amplify fragility that already exists.
- Acceleration without friction increases the likelihood of tail-risk events: Small mistakes can propagate faster than humans can intervene.
- Not all work should run at the same speed: Some workflows need more human presence, not less.
- The biggest risk is silent over-adoption: “AI everywhere” without structure, boundaries, or escalation paths in complex inter-departmental processes.
- Your highest-risk areas are usually boring: procurement, hiring, finance approvals, contracts, regulated communications – the places where errors become commitments.
- Governance doesn’t need to be heavy: When friction is embedded into workflows (reviews, approvals, thresholds, logs), you reduce risk without creating a policy mire.
- Done well, this protects trust: Internally (staff confidence) and externally (customers, citizens, regulators, partners). Trust is expensive to rebuild.
What do I do with it?
- Inventory your “GenAI candidates”: the complex workflows people are already trying to automate – formally or informally – not the everyday copilot scenarios.
- Map each workflow to a zone: (Acceleration / Deliberation / Exploration).
- Score risk as Probability × Impact: Assign a default friction level (low / medium / high).
- Apply the “Impact override” mindset: if impact is high, default to high friction until you have strong evidence it can be reduced safely.
- Add friction intentionally in the Deliberation Zone: e.g. required human review, slower approvals, clear labelling, and boundaries on what AI cannot generate.
- Use the Acceleration Zone to build capability safely: pick low-risk, reversible workflows that free capacity without increasing exposure.
- Run the gating checklist before lowering friction: Are outputs non-binding? Are errors reversible? Are guardrails proven? Is monitoring in place?
- Write an AI Intent Statement: Make it easy to remember and easy to apply, e.g. “We introduce AI where it strengthens the system – and we add friction where it protects the system.”
- Monitor and adjust regularly: Fragility changes over time, and so should your friction (your perspective may also shift as your experience with the tools grows).
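The gating step above can be sketched the same way: friction only drops when every gate passes. The field names below are hypothetical labels for the four checklist questions, not the spreadsheet's actual fields.

```python
# Sketch of the gating checklist: a workflow may move to a
# lower-friction box only when every gate passes.
# Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class GatingChecklist:
    outputs_non_binding: bool   # outputs create no commitments on their own
    errors_reversible: bool     # mistakes can be caught and undone
    guardrails_proven: bool     # guardrails tested against real failure modes
    monitoring_in_place: bool   # logging and alerts exist before friction drops

    def may_lower_friction(self) -> bool:
        """All gates must pass; one failure keeps the current friction level."""
        return all((
            self.outputs_non_binding,
            self.errors_reversible,
            self.guardrails_proven,
            self.monitoring_in_place,
        ))
```

Making the check conjunctive is the point: the common mistake the checklist guards against is quietly lowering friction when only some of the gates are satisfied.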
If you’re automating with GenAI, the goal isn’t to remove humans – it’s to remove the right human effort, in the right places, while keeping the system stable as speed increases.
If you’d like my Risk × Friction / HITL assessment spreadsheet (including the risk matrix and gating checklist), comment “Friction” or DM me and I’ll share it.
Further Reading
GenAI Workflows – Sometimes Friction is Good (… and systems are fragile)
