Category: AI Adoption

Risk × Friction: How Much Human Oversight Should You Remove with GenAI?

GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, short on slack, and dependent on tacit knowledge.

So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”

GenAI Is a Powerful Hammer – Not Everything Is a Nail

Generative AI is everywhere, and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.

But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”

Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.

This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.

When Prompts Feel Like Programming Blindfolded

After more than a year of building agents, on and off, across Langflow, the Microsoft Agent Framework, and Copilot Studio – from PoCs to real-world deployments of my own – one theme keeps nagging at me: prompt debugging feels like a black-box adventure.

In traditional software development, you can step through the code, trace errors, and monitor state changes with powerful tools. But with natural language programming? You’re trusting your instructions to a probabilistic model whose reasoning you rarely get to see.

And that changes everything.
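You can claw back some visibility, though. Here’s a minimal sketch – assuming the official openai Python client, with the model name and log path purely illustrative – that wraps each call so every prompt, response, latency, and token count lands in a JSON-lines trace you can diff between runs:

```python
# A minimal tracing wrapper for LLM calls: not a debugger, but a persistent
# record of what went in and what came out. Assumes the openai Python client;
# the model name and log file are illustrative.
import json
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def traced_chat(messages, model="gpt-4o-mini", log_path="prompt_trace.jsonl"):
    """Call the model and append a trace record for later inspection."""
    start = time.time()
    resp = client.chat.completions.create(model=model, messages=messages)
    record = {
        "ts": start,
        "latency_s": round(time.time() - start, 3),
        "model": model,
        "messages": messages,
        "reply": resp.choices[0].message.content,
        "total_tokens": resp.usage.total_tokens,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["reply"]

print(traced_chat([{"role": "user", "content": "Summarise: the build failed twice."}]))
```

It’s no step-through debugger, but a diffable trace of every call is about the closest thing natural-language programming currently offers.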

GenAI Workflows – Sometimes Friction Is Good (… and systems are fragile)

One challenge of GenAI adoption is simply getting started: picking tools, running pilots, training staff, and rolling out a plan. Another, just as big, is where and how GenAI gets introduced into already fragile, tightly coupled organisational systems.

I was watching a Veritasium video (The Strange Math That Predicts (Almost) Anything) about complex systems and the moment they reach a “critical state.” A forest can look calm and stable right up until a single spark turns it into a massive wildfire – not because the spark was special, but because the system was already primed for runaway behaviour.

In a general sense, many organisations today look just like that forest: in a critical state.
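To make the “critical state” idea concrete, here’s a toy simulation in the spirit of the forest-fire models behind that maths (a simplified sketch, not the exact model from the video; all parameters are illustrative). Fuel quietly accumulates, and an occasional ordinary spark burns whatever cluster it happens to touch – sometimes a few trees, sometimes half the grid:

```python
# Toy forest-fire model: trees grow slowly, sparks are rare, and a spark
# burns the entire connected cluster at once. Parameters are illustrative.
import random

SIZE, P_GROW, P_SPARK, STEPS = 50, 0.05, 0.0005, 500
EMPTY, TREE = 0, 1
grid = [[EMPTY] * SIZE for _ in range(SIZE)]
fire_sizes = []

def burn(r, c):
    """Burn the whole cluster of trees connected to (r, c); return its size."""
    stack, burned = [(r, c)], 0
    while stack:
        i, j = stack.pop()
        if 0 <= i < SIZE and 0 <= j < SIZE and grid[i][j] == TREE:
            grid[i][j] = EMPTY
            burned += 1
            stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return burned

for _ in range(STEPS):
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] == EMPTY and random.random() < P_GROW:
                grid[i][j] = TREE                      # fuel accumulates
            elif grid[i][j] == TREE and random.random() < P_SPARK:
                fire_sizes.append(burn(i, j))          # an ordinary spark

print(f"{len(fire_sizes)} fires; largest burned "
      f"{max(fire_sizes, default=0)} of {SIZE * SIZE} cells")
```

Run it a few times: fire sizes don’t cluster around an average, they span orders of magnitude. The spark was never the story – the accumulated fuel was.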

GenAI for Small Business: Why the Adoption Journey Looks Different

All organisations seem to be dealing with the same question – “Where do we even start with GenAI?”

But the context behind that question is very different. In large organisations, there are budgets, teams, governance committees, structured programs and projects. In small businesses, there’s you, a small team, and the pressure of everyday operations.

This article looks at why GenAI adoption isn’t just a scaled-down version of enterprise AI adoption – and why small businesses need a different, more streamlined approach.

Could Microsoft’s Researcher Agent Signal the End of My Copilot Studio M365 Research Agents?

In the ever-changing world of enterprise GenAI, the new Researcher Agent functionality in Microsoft 365 Copilot got me questioning whether I should retire the M365 Research Agent I’d built in Copilot Studio. So I tested it, and really only found one minor flaw: I couldn’t select sub-folders from SharePoint sites.

2025 Was Supposed to Be the Year of Agents – Is 2026 the Turning Point?

Back in 2024 (it seems so long ago now) I wrote about Agents (links below) and cautioned about how early we were in their evolution. Now, almost a year later, we seem to be in a completely different place – a shift brought back to mind by recent announcements from:

– Langflow – releasing v1.6

– Microsoft – consolidating AutoGen and Semantic Kernel into the Microsoft Agent Framework

– OpenAI – releasing AgentKit

The Real Test of GenAI: Are We Solving Problems or Just Playing with Tech?

I’ve spent the past few months experimenting, here and there, with tiny and small language models: running log analysis on edge devices, and processing audio in remote locations where connectivity is spotty, power is limited, and the environment is harsh. They’re fast, efficient, and honestly? Pretty fun to work with and research. But lately, I’ve caught myself asking: am I actually solving a problem here – or just doing something because it’s technically interesting? If you’re working with AI in any capacity, you’ve probably felt this tension too (and to be honest, sometimes “technically interesting” is a good enough reason for personal research).
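For the curious, the log-analysis side of those experiments looks roughly like the sketch below – a minimal, hypothetical setup assuming llama-cpp-python and a small quantised GGUF model on disk (the model path, log lines, and prompt wording are all illustrative):

```python
# A rough sketch of on-device log triage with a small local model.
# Assumes llama-cpp-python; the model file path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/tiny-model.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,
    verbose=False,
)

log_lines = [
    "INFO  pump-3 heartbeat ok",
    "ERROR pump-3 pressure sensor timeout (retry 3 of 3)",
    "INFO  pump-1 heartbeat ok",
]

prompt = (
    "You are a log triage assistant running offline on an edge device.\n"
    "For each log line, answer OK or INVESTIGATE with a one-line reason.\n\n"
    + "\n".join(log_lines)
    + "\n\nAnswers:\n"
)

# temperature=0.0 keeps the output as repeatable as a small model allows
out = llm(prompt, max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"])
```

Whether that genuinely beats a handful of regexes is, of course, exactly the tension the article is about.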