The newest wave of “desktop automation” tools looks genuinely useful – and materially different from the assistants we’ve gotten used to. Tools like Claude Cowork and agentic browsers such as Perplexity Comet and ChatGPT Atlas don’t just answer questions; they can take actions across your files, tabs, and workflows. That shift changes the risk profile, fast.
Insights
Risk × Friction: How Much Human Oversight Should You Remove with GenAI?
GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.
So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
How I Cut Drafting RFP Responses from Hours and Days to Minutes with Multi-Agent Orchestration
Responding to RFPs used to feel like running a marathon (it’s just as painful as being on the RFP assessment team) – days of effort, multiple people, and thousands in costs. Recently, I asked myself: Could AI make this easier? What started as an experiment (we are always experimenting with the edge of this technology) with Microsoft’s Agent Framework on a local setup evolved into a multi-agent orchestration system that drafts RFP responses in under 15 minutes.
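The orchestration idea can be sketched in outline as a pipeline of role-specific agents. This is a hypothetical, minimal illustration – the agent names, prompts, and the `llm()` stub are my own placeholders, not the actual Microsoft Agent Framework API or my production setup:

```python
# Hypothetical sketch of a multi-agent RFP pipeline: each "agent" is a
# role-specific prompt wrapped around a single model call. The llm() stub
# stands in for whatever client you use (MAF, a local model, etc.).

def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"[model output for: {prompt[:40]}...]"

def make_agent(role_instructions: str):
    """Return a callable agent: a fixed role prompt plus a per-task input."""
    def agent(task: str) -> str:
        return llm(f"{role_instructions}\n\nTask:\n{task}")
    return agent

# One specialised agent per stage of the RFP response.
extract_questions = make_agent("Extract every question from this RFP text.")
draft_answer = make_agent("Draft a concise answer to each RFP question.")
review_answer = make_agent("Review the draft answers for accuracy and tone.")

def respond_to_rfp(rfp_text: str) -> str:
    """Sequential orchestration: extract -> draft -> review."""
    questions = extract_questions(rfp_text)
    drafts = draft_answer(questions)
    return review_answer(drafts)
```

The point of the pattern is that each stage has one narrow job, which makes the pipeline easier to test and debug than a single giant prompt.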
GenAI Is a Powerful Hammer – Not Everything Is a Nail
Generative AI is everywhere, and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.
But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”
Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.
This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.
When Prompts Feel Like Programming Blindfolded
After more than a year, on and off, building agents across LangFlow, Microsoft Agent Framework, and Copilot Studio – from PoCs to my own real-world deployments – one theme keeps nagging at me: prompt debugging feels like a black box adventure.
In traditional software development, you can step through the code, trace errors, and monitor state changes with powerful tools. But with natural language programming? You’re trusting your instructions to a probabilistic model whose reasoning you rarely get to see.
And that changes everything.
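One mitigation I find useful is to make every prompt/response pair leave a trail. This is my own minimal sketch – not a feature of LangFlow, MAF, or Copilot Studio – showing the idea of wrapping model calls with trace logging so a “black box” run can at least be replayed and inspected:

```python
# Minimal prompt-tracing wrapper: record every prompt/response pair
# (with latency) so agent runs leave an inspectable trail.
import json
import time

TRACE = []  # in-memory trace; a real setup would persist this

def traced(llm_call):
    """Wrap any llm_call(prompt) -> str with trace logging."""
    def wrapper(prompt: str) -> str:
        start = time.time()
        response = llm_call(prompt)
        TRACE.append({
            "latency_s": round(time.time() - start, 3),
            "prompt": prompt,
            "response": response,
        })
        return response
    return wrapper

@traced
def llm(prompt: str) -> str:
    """Placeholder model call."""
    return f"echo: {prompt}"

llm("Classify this ticket as bug or feature.")
print(json.dumps(TRACE, indent=2))  # a replayable record of every call
```

It isn’t a step-through debugger, but a persisted trace is often the difference between guessing why an agent misbehaved and being able to show it.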
GenAI Workflows – Sometimes Friction is Good (… and systems are fragile)
One of the challenges of GenAI adoption is simply getting started: picking tools, running pilots, training staff, and rolling out a plan. Another major challenge is where and how GenAI gets introduced into already fragile, tightly coupled organisational systems.
I was watching a Veritasium video (The Strange Math That Predicts (Almost) Anything) about complex systems and the moment they reach a “critical state.” A forest can look calm and stable right up until a single spark turns it into a massive wildfire – not because the spark was special, but because the system was already primed for runaway behaviour.
In a general sense, many organisations today look just like that forest: in a critical state.
Reading LLMs Like Patients: What DSM-5 Can Teach Us About AI Behaviour
Most of the time, when we talk about large language models (LLMs), we end up in the weeds of training data and parameter counts. Useful if you’re a researcher; less useful if you’re a leader, policymaker, or practitioner trying to answer a simpler question:
“Is this thing actually behaving in a way I’m comfortable with?”
Two realities make that hard:
The training data is too large for humans to grasp in any meaningful way.
The models are too complex for us to truly understand their internal “decision making.”
But their outputs – the words they put on the page – are something we can read, interrogate, and assess.
GenAI for Small Business: Why the Adoption Journey Looks Different
All organisations seem to be dealing with the same question – “Where do we even start with GenAI?”
But the context behind that question is very different. In large organisations, there are budgets, teams, governance committees, structured programs and projects. In small businesses, there’s you, a small team, and the pressure of everyday operations.
This article looks at why GenAI adoption isn’t just a scaled-down version of enterprise AI adoption – and why small businesses need a different, more streamlined approach.
Could Microsoft’s Researcher Agent Signal the End of My Copilot Studio M365 Research Agents?
In the ever-changing world of enterprise GenAI, the new Researcher Agent functionality in Microsoft 365 Copilot got me questioning whether I should retire my own Copilot Studio-built M365 Research Agent. So I tested it, and found only one minor flaw: I couldn’t select sub-folders from SharePoint sites.
RFP Automation and Local AI: What Microsoft’s New Agent Framework (MAF) Means for Business
I’ve been experimenting with Microsoft’s new Agent Framework (MAF) – but instead of connecting to cloud systems, I’ve been running it entirely offline on a private Amazon EC2 instance. My goal was to see whether this new, unified framework could function offline, work with offline LLMs, and process PDFs (of RFPs in this case) – extracting questions and even drafting answers – all without leaving a secure, private environment.
It worked remarkably well. But what’s even more interesting is what this means for organisations on multiple fronts: the ability to run sophisticated agent workflows locally, maintain full control of data, and start automating complex knowledge tasks such as RFP responses, compliance checks, or policy reviews.
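To make the extraction step concrete: once a local PDF library has turned the RFP into text, pulling out candidate questions needs no cloud calls at all. The heuristic below is purely illustrative (my assumption of what a first-pass filter might look like, not the actual MAF workflow):

```python
# Illustrative offline extraction step: split RFP text into sentences and
# keep those that read like questions or numbered requirements.
import re

def extract_questions(rfp_text: str) -> list:
    """Return sentences ending in '?' or starting with a section number."""
    sentences = re.split(r"(?<=[.?!])\s+", rfp_text)
    return [s.strip() for s in sentences
            if s.strip().endswith("?")
            or re.match(r"^\d+(\.\d+)*\s", s.strip())]

sample = ("1.1 Describe your security posture. "
          "What certifications do you hold? "
          "Our office is closed on Fridays.")
print(extract_questions(sample))
# -> ['1.1 Describe your security posture.', 'What certifications do you hold?']
```

In practice an LLM handles the messy cases a regex can’t, but keeping a deterministic first pass like this makes the agent’s input smaller and its behaviour easier to audit.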
