For a long time, software decisions have been framed as a fairly binary choice: do we build something ourselves, or do we buy it from a vendor? That framing still exists, but it needs to be expanded. With the rise of AI coding tools, workflow orchestration, and the possibility of systems that can generate logic at runtime, the choices have become far broader – and far more interesting.
When AI Agents Stop Being a Project and Start Being Headcount
A LinkedIn post from Clark University’s advancement team stopped me mid-scroll – not because “7 AI agents” is technically significant, but because it’s a new kind of organizational announcement. They describe software components the way you’d describe hires: clear roles, scopes, budgets, governance, and “human oversight”… plus an explicit boundary around relationship work.
It’s a glimpse of how automation can be socialized inside organizations.
GenAI at Work – Listening to Concerns and Leading with Clarity
Rolling out Generative AI in the workplace is more about people than platforms. Over the past year and a half, I’ve helped a number of organisations launch GenAI initiatives – and nearly every one of them has surfaced questions, worries, or resistance from staff (with some common themes). These concerns are not signs of failure; they’re signs that people are paying attention. In this article, I want to share the most common concerns I’ve encountered – and how organisations can respond in ways that build trust, not tension.
Agentic Coding Is a Power Tool. Don’t Use It Like a Glue Gun.
Agentic coding tools (like Claude Code and OpenAI’s Codex agents) are making it ridiculously easy to turn an idea into working software. That’s exciting. It’s also where people can get into trouble – especially when people who aren’t developers or solution designers use these tools to build systems they can’t confidently secure, test, operate, or maintain.
Below is a pragmatic way to think about agentic tools: when they’re a superpower, when they’re a liability, and how to get value without accidentally creating a future incident (or an unmaintainable mess).
Risk × Friction: How Much Human Oversight Should You Remove with GenAI?
GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.
So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
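The Risk × Friction framing can be made concrete with a small decision table. The sketch below is purely illustrative – the thresholds, scales, and labels are my hypothetical examples, not a published rubric from the article:

```python
# Hypothetical sketch: mapping risk (impact of a wrong GenAI output, 1-5)
# and friction (cost of human review, 1-5) to a suggested oversight level.
# The thresholds below are illustrative assumptions, not a standard.

def oversight_level(risk: int, friction: int) -> str:
    """Suggest how much human oversight to keep for a GenAI step."""
    if risk >= 4:
        # High-impact decisions keep a human in the loop regardless
        # of how expensive review is.
        return "human approves every output"
    if risk >= 2 and friction <= 3:
        # Moderate risk and affordable review: spot-check outputs.
        return "human reviews a sample"
    # Low risk: automate fully, but monitor aggregate behaviour.
    return "automate with monitoring"

print(oversight_level(risk=5, friction=1))  # human approves every output
print(oversight_level(risk=2, friction=2))  # human reviews a sample
print(oversight_level(risk=1, friction=5))  # automate with monitoring
```

The point of the sketch is the shape of the question, not the numbers: oversight should be a deliberate function of both risk and friction, not whatever the tool makes easiest.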
How I Cut Drafting RFP Responses from Hours and Days to Minutes with Multi-Agent Orchestration
Responding to RFPs used to feel like running a marathon (it’s just as painful as being on the RFP assessment team) – days of effort, multiple people, and thousands in costs. Recently, I asked myself: Could AI make this easier? What started as an experiment (we are always experimenting with the edge of this technology) with Microsoft’s Agent Framework on a local setup evolved into a multi-agent orchestration system that drafts RFP responses in under 15 minutes.
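The orchestration pattern behind that system can be sketched framework-agnostically: a coordinator fans requirements out to specialist drafting agents, then a reviewer agent merges and polishes the result. The code below is a minimal illustration of that pattern only – the `llm()` stub and function names are my assumptions, not the Microsoft Agent Framework API or the actual implementation:

```python
# Framework-agnostic sketch of a multi-agent drafting pipeline:
# coordinator -> specialist drafters -> reviewer. llm() is a stub
# standing in for a real model call.

def llm(role: str, prompt: str) -> str:
    """Stub for a model call; replace with a real client."""
    return f"[{role}] draft for: {prompt}"

def draft_rfp_response(requirements: list[str]) -> str:
    sections = []
    for req in requirements:
        # Each RFP requirement is handled by a specialist drafting agent.
        sections.append(llm("drafter", req))
    combined = "\n\n".join(sections)
    # A reviewer agent checks tone and consistency before humans see it.
    return llm("reviewer", combined)

print(draft_rfp_response(["Security posture", "Pricing model"]))
```

In a real setup each `llm()` call would carry its own system prompt, scope, and guardrails – the speed-up comes from running the drafters in parallel while keeping a single review gate before any human sign-off.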
GenAI Is a Powerful Hammer – Not Everything Is a Nail
Generative AI is everywhere, and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.
But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”
Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.
This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.
When Prompts Feel Like Programming Blindfolded
After more than a year, on and off, building agents across LangFlow, Microsoft Agent Framework, and Copilot Studio – from PoCs to my own real-world deployments – one theme keeps nagging at me: prompt debugging feels like a black box adventure.
In traditional software development, you can step through the code, trace errors, and monitor state changes with powerful tools. But with natural language programming? You’re trusting your instructions to a probabilistic model whose reasoning you rarely get to see.
And that changes everything.
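One partial mitigation is to treat every prompt/response pair as a traceable event, so at least the inputs and outputs can be replayed and compared across runs. The sketch below is a hypothetical minimal logger – `call_model()` is a stub, and production stacks would typically use proper tracing tooling rather than a hand-rolled list:

```python
# Illustrative sketch: logging each prompt/response pair with enough
# metadata to replay and compare runs. call_model() is a stub standing
# in for a real API call.

import time
import uuid

def call_model(prompt: str) -> str:
    return "stubbed model output"  # replace with a real model client

def traced_call(prompt: str, log: list) -> str:
    trace_id = str(uuid.uuid4())
    start = time.time()
    response = call_model(prompt)
    # Record enough context to diff behaviour between prompt versions.
    log.append({
        "trace_id": trace_id,
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.time() - start, 3),
    })
    return response

log = []
traced_call("Summarise this RFP section.", log)
print(log[0]["trace_id"], log[0]["latency_s"])
```

It doesn’t open the black box – the model’s reasoning stays hidden – but it restores at least the trace-and-replay discipline we take for granted in traditional debugging.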
GenAI Workflows – Sometimes Friction is Good (… and systems are fragile)
One of the challenges of GenAI adoption is simply getting started: picking tools, running pilots, training staff, and rolling out a plan. Another major challenge is where and how GenAI gets introduced into already fragile, tightly coupled organisational systems.
I was watching a Veritasium video (The Strange Math That Predicts (Almost) Anything) about complex systems and the moment they reach a “critical state.” A forest can look calm and stable right up until a single spark turns it into a massive wildfire – not because the spark was special, but because the system was already primed for runaway behaviour.
In a general sense, many organisations today look just like that forest: in a critical state.
Reading LLMs Like Patients: What DSM-5 Can Teach Us About AI Behaviour
Most of the time, when we talk about large language models (LLMs), we end up in the weeds of training data and parameter counts. Useful if you’re a researcher; less useful if you’re a leader, policymaker, or practitioner trying to answer a simpler question:
“Is this thing actually behaving in a way I’m comfortable with?”
Two realities make that hard:
The training data is too large for humans to grasp in any meaningful way.
The models are too complex for us to truly understand their internal “decision making.”
But their outputs – the words they put on the page – are something we can read, interrogate, and assess.
