Author: Steve Harris

When AI Agents Stop Being a Project and Start Being Headcount

A LinkedIn post from Clark University’s advancement team stopped me mid-scroll – not because “7 AI agents” is technically significant, but because it’s a new kind of organizational announcement. They describe software components the way you’d describe hires: clear roles, scopes, budgets, governance, and “human oversight”… plus an explicit boundary around relationship work.

It’s a glimpse of how automation can be socialized inside organizations.

GenAI at Work – Listening to Concerns and Leading with Clarity

Rolling out Generative AI in the workplace is more about people than platforms. Over the past year and a half, I’ve helped a number of organisations launch GenAI initiatives – and nearly every one of them has surfaced questions, worries, or resistance from staff (with some common themes). These concerns are not signs of failure; they’re signs that people are paying attention. In this article, I want to share the most common concerns I’ve encountered – and how organisations can respond in ways that build trust, not tension.

Agentic Coding Is a Power Tool. Don’t Use It Like a Glue Gun.

Agentic coding tools (like Claude Code and OpenAI’s Codex agents) are making it ridiculously easy to turn an idea into working software. That’s exciting. It’s also where people can get into trouble – especially when people who aren’t developers or solution designers use these tools to build systems they can’t confidently secure, test, operate, or maintain.

Below is a pragmatic way to think about agentic tools: when they’re a superpower, when they’re a liability, and how to get value without accidentally creating a future incident (or an unmaintainable mess).

GenAI Adoption Opinions Seem Polarized – Pragmatism Will Win

If you follow the GenAI conversation closely, it can feel like whiplash – like there is no agreement. One day it’s “AI is rewriting the economy,” the next it’s “AI is all hype and risk.” It feels like we’re now in a “Great Divergence” – not just differing opinions, but two parallel realities shaped by incentives and where you sit in the organisation (something I have seen in my own work – some organisations are embracing AI and realising the benefits, while others are flatly not interested).

Claude Cowork – Before You Install an AI “Coworker”: Treat Agentic Tools Like Privileged Access

The newest wave of “desktop automation” tools looks genuinely useful – and materially different from the assistants we’ve gotten used to. Tools like Claude Cowork and agentic browsers such as Perplexity Comet and ChatGPT Atlas don’t just answer questions; they can take actions across your files, tabs, and workflows. That shift changes the risk profile, fast.

Risk × Friction: How Much Human Oversight Should You Remove with GenAI?

GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.

So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
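That risk-versus-friction framing can be sketched as a simple scoring exercise. This is purely an illustrative sketch, not anything from the article itself: the task names, scores, and thresholds are all hypothetical assumptions, chosen only to show how the two axes interact.

```python
# Illustrative sketch: score each task by decision risk and by the
# friction (human review) currently applied, then ask whether removing
# oversight would strengthen the system or make it more fragile.
# All tasks, scores, and thresholds below are hypothetical.

def oversight_call(risk: int, friction: int) -> str:
    """risk and friction each scored 1 (low) to 5 (high)."""
    if risk <= 2 and friction >= 3:
        return "remove friction: safe to automate"
    if risk >= 4 and friction <= 2:
        return "add friction: already under-reviewed"
    if risk >= 4:
        return "keep the human in the loop"
    return "pilot with spot-checks"

tasks = {
    "meeting-notes summary": (1, 4),
    "customer-facing pricing email": (5, 2),
    "contract clause drafting": (5, 5),
}

for name, (risk, friction) in tasks.items():
    print(f"{name}: {oversight_call(risk, friction)}")
```

The point of the sketch is that “high friction” is not automatically waste: for the high-risk tasks, the friction is the system’s safety margin.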

How I Cut Drafting RFP Responses from Hours and Days to Minutes with Multi-Agent Orchestration

Responding to RFPs used to feel like running a marathon (it’s just as painful as being on the RFP assessment team) – days of effort, multiple people, and thousands in costs. Recently, I asked myself: could AI make this easier? What started as an experiment (we are always experimenting at the edge of this technology) with Microsoft’s Agent Framework on a local setup evolved into a multi-agent orchestration system that drafts RFP responses in under 15 minutes.
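The general pattern behind such a system can be sketched in a few lines. To be clear, this is not the actual Microsoft Agent Framework implementation described above – the agent roles, prompts, and the `call_llm` stub are hypothetical stand-ins that only illustrate the sequential-orchestration pattern: specialised agents that each refine a shared draft.

```python
# Minimal sketch of sequential multi-agent orchestration for RFP drafting.
# Roles, prompts, and call_llm are illustrative assumptions, not the
# author's actual implementation.

from dataclasses import dataclass

def call_llm(system: str, user: str) -> str:
    # Stand-in for a real model call via an SDK; returns a canned string
    # so the sketch runs without credentials or network access.
    return f"[{system.split(':')[0]} output for: {user[:40]}...]"

@dataclass
class Agent:
    name: str
    system_prompt: str

    def run(self, task: str) -> str:
        return call_llm(f"{self.name}: {self.system_prompt}", task)

# Each agent owns one stage; the draft flows through the pipeline.
pipeline = [
    Agent("Analyst", "Extract requirements and evaluation criteria from the RFP."),
    Agent("Writer", "Draft a response section for each requirement."),
    Agent("Reviewer", "Check the draft for gaps, tone, and compliance."),
]

draft = "RFP text: provide a managed analytics platform for 500 users..."
for agent in pipeline:
    draft = agent.run(draft)  # each agent's output becomes the next input

print(draft)
```

The design choice worth noting is the narrow scope per agent: a single do-everything prompt is harder to debug than a chain of small, inspectable stages.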

GenAI Is a Powerful Hammer – Not Everything is a Nail

Generative AI is everywhere and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.

But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”

Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.

This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.

When Prompts Feel Like Programming Blindfolded

After more than a year, on and off, building agents across LangFlow, Microsoft Agent Framework, and Copilot Studio – from PoCs to my own real-world deployments – one theme keeps nagging at me: prompt debugging feels like a black box adventure.

In traditional software development, you can step through the code, trace errors, and monitor state changes with powerful tools. But with natural language programming? You’re trusting your instructions to a probabilistic model whose reasoning you rarely get to see.

And that changes everything.

GenAI Workflows – Sometimes Friction is Good (… and systems are fragile)

One of the challenges of GenAI adoption is simply getting started: picking tools, running pilots, training staff, and rolling out a plan. Another major challenge is where and how GenAI gets introduced into already fragile, tightly coupled organisational systems.

I was watching a Veritasium video (The Strange Math That Predicts (Almost) Anything) about complex systems and the moment they reach a “critical state.” A forest can look calm and stable right up until a single spark turns it into a massive wildfire – not because the spark was special, but because the system was already primed for runaway behaviour.

In a general sense, many organisations today look just like that forest: in a critical state.