Tag: #AIGovernance

When AI Agents Stop Being a Project and Start Being Headcount

A LinkedIn post from Clark University’s advancement team stopped me mid-scroll – not because “7 AI agents” is technically significant, but because it’s a new kind of organizational announcement. They describe software components the way you’d describe hires: clear roles, scopes, budgets, governance, and “human oversight”… plus an explicit boundary around relationship work.

It’s a glimpse of how automation can be socialized inside organizations.

GenAI Adoption Opinions Seem Polarized – Pragmatism Will Win

If you follow the GenAI conversation closely, it can feel like whiplash – like there is no agreement. One day it’s “AI is rewriting the economy”; the next it’s “AI is all hype and risk.” It feels like we’re now in a “Great Divergence” – not just differing opinions, but two parallel realities shaped by incentives and by where you sit in the organization. I’ve seen this in my own work: some organisations are embracing AI and realizing the benefits, while others are flatly not interested.

Claude Cowork – Before You Install an AI “Coworker”: Treat Agentic Tools Like Privileged Access

The newest wave of “desktop automation” tools looks genuinely useful – and materially different from the assistants we’ve gotten used to. Tools like Claude Cowork and agentic browsers such as Perplexity Comet and ChatGPT Atlas don’t just answer questions; they can take actions across your files, tabs, and workflows. That shift changes the risk profile, fast.
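One way to make “treat it like privileged access” concrete is to default-deny everything an agent can do and allow only explicitly scoped actions, the way you would scope a new hire’s accounts. The sketch below is purely illustrative – the action names, folders, and policy are hypothetical, not taken from any actual product:

```python
# Hypothetical sketch: gate an agent's actions behind an explicit,
# default-deny allowlist, as you would with privileged access.

ALLOWED_ACTIONS = {
    "read_file": {"/shared/reports"},  # read-only, one folder
    "draft_email": {"*"},              # drafting (not sending) is low risk
}
BLOCKED_ACTIONS = {"send_email", "delete_file", "install_software"}

def authorize(action: str, target: str) -> bool:
    """Return True only if the action is explicitly allowed for this target."""
    if action in BLOCKED_ACTIONS:
        return False
    scopes = ALLOWED_ACTIONS.get(action)
    if scopes is None:
        return False  # default-deny: anything unlisted is refused
    return "*" in scopes or any(target.startswith(s) for s in scopes)

print(authorize("read_file", "/shared/reports/q3.pdf"))  # True
print(authorize("send_email", "ceo@example.com"))        # False
print(authorize("read_file", "/home/user/.ssh/keys"))    # False
```

The point of the sketch is the shape of the policy, not the specifics: unlisted actions fail closed, and high-impact actions stay blocked regardless of target.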

Risk × Friction: How Much Human Oversight Should You Remove with GenAI?

GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.

So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
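One way to make that question operational is a rough two-axis screen: how costly is the decision if it goes wrong, and how much useful friction (review, sign-off, tacit checks) would automating it remove? The scoring below is a hypothetical illustration of that screen, not a formal methodology – the ratings, thresholds, and example workflows are all made up:

```python
# Hypothetical sketch: screen automation candidates by risk vs. the
# human oversight ("friction") that automating them would remove.

def recommend(risk: int, friction_removed: int) -> str:
    """Both inputs are rough 1-5 ratings assigned by the team."""
    if risk >= 4 and friction_removed >= 4:
        return "keep human oversight"  # speed here increases fragility
    if risk <= 2 and friction_removed <= 2:
        return "automate"              # speed strengthens the system
    return "pilot with review"         # automate, but keep a checkpoint

candidates = {
    "meeting-notes summary": (1, 1),
    "customer refund approval": (4, 5),
    "draft sales forecast": (3, 2),
}
for name, (risk, friction) in candidates.items():
    print(f"{name}: {recommend(risk, friction)}")
```

Even a crude screen like this forces the conversation the section argues for: naming where oversight is load-bearing before removing it.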

GenAI Is a Powerful Hammer – Not Everything is a Nail

Generative AI is everywhere, and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.

But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”

Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.

This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.

GenAI Workflows – Sometimes Friction is Good (… and systems are fragile)

One of the challenges of GenAI adoption is simply getting started: picking tools, running pilots, training staff, and rolling out a plan. Another major challenge is where and how GenAI gets introduced into already fragile, tightly coupled organisational systems.

I was watching a Veritasium video (The Strange Math That Predicts (Almost) Anything) about complex systems and the moment they reach a “critical state.” A forest can look calm and stable right up until a single spark turns it into a massive wildfire. Not because the spark was special but because the system was already primed for runaway behaviour.

In a general sense, many organisations today look just like that forest: already in a critical state.

2025 Was Supposed to Be the Year of Agents – Is 2026 the Turning Point?

Back in 2024 (it seems so long ago now) I wrote about agents (links below) and cautioned about how early we were in their evolution. Almost a year later, we seem to be in a completely different place – a shift brought back to mind by the recent announcements from:

– Langflow – releasing v1.6

– Microsoft – consolidating AutoGen and Semantic Kernel into the Microsoft Agent Framework

– OpenAI – releasing AgentKit

Why embedded AI features may already be in your tools and how to manage the risk

You didn’t sign up for an AI platform, but suddenly your HR tool summarises resumes, your file-sharing service suggests email replies, and your CRM is auto-generating forecasts.

Welcome to the new world of silent AI rollouts, where vendors quietly add GenAI features to your software stack, often without clear notice, control, or consent. It’s not just a tech issue; it’s a business, legal, and risk management issue.

The Ground Keeps Shifting: Why GenAI Feels So Unsettling Right Now

If you’ve been using GenAI tools like Microsoft Copilot or ChatGPT in your day-to-day work, you’ve probably had this experience: something that used to work, like a prompt you carefully refined, is suddenly behaving differently. Maybe it’s not as helpful. Maybe it’s giving unexpected results (that’s what happened to me this week). Maybe it just… stopped working entirely.