We are moving from “Active AI” (where you ask a bot to do something) to “Ambient AI” (where the system anticipates the need and does it for you). Put another way: Autonomous Agents running in the background, becoming ambient.
Tag: #AgenticAI
When AI Agents Stop Being a Project and Start Being Headcount
A LinkedIn post from Clark University’s advancement team stopped me mid-scroll – not because “7 AI agents” is technically significant, but because it’s a new kind of organizational announcement. They describe software components the way you’d describe hires: clear roles, scopes, budgets, governance, and “human oversight”… plus an explicit boundary around relationship work.
It’s a glimpse of how automation can be socialized inside organizations.
Agentic Coding Is a Power Tool. Don’t Use It Like a Glue Gun.
Agentic coding tools (such as Claude Code and OpenAI’s Codex agents) are making it ridiculously easy to turn an idea into working software. That’s exciting. It’s also where people can get into trouble – especially when people who aren’t developers or solution designers use these tools to build systems they can’t confidently secure, test, operate, or maintain.
Below is a pragmatic way to think about agentic tools: when they’re a superpower, when they’re a liability, and how to get value without accidentally creating a future incident (or an unmaintainable mess).
Claude Cowork – Before You Install an AI “Coworker”: Treat Agentic Tools Like Privileged Access
The newest wave of “desktop automation” tools looks genuinely useful – and materially different from the assistants we’ve gotten used to. Tools like Claude Cowork and agentic browsers such as Perplexity Comet and ChatGPT Atlas don’t just answer questions; they can take actions across your files, tabs, and workflows. That shift changes the risk profile, fast.
Risk × Friction: How Much Human Oversight Should You Remove with GenAI?
GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organizations are already running “hot”: highly optimized, tightly interconnected, little slack, and dependent on tacit knowledge.
So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”
2025 Was Supposed to Be the Year of Agents – Is 2026 the Turning Point?
Back in 2024 (it seems so long ago now) I wrote about Agents (links below) and cautioned about how early we were in their evolution. Now, almost a year later, we seem to be in a completely different place – a shift brought back to mind by recent announcements from:
– Langflow – releasing v1.6
– Microsoft – consolidating AutoGen and Semantic Kernel into the Microsoft Agent Framework
– OpenAI – releasing AgentKit
