Tag: #ResponsibleAI

When AI Agents Stop Being a Project and Start Being Headcount

A LinkedIn post from Clark University’s advancement team stopped me mid-scroll – not because “7 AI agents” is technically significant, but because it’s a new kind of organizational announcement. They describe software components the way you’d describe hires: clear roles, scopes, budgets, governance, and “human oversight”… plus an explicit boundary around relationship work.

It’s a glimpse of how automation can be socialized inside organizations.

GenAI at Work – Listening to Concerns and Leading with Clarity

Rolling out Generative AI in the workplace is more about people than platforms. Over the past year and a half, I’ve helped a number of organisations launch GenAI initiatives – and nearly every one of them has surfaced questions, worries, or resistance from staff (with some common themes). These concerns are not signs of failure; they’re signs that people are paying attention. In this article, I want to share the most common concerns I’ve encountered – and how organisations can respond in ways that build trust, not tension.

Risk × Friction: How Much Human Oversight Should You Remove with GenAI?

GenAI is an accelerant. It speeds up decisions, output creation, and information flow, often without strengthening the system underneath. And many organisations are already running “hot”: highly optimised, tightly interconnected, little slack, and dependent on tacit knowledge.

So the real question isn’t just “How much can we automate?” It’s also “Where does speed strengthen the system – and where does speed increase fragility?”

GenAI Is a Powerful Hammer – Not Everything Is a Nail

Generative AI is everywhere, and it’s tempting to reach for it whenever something feels messy, slow, or frustrating.

But when a tool is this powerful – and this non-deterministic – the real question isn’t “Can we use GenAI?” It’s “Should we?”

Used well, GenAI boosts productivity. Used indiscriminately, it quietly introduces risk.

This is where GenAI stops being just a productivity tool and starts becoming a governance challenge.

When Prompts Feel Like Programming Blindfolded

After more than a year, on and off, building agents across LangFlow, Microsoft Agent Framework, and Copilot Studio – from PoCs to my own real-world deployments – one theme keeps nagging at me: prompt debugging feels like a black box adventure.

In traditional software development, you can step through the code, trace errors, and monitor state changes with powerful tools. But with natural language programming? You’re trusting your instructions to a probabilistic model whose reasoning you rarely get to see.

And that changes everything.
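One partial remedy for that opacity is to make every model call leave a trace you can replay and diff later. As a minimal sketch (the function names and log format here are my own illustration, not any framework’s API), a thin wrapper can record each prompt/response pair to a JSONL file:

```python
import json
import time
from typing import Callable


def traced_call(model: Callable[[str], str], prompt: str,
                log_path: str = "prompt_trace.jsonl") -> str:
    """Call a model function and append the prompt/response pair,
    with a timestamp and latency, to a JSONL trace file so that
    runs can be replayed and compared later."""
    start = time.time()
    response = model(prompt)
    record = {
        "ts": start,
        "latency_s": round(time.time() - start, 3),
        "prompt": prompt,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response


# Stand-in for a real LLM client call, to keep the sketch self-contained
fake_model = lambda p: f"echo: {p}"
print(traced_call(fake_model, "Summarise the RFP in one line."))
```

It doesn’t open the black box, but it does give you the equivalent of a flight recorder: when an agent misbehaves, you can at least see exactly what went in and what came out.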

Reading LLMs Like Patients: What DSM-5 Can Teach Us About AI Behaviour

Most of the time, when we talk about large language models (LLMs), we end up in the weeds of training data and parameter counts. Useful if you’re a researcher; less useful if you’re a leader, policymaker, or practitioner trying to answer a simpler question:

“Is this thing actually behaving in a way I’m comfortable with?”

Two realities make that hard:

The training data is too large for humans to grasp in any meaningful way.

The models are too complex for us to truly understand their internal “decision making.”

But their outputs – the words they put on the page – are something we can read, interrogate, and assess.

RFP Automation and Local AI: What Microsoft’s New Agent Framework (MAF) Means for Business

I’ve been experimenting with Microsoft’s new Agent Framework (MAF) – but instead of connecting to cloud systems, I’ve been running it entirely offline on a private Amazon EC2 instance. My goal was to see whether this new, unified framework could run offline, work with local LLMs, and process PDFs (RFPs in this case): extract the questions, and even draft answers – all without leaving a secure, private environment.

It worked remarkably well. But what’s even more interesting is what this means for organizations on multiple fronts: the ability to run sophisticated Agent workflows locally, maintain full control of data, and start automating complex knowledge tasks such as RFP responses, compliance checks, or policy reviews.
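The pipeline above hinges on one concrete step: pulling candidate questions out of extracted PDF text before handing them to a local model for drafting. As a framework-agnostic sketch (this is not MAF’s API – the heuristic and function name are my own illustration), question extraction might look like:

```python
import re


def extract_questions(rfp_text: str) -> list[str]:
    """Pull candidate questions out of raw RFP text.

    Heuristic sketch: a 'question' is either a sentence ending in '?'
    or a numbered requirement line such as '3.1 Describe your ...'.
    """
    questions = []
    # Pass 1: sentences ending with a question mark
    for match in re.finditer(r"[^.?!\n]*\?", rfp_text):
        q = match.group().strip()
        if q:
            questions.append(q)
    # Pass 2: numbered 'Describe/Explain/Provide ...' requirement lines
    for line in rfp_text.splitlines():
        line = line.strip()
        if re.match(r"^\d+(\.\d+)*\s+(Describe|Explain|Provide|Detail)\b", line):
            questions.append(line)
    return questions


sample = """3.1 Describe your data retention policy.
What certifications does your platform hold?
Pricing is listed in Appendix B."""
print(extract_questions(sample))
```

In practice the heuristics would be far richer (and an LLM can do this step itself), but even a crude extractor like this gives the agent a clean list of items to answer, one by one, against a local knowledge base.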

GenAI Skills Gap: Why Businesses Can’t Wait for Education Institutions

As educational establishments seemingly wrestle with how, or if, Generative AI (GenAI) should be formally integrated into their curricula, the conversation seems to circle around a familiar tension: education versus training (I’d love to hear opinions from people embedded in the education space).

Should STEM degrees remain focused on deep technical foundations, or adapt to include the practical AI skills employers will expect? One promising middle ground is adding humanities courses that sharpen critical thinking, ethics, and communication – capabilities essential for using AI responsibly. The challenge is finding the right balance so educational establishments can preserve their mission to educate while preparing graduates for the realities of an AI-enabled workplace.

GenAI Procurement: Why It’s Not Business as Usual

Buying a Generative AI solution, whether discrete or embedded, isn’t like buying a CRM or ERP system. It’s a whole new ballgame, one where you can’t always see the rules, and the players (the models) can sometimes make up their own. GenAI procurement requires a fresh playbook. Let’s break down what’s changing, why it matters, and how you can stay ahead.