Buying a Generative AI solution, whether it is a discrete or embedded GenAI solution, isn’t like buying a CRM or ERP system. It’s a whole new ballgame, one where you can’t always see the rules, and the players (the models) can sometimes make up their own. GenAI procurement requires a fresh playbook. Let’s break down what’s changing, why it matters, and how you can stay ahead.
Category: Risk Management
Unlocking the Power of Generative AI: Why Your Organisation Needs a Maturity and Readiness Assessment
Generative AI (GenAI) is revolutionising industries by automating content creation, enhancing decision-making, and sparking innovation. But before jumping in, it’s essential to pause and do the groundwork. In today’s fast-moving, hype-driven tech landscape, that can feel a bit old-school, but understanding your organisation’s readiness isn’t just prudent, it’s foundational. A GenAI Maturity and Readiness Assessment is a crucial step to ensure that the excitement of innovation is backed by the stability of preparation.
Mitigating Risks in LLMs: How Observability Enhances AI Reliability
Agentic AI and LLM tools enable remarkable capabilities, from automating workflows to generating content and insights. As I spend more time in Langflow and begin to appreciate the power of the systems that can be built, I started wondering how they could be monitored – how do we implement observability? Then a note about Datadog LLM Observability came across my feed and got me thinking this was worth a deeper look.
GenAI Projects: 30% will be dropped by the end of 2025… hmmm… is that really a problem?
A week or two ago I came across an article from Business Standard that summarised a Gartner report suggesting that over 30% of GenAI projects won’t survive beyond proof of concept (PoC) and will be dropped by the end of 2025. Having run a large project portfolio, I’m always interested in stats like this, so I decided to pick at it a little and see whether this is indeed an issue, or just a headline.
The Role of Unions in an Agentic AI World
The coming rise of Agentic AI – AI that operates more like an independent worker than a traditional tool – will fundamentally transform the landscape of work. This evolution prompts an important question: what role do unions play in a world where AI acts as a virtual employee?
HR’s Role in the World of Agentic AI: Shaping the Future of Virtual Employees
The world of AI is evolving rapidly, moving from passive tools to dynamic, “agentic” AI – technology that can operate autonomously, making decisions, interacting with employees, and handling tasks like a true virtual team member. While this shift brings exciting opportunities for efficiency, it also brings new challenges for oversight, ethics, and integration into workplace culture. HR stands at the heart of this, ensuring that these “virtual employees” align with company values, policies, and workforce goals.
The GenAI Skills Gap Seems Real: Are most people just getting started?
Am I living in a GenAI echo chamber? While my LinkedIn feed overflows with the latest AI breakthroughs and ‘must-try’ features, my experience in the trenches tells a different story. As a volunteer leading GenAI projects, delivering prompt engineering training, and talking about GenAI in the non-profit sector, I’ve witnessed a gulf between the breathless pace of AI innovation and how most people actually use these tools day-to-day. (I should say that I have not noticed resistance – concern, yes, but not resistance – and in every case I see the ‘wow’ moment when people realise the possibilities and practical applications.)
LLM Security – The OWASP Top 10 for LLMs & What You Need to Know
As AI continues to revolutionise industries, understanding and mitigating the security challenges around large language models (LLMs) is critical. The OWASP Top 10 for LLMs is a comprehensive guide to the most pressing risks faced by these models.
Identifying AI Risks: A New Tool for Businesses
Understanding the risks in any organisation or project takes time and usually involves one or more risk workshops, more often than not starting with a blank sheet of paper. The Massachusetts Institute of Technology (MIT) has provided us with a shortcut for identifying risks associated with artificial intelligence through a new resource, the AI Risk Repository – saving time and improving the breadth and depth of risk identification.
AI’s Role in Reducing Risk in the SDLC (e.g. CrowdStrike)
In the wake of the recent CrowdStrike incident it’s easy to become an armchair critic. But for those with experience in IT, isn’t it likely that such issues are multi-dimensional, spanning technical, managerial, cultural, and even simple human errors?