Blog

When Tiny Language Models Meet Machine Learning: Smarter Insights from Sensor Data

Every business that relies on equipment (often in remote or inhospitable environments) knows the challenge: sensors produce oceans of numbers, devices spit out cryptic logs, and your teams are left piecing it together under pressure. Machine learning (ML) has been great at crunching the numbers. Now, tiny language models (TLMs) like Gemma 3 270M provide an opportunity to take this one step further: reading the logs, interpreting anomalies, and explaining issues in plain language. There appears to be real potential in combining these approaches. (For those more technically inclined, I have included a conceptual design and explanation at the end of the article.)

Unlock the Power of Small: Fine-Tuning Gemma 3 270M (A Business Perspective)

We’re all familiar with the massive, powerful language models that run on vast server farms. What if the next big breakthrough in AI isn’t about being bigger, but smaller?

Over the weekend I fine-tuned Gemma 3 270M end-to-end (LoRA → merge → GGUF → Ollama) and ran it locally. It wasn’t perfect (to be honest, it was more of a learning exercise to understand the process), but it was fast, inexpensive, and genuinely useful for narrow, domain-specific tasks. Here’s what tiny models are, why they matter to business, and how to get started without boiling the ocean.
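For the curious, the last two legs of that pipeline (merge → GGUF → Ollama) can be sketched roughly as below. This is a minimal illustration, not my exact setup: the directory and model names are placeholders, and it assumes you have already merged your LoRA adapter into the base weights (e.g. with Hugging Face PEFT) and have a local checkout of llama.cpp and an Ollama install.

```shell
# Convert the merged Hugging Face checkpoint to GGUF using llama.cpp's
# conversion script (paths are hypothetical — point these at your own files):
python llama.cpp/convert_hf_to_gguf.py ./gemma3-270m-merged \
  --outfile gemma3-270m-ft.gguf

# Register the GGUF file with Ollama via a minimal Modelfile:
echo 'FROM ./gemma3-270m-ft.gguf' > Modelfile
ollama create gemma3-270m-ft -f Modelfile

# Run the fine-tuned model locally:
ollama run gemma3-270m-ft "Summarise this sensor log in plain language: ..."
```

The appeal of this workflow is that everything after fine-tuning happens on your own machine – no GPU cluster, no API bill, no data leaving the building.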

Tiny Language Models, Big Impact: Why Google’s Gemma 3 270M Matters for Business

Big AI models often steal the spotlight, but sometimes the smartest move is going smaller. Google’s new Gemma 3 270M shows just how powerful a compact, efficient language model can be – especially when it runs offline, on low-power devices, or in remote locations. For businesses, this isn’t just a technical breakthrough; it’s a new frontier of opportunity.

GenAI Skills Gap: Why Businesses Can’t Wait for Education Institutions

As educational establishments seemingly wrestle with how, or if, Generative AI (GenAI) should be formally integrated into their curricula, the conversation seems to circle around a familiar tension: education versus training (I’d love to hear from people embedded in the education space for their opinion).

Should STEM degrees remain focused on deep technical foundations, or adapt to include the practical AI skills employers will expect? One promising middle ground is adding humanities courses that sharpen critical thinking, ethics, and communication – capabilities essential for using AI responsibly. The challenge is finding the right balance so educational establishments can preserve their mission to educate while preparing graduates for the realities of an AI-enabled workplace.

GenAI Training Falling Short? Why an Exploratory Mindset Beats Just “Knowing How to Use It”

Generative AI is becoming a staple in the modern workplace – but something’s not clicking. Despite the rollout of training programs and hands-on tools, it seems that some organisations still struggle to see meaningful impact. Why? Because knowing how to use GenAI isn’t the same as knowing how to work with it. I have been delivering training on GenAI for over a year now, and the trait that stands out among the true adopters is an exploratory mindset – that’s what really makes the difference.

Why embedded AI features may already be in your tools and how to manage the risk

You didn’t sign up for an AI platform, but suddenly your HR tool summarises resumes, your file-sharing service suggests email replies, and your CRM is auto-generating forecasts.

Welcome to the new world of silent AI rollouts, where vendors quietly add GenAI features to your software stack, often without clear notice, control, or consent. It’s not just a tech issue – it’s a business, legal, and risk-management issue.

The Ground Keeps Shifting: Why GenAI Feels So Unsettling Right Now

If you’ve been using GenAI tools like Microsoft Copilot or ChatGPT in your day-to-day work, you’ve probably had this experience: something that used to work, like a prompt you carefully refined, is suddenly behaving differently. Maybe it’s not as helpful. Maybe it’s giving unexpected results (that’s what happened to me this week). Maybe it just… stopped working entirely.

Unlock Your Legacy Code: The GenAI Shortcut for BAs & Devs

It happened to me quite a few years ago, when I resurrected some of my C code from the ’90s and brought it up to date – and if you’re a Business Analyst or Developer, you’ve been there as well: trying to decipher a legacy system with outdated documentation and only a handful of power users to guide you. Traditionally, we’ve relied on user interviews and painstaking manual testing to map out functionality. Using LLMs alongside these more traditional methods can give us extra insight.

Momentum Over Magnitude – Rolling Out GenAI the Lean Way

When it comes to Generative AI, many organisations feel overwhelmed – as if they need a massive, enterprise-wide initiative to get started. But you don’t. Whether you’re a small-to-medium enterprise (SME) or a single department within a larger organisation, you can begin your GenAI journey with a few focused steps. No massive rollout project is required – just smart, strategic, thoughtful action in a quick-start approach.

Stay Focused: You’re Solving a Business Problem, Not Chasing the Next AI Trend

Being immersed in the world of AI can feel like being caught in a whirlwind: every week brings a new model, a fresh feature, or a must-try tool, and the pace is not slowing down – it’s easy to get swept up in it all. That’s why I always value conversations with businesses that bring things back to what really matters: solving real problems. GenAI isn’t just the latest shiny object; it’s a powerful tool to unlock capacity and drive real value when focused on a business problem.