The Real Test of GenAI: Are We Solving Problems or Just Playing with Tech?

I’ve spent the past few months experimenting with tiny and small language models: running log analysis on edge devices, processing audio in remote locations where connectivity is spotty, power is limited, and the environment is harsh. They’re fast, efficient, and honestly? Pretty fun to work with. But lately, I’ve caught myself asking: am I actually solving a problem here, or just doing something because it’s technically interesting? If you’re working with AI in any capacity, you’ve probably felt this tension too (and to be honest, “technically interesting” can sometimes be reason enough for personal research).

What is it?

Smaller language models represent a fundamental shift in how we can deploy AI. Unlike their cloud-dependent cousins, these are lightweight, efficient models that run directly on edge devices – even in environments with minimal connectivity or computing power.

What makes them particularly interesting is the speed of experimentation. You can prototype a working solution in hours or days instead of spending weeks on infrastructure and setup. This accessibility is transformative: it lets teams test ideas quickly and cheaply.
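To make that concrete, here is roughly what such a first prototype can look like. This is a minimal sketch, assuming llama-cpp-python is installed and a small quantized GGUF model is already on disk; the model path and the log lines are placeholders, not recommendations.

```python
# Minimal local log-triage prototype: a small quantized model,
# running entirely on-device, with no cloud dependency.
from llama_cpp import Llama

# Assumption: any small GGUF model you have locally; this path is a placeholder.
llm = Llama(model_path="models/small-model.gguf", n_ctx=2048, verbose=False)

log_lines = [
    "ERROR  disk /dev/sda1 at 97% capacity",
    "WARN   sensor-12 reported no data for 300s",
    "INFO   backup job completed in 42s",
]

prompt = (
    "Classify each log line as CRITICAL, WATCH, or IGNORE, "
    "with a one-line reason.\n\n" + "\n".join(log_lines) + "\n\nAnswer:"
)

# temperature=0 keeps the output deterministic enough to eyeball quickly.
result = llm(prompt, max_tokens=200, temperature=0)
print(result["choices"][0]["text"])
```

That is essentially the entire prototype, which is both the point and, as the next paragraph argues, the risk.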

But here’s the catch: when the barrier to building drops this low, it becomes dangerously easy to build without purpose. The same accessibility that enables rapid innovation can lead us into “tech for tech’s sake” territory. We start chasing what’s technically interesting rather than what’s actually needed.

What does it mean from a business perspective?

For leaders evaluating AI investments – whether you’re funding them, leading them, or trying to make sense of them – this accessibility creates both opportunity and risk:

  • Innovation without impact is an expensive distraction. That demo that impressed the executive team only creates value if it moves a metric that matters – revenue, cost, time, quality, or customer satisfaction.
  • Opportunity cost compounds quickly. Every hour your team spends on an unfocused AI experiment is an hour not spent on the problems your customers or your operations are actually struggling with.
  • Problem clarity beats technical sophistication. The most successful AI initiatives don’t start with “What can this technology do?” They start with “We waste 10 hours a week on this process” or “We lose customers because we can’t respond fast enough.”
  • Experimentation builds organizational capability – but only if captured. That said, sometimes we can’t even see what’s possible without experimenting. Even failed projects create value when teams document learnings, develop AI literacy, and identify what questions to ask next. The key is turning experiments into institutional knowledge, not isolated adventures.
  • Experimentation builds organizational adaptability. Companies that experiment regularly don’t just discover new solutions more often – they build the organizational muscle to adopt change more easily. Teams become comfortable with uncertainty, develop rapid learning cycles, and create feedback loops that make the next change initiative smoother than the last.

What do I do with it?

Here’s how to channel AI experimentation toward real business impact while still preserving the creative exploration that drives innovation:

  • Write the problem statement first. Before evaluating any AI tool, articulate the specific pain point in one clear sentence. If you can’t describe what’s broken, costly, or slow, you’re not ready to build, or evaluate, a solution.
  • Create bounded exploration spaces. Give teams permission to experiment, but with clear timeboxes (2 weeks, not 2 months), defined success criteria, and alignment to strategic priorities. Curiosity is valuable – aimless wandering is not.
  • Define success metrics upfront. Before starting any AI project, agree on how you’ll measure impact: hours saved per week, error rate reduction, customer response time, cost per transaction. If you can’t measure it, you can’t manage it.
  • Build your AI learning library. Create a shared repository where teams document every experiment – what was tried, what worked, what failed, and most importantly, why. Your organization’s next AI initiative should be informed by all previous attempts, not start from scratch (a minimal record sketch follows after this list).
  • Start with high-friction, high-frequency problems. Look for tasks your team does repeatedly that consume disproportionate time or cause consistent frustration. These are your best candidates for AI solutions because the impact is measurable and the ROI is clear.
  • Treat experimentation as change readiness training. Every AI experiment, successful or not, is practice for organizational change. Teams learn to work across silos, get comfortable with ambiguity, and develop rapid iteration skills. These capabilities transfer to any future transformation initiative, making your organization more adaptable over time.
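To make the learning library and the upfront metrics tangible, here is one possible record format. This is a minimal sketch, assuming a Python dataclass serialized to JSON in a shared repository; all field names and sample values are illustrative, not a standard.

```python
# A minimal experiment record for an AI learning library.
# Field names are illustrative; adapt them to your organization.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ExperimentRecord:
    title: str
    problem_statement: str          # one clear sentence, written before building
    success_metric: str             # e.g. "hours saved per week"
    timebox_weeks: int              # bounded exploration, not open-ended
    outcome: str                    # "adopted", "parked", or "abandoned"
    learnings: list[str] = field(default_factory=list)
    logged_on: date = field(default_factory=date.today)

record = ExperimentRecord(
    title="On-device log triage with a small language model",
    problem_statement="Ops spends ~10 hours/week manually scanning edge logs.",
    success_metric="hours saved per week",
    timebox_weeks=2,
    outcome="parked",
    learnings=["Model quality was fine; upstream log formats were too inconsistent."],
)

# Serialize to JSON so records are searchable in a shared repository.
print(json.dumps(asdict(record), indent=2, default=str))
```

The useful part is not the code but the constraint: every experiment has to state its problem, metric, and timebox before it starts, and its outcome and learnings when it ends.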

Have you ever built something that felt like a solution searching for a problem? What brought you back to centre?