I’ve spent the past few months experimenting here and there with tiny and small language models: running log analysis on edge devices, processing audio in remote locations where connectivity is spotty, power is scarce, and the environment is harsh. They’re fast, efficient, and honestly? Pretty fun to work with and research. But lately I’ve caught myself asking: am I actually solving a problem here, or just doing something because it’s technically interesting? If you’re working with AI in any capacity, you’ve probably felt this tension too (and to be fair, “technically interesting” can be a perfectly good reason for personal research).
