Showing posts from 2025

Statistical ML in the Age of LLMs - A Real-World Playbook

Introduction

Does statistical machine learning still make sense in the age of LLMs? This isn’t an academic question; it shows up in product meetings, technical brainstorming, and post-mortems. Someone proposes, “Can we explore GPT/Claude/LLaMA, or an open-source model, or the latest paper?” and the room tilts toward the shiny option. The clout around LLMs is real: they can feel like the easiest answer to every problem, because the rapid pace of technical content and the flood of new tools create FOMO. That pressure seeps into our technical decisions.

I want to ground this in personal experience. In my product meetings, the proposal to use LLMs comes up all the time. Even when we have proven, homegrown models, the allure of LLMs often overshadows rational thinking. This is where you have to really dig in, understand the nitty-gritty, and validate the problem and solution from both a macro and a micro perspective.

That paragraph isn’t a lament; it’s the setup for this post. This isn’t n...