2025 Year in Review: Old NLP Methods Quietly Solving Problems LLMs Can't
Published: Dec 24, 2025 12:57 · 1 min read · r/MachineLearning
Analysis
This article highlights the resurgence of pre-transformer NLP techniques in addressing the limitations of large language models (LLMs). It argues that methods like Hidden Markov Models (HMMs), the Viterbi algorithm, and n-gram smoothing, once considered obsolete, are being revisited to solve problems where LLMs fall short, particularly constrained decoding, state compression, and handling linguistic variation. The author draws parallels between modern techniques like Mamba/S4 and continuous HMMs, and between model merging and n-gram smoothing. The article emphasizes that understanding these older methods matters for tackling the "jagged intelligence" problem of LLMs, where they excel in some areas but fail unpredictably in others.
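The article names the Viterbi algorithm and constrained decoding without walking through either, so a brief refresher may help. Below is a minimal sketch of Viterbi decoding for a discrete HMM in log space; the function and toy parameters are illustrative and not taken from the article.

```python
import numpy as np

def viterbi(obs, start_logp, trans_logp, emit_logp):
    """Most likely hidden-state sequence for a discrete HMM (log space).

    obs        : length-T sequence of observation indices
    start_logp : (S,) initial state log-probabilities
    trans_logp : (S, S) matrix, trans_logp[i, j] = log P(state j | state i)
    emit_logp  : (S, V) matrix, emit_logp[s, o] = log P(obs o | state s)
    """
    T, S = len(obs), len(start_logp)
    score = np.full((T, S), -np.inf)    # best log-prob of any path ending in state s at step t
    back = np.zeros((T, S), dtype=int)  # best predecessor state, for traceback

    score[0] = start_logp + emit_logp[:, obs[0]]
    for t in range(1, T):
        # cand[i, j]: extend the best path ending in state i with the transition i -> j
        cand = score[t - 1][:, None] + trans_logp + emit_logp[:, obs[t]][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)

    # Trace back from the best final state to recover the full path.
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 2 hidden states, 3 observation symbols (made-up numbers).
start = np.log([0.6, 0.4])
trans = np.log([[0.7, 0.3], [0.4, 0.6]])
emit = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```

Constrained decoding over a finite-state lattice is essentially the same dynamic program: disallowed transitions are assigned a log-probability of negative infinity and are never selected.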
Key Takeaways
- Pre-transformer NLP techniques are making a comeback.
- LLMs have limitations that older methods can address.
- Understanding classic NLP is crucial for improving LLM performance.
Reference
“The problems Transformers can't solve efficiently are being solved by revisiting pre-Transformer principles.”