Context Reduction in Language Model Probabilities
Published: Dec 29, 2025 18:12 · 1 min read · ArXiv
Analysis
This paper investigates how much preceding context a language model needs in order to capture probabilistic reduction, the finding from cognitive science that more predictable words tend to be produced in reduced form. It challenges the assumption that probabilities must be computed over whole utterances, arguing that short n-gram contexts are sufficient to observe the effect. This has implications for how language model probabilities relate to human production planning and could make such analyses more efficient to run.
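The contrast between whole-utterance and short n-gram contexts can be made concrete with a small sketch. The snippet below is an illustration, not the paper's code: it uses GPT-2 via Hugging Face transformers to compare the surprisal of a target word given its full preceding utterance against its surprisal given only the last two words. The model choice, the example sentence, and the `surprisal_bits` helper are all assumptions made for demonstration.

```python
# Minimal sketch (not the paper's method): compare the surprisal of a target
# word under full-utterance context vs. a short n-gram-style context.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal_bits(context: str, target_word: str) -> float:
    """Surprisal (in bits) of the first subword of target_word given context."""
    ctx_ids = tokenizer.encode(context, return_tensors="pt")
    # Encode the target with a leading space so GPT-2's BPE treats it as a new word.
    tgt_id = tokenizer.encode(" " + target_word)[0]
    with torch.no_grad():
        logits = model(ctx_ids).logits[0, -1]  # distribution over the next token
    log_probs = torch.log_softmax(logits, dim=-1)
    return -log_probs[tgt_id].item() / math.log(2)

# Hypothetical example utterance and target word.
utterance = "I think we should probably leave before the"
target = "storm"

# Whole-utterance context vs. a trigram-style context (last two words only).
print(f"full-context surprisal:    {surprisal_bits(utterance, target):.2f} bits")
print(f"trigram-context surprisal: "
      f"{surprisal_bits(' '.join(utterance.split()[-2:]), target):.2f} bits")
```

If the two estimates are close for most words, a short n-gram window already carries most of the predictive information relevant to reduction; large gaps would instead favor whole-utterance context.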
Key Takeaways
- Focuses on the minimal context needed to observe probabilistic reduction.
- Suggests n-gram contexts are sufficient, challenging the need for whole utterances.
- Relevant to understanding the relationship between language models and cognition.
Reference
“n-gram representations suffice as cognitive units of planning.”