LLMs Perform Better with "Familiar Words" Than "Smart Words" ~ Adam's Law ~

Research · #llm · 📝 Blog | Analyzed: Apr 12, 2026 23:15
Published: Apr 12, 2026 23:13
1 min read
Qiita AI

Analysis

A new study finds that Large Language Models (LLMs) perform measurably better when prompted with commonly used phrases rather than rarer, more sophisticated vocabulary. The result underscores the continuing importance of prompt engineering: when two phrasings mean the same thing, choosing the one that appears more frequently in the model's training data improved accuracy on both reasoning and translation benchmarks. In short, speaking a model's "natural language" helps unlock its performance.
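The study's finding suggests a simple preprocessing step: before sending a prompt, rewrite low-frequency words as high-frequency synonyms with the same meaning. The sketch below is a minimal, hypothetical illustration of that idea; the synonym table is invented for demonstration and is not from the study, which does not describe a specific implementation.

```python
# Hypothetical sketch: rewrite rare ("smart") words in a prompt as common
# ("familiar") synonyms before sending it to an LLM, on the assumption that
# high-frequency expressions from training data perform better.
# The synonym table below is illustrative, not from the cited study.

COMMON_SYNONYMS = {
    "utilize": "use",
    "ascertain": "find out",
    "commence": "start",
    "endeavor": "try",
    "elucidate": "explain",
}

def simplify_prompt(prompt: str) -> str:
    """Replace rare words with high-frequency synonyms, preserving meaning."""
    out = []
    for word in prompt.split():
        # Strip trailing punctuation so "utilize." still matches the table.
        core = word.rstrip(".,;:!?")
        suffix = word[len(core):]
        replacement = COMMON_SYNONYMS.get(core.lower())
        if replacement:
            # Preserve sentence-initial capitalization.
            if core and core[0].isupper():
                replacement = replacement.capitalize()
            out.append(replacement + suffix)
        else:
            out.append(word)
    return " ".join(out)

print(simplify_prompt("Utilize the data to ascertain the answer."))
# → Use the data to find out the answer.
```

In practice one would use a real word-frequency list (or the model's own tokenizer statistics) rather than a hand-written table, but the principle is the same: same meaning, more familiar surface form.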
Reference / Citation
View Original
"If the meaning is the same, choosing the 'high-frequency expressions' found in the LLM's training data improves performance. Moreover, accuracy improved by up to 8 points in mathematical reasoning tasks, and scores increased in 99 out of 100 language pairs in machine translation."
Qiita AI · Apr 12, 2026 23:13
* Cited for critical analysis under Article 32.