Transformer Scaling Law: Unified Theory of Learning and Generalization

Published: Dec 26, 2025 17:20
1 min read
ArXiv

Analysis

This paper develops a theoretical framework for the scaling laws of transformer-based language models. It moves beyond empirical observations and toy models by formalizing learning dynamics as an ODE and analyzing SGD training in a more realistic setting. The key contribution is a characterization of how the generalization error converges, including a phase transition, together with the derivation of separate scaling laws for model size, training time, and dataset size. This work is significant because it gives a principled account of how computational resources translate into model performance, which is crucial for efficient LLM development.
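The paper's exact ODE formulation is not reproduced in this summary. As a minimal sketch of the general idea, assuming only the standard gradient-flow idealization of SGD, the snippet below integrates dθ/dt = −∇L(θ) with forward-Euler steps on a toy quadratic loss; the loss, Hessian spectrum, and step size are illustrative choices, not the paper's construction.

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T H theta with a fixed Hessian H.
# This is NOT the paper's model; it only illustrates the gradient-flow ODE
#   d(theta)/dt = -grad L(theta),
# the standard continuous-time idealization of (S)GD that ODE-based analyses build on.
rng = np.random.default_rng(0)
H = np.diag([1.0, 0.1, 0.01])          # assumed curvature spectrum (illustrative)
theta = rng.normal(size=3)             # random initialization

def loss(theta):
    return 0.5 * theta @ H @ theta

def grad(theta):
    return H @ theta

dt = 0.05                               # Euler step size (plays the role of a learning rate)
for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step * dt:6.1f}   loss = {loss(theta):.3e}")
    theta = theta - dt * grad(theta)    # forward-Euler discretization of the ODE
# The loss decays exponentially along each Hessian eigendirection,
# mirroring the exponential "optimization phase" described below.
```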

Reference

The paper establishes a theoretical upper bound on the excess risk characterized by a distinct phase transition. In the initial optimization phase, the excess risk decays exponentially in the computational cost C. However, once a specific resource-allocation threshold is crossed, the system enters a statistical phase, where the generalization error follows a power-law decay of Θ(C^(-1/6)).
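As a concrete illustration of the two regimes, the sketch below evaluates a hypothetical bound of the form a·exp(−b·C) + k·C^(−1/6); the constants a, b, k and the compute grid are assumptions made for this sketch, not values from the paper, but the shape shows the exponential optimization phase giving way to the Θ(C^(-1/6)) statistical phase.

```python
import numpy as np

def excess_risk_bound(C, a=1.0, b=0.01, k=0.2):
    """Hypothetical two-phase bound: fast exponential term plus a Theta(C^(-1/6)) tail.

    a, b, k are illustrative constants chosen for this sketch,
    not quantities derived in the paper.
    """
    return a * np.exp(-b * C) + k * C ** (-1.0 / 6.0)

# Sweep compute budgets C (arbitrary units) across several orders of magnitude.
for C in np.logspace(0, 6, 7):
    exp_term = np.exp(-0.01 * C)                 # dominates in the optimization phase
    power_term = 0.2 * C ** (-1.0 / 6.0)         # dominates in the statistical phase
    print(f"C = {C:9.0f}   bound = {excess_risk_bound(C):.4f}   "
          f"exp term = {exp_term:.2e}   C^(-1/6) term = {power_term:.4f}")
```

For small C the exponential term dominates the printed bound; past the crossover, the C^(-1/6) power law takes over, which is the qualitative behavior the phase transition describes.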