Research · #llm · 📝 Blog · Analyzed: Jan 3, 2026 06:22

Some Math behind Neural Tangent Kernel

Published: Sep 8, 2022 17:00
1 min read
Lil'Log

Analysis

The article introduces the Neural Tangent Kernel (NTK) as a tool for understanding how over-parameterized neural networks behave during training. It highlights these networks' ability to generalize well despite fitting the training data perfectly, even when they have more parameters than data points. The article promises a deep dive into the motivation, definition, and convergence properties of the NTK, particularly in the context of infinite-width networks.
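
For orientation, the definition the analysis refers to is the standard one from the NTK literature (stated here from general background, not quoted from the excerpt): the kernel is the Gram matrix of the network's parameter gradients, and when it stays constant during training, as in the infinite-width limit, gradient-flow dynamics reduce to a linear ODE on the outputs, i.e. kernel regression.

```latex
% Standard NTK definition for a network f(x; \theta) (background notation):
K(x, x') = \nabla_\theta f(x; \theta)^\top \, \nabla_\theta f(x'; \theta)
% Under gradient flow on the squared loss, the training-set outputs
% u(t) = f(X; \theta_t) evolve as
\frac{\mathrm{d}u}{\mathrm{d}t} = -K \, (u - y)
% so if K is constant (infinite width), training is a linear ODE,
% equivalent to kernel regression with kernel K.
```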
Reference

Neural networks are well known to be over-parameterized and can often easily fit data to near-zero training loss while maintaining decent generalization performance on the test dataset.
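
A minimal sketch of the quoted observation, not the article's own code. Assumptions invented for illustration: a tanh MLP in JAX, toy Gaussian data with arbitrary labels, and hand-picked hyperparameters. With far more parameters than data points, plain gradient descent typically drives the training loss to near zero even on random labels.

```python
# Sketch: an over-parameterized MLP memorizing arbitrary labels.
# All data, architecture, and hyperparameters are illustrative assumptions.
import jax
import jax.numpy as jnp

def f(params, x):
    # Two-layer scalar-output MLP with 1/sqrt(width) output scaling.
    W1, b1, W2 = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h / jnp.sqrt(h.shape[0])

def loss(params, xs, ys):
    preds = jax.vmap(lambda x: f(params, x))(xs)
    return jnp.mean((preds - ys) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
n, d, width = 8, 8, 1024            # ~10k parameters vs. 8 data points
xs = jax.random.normal(k1, (n, d))
ys = jax.random.normal(k2, (n,))    # arbitrary labels: a pure memorization test
params = (jax.random.normal(k3, (width, d)) / jnp.sqrt(d),  # W1
          jnp.zeros(width),                                 # b1
          jax.random.normal(k4, (width,)))                  # W2

@jax.jit
def step(p):
    # One full-batch gradient-descent step.
    g = jax.grad(loss)(p, xs, ys)
    return jax.tree_util.tree_map(lambda q, gq: q - 0.3 * gq, p, g)

for _ in range(5000):
    params = step(params)
print(float(loss(params, xs, ys)))  # expected near zero: the net interpolates
```

Whether such an interpolating solution also generalizes on real data is exactly the question the NTK analysis is meant to address.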