Research · #llm · Blog | Analyzed: Dec 29, 2025 07:27

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673

Published: Feb 26, 2024 19:17
1 min read
Practical AI

Analysis

This article summarizes a podcast episode from Practical AI featuring Ben Prystawski, a PhD student working at the intersection of cognitive science and machine learning. The discussion centers on Prystawski's NeurIPS 2023 paper, which investigates why chain-of-thought reasoning is effective in large language models (LLMs). The paper argues that local structure in the training data, where variables that influence one another tend to appear together, is the crucial factor that makes step-by-step reasoning pay off. The episode also takes up more fundamental questions: what reasoning means for an LLM, how it can be defined, and how techniques like chain-of-thought prompting enhance it.
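
To make the core claim concrete, below is a minimal toy sketch (not code from the paper or the episode; the variable names and probability values are hypothetical). It illustrates how a model that has only observed locally co-occurring pairs (A, B) and (B, C) can still estimate P(C | A) by reasoning step by step through the intermediate variable B:

```python
# Toy illustration of why step-by-step reasoning helps when training data
# is "local": the direct pair (A, C) is never observed, but P(C | A) can
# be estimated by chaining through the intermediate variable B,
# i.e. P(C | A) = sum_b P(C | B=b) * P(B=b | A).

# Hypothetical conditional tables, as if learned from local co-occurrences.
p_b_given_a = {0: {0: 0.9, 1: 0.1},   # P(B=b | A=0)
               1: {0: 0.2, 1: 0.8}}   # P(B=b | A=1)
p_c_given_b = {0: {0: 0.7, 1: 0.3},   # P(C=c | B=0)
               1: {0: 0.1, 1: 0.9}}   # P(C=c | B=1)

def p_c_given_a(c: int, a: int) -> float:
    """Estimate P(C=c | A=a) by marginalizing over the intermediate B."""
    return sum(p_c_given_b[b][c] * p_b_given_a[a][b] for b in (0, 1))

if __name__ == "__main__":
    # Each step uses only locally observed statistics, yet the chained
    # estimate of the never-observed (A, C) relationship is well defined.
    print(p_c_given_a(c=1, a=1))  # 0.2 * 0.3 + 0.8 * 0.9 = 0.78
```

Without the intermediate step there would be no direct (A, C) statistics to draw on, which mirrors the paper's argument that step-by-step reasoning is most valuable precisely when experience is locally structured.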
Reference

Prystawski, B., Li, M. Y., & Goodman, N. D. (2023). Why think step by step? Reasoning emerges from the locality of experience. NeurIPS 2023.