Analysis

This paper addresses why different fine-tuning methods (SFT vs. RL) lead to divergent generalization behaviors in LLMs. It moves beyond aggregate accuracy metrics by introducing a benchmark that decomposes reasoning into core cognitive skills, enabling a more granular view of how those skills emerge, transfer, and degrade during training. The complementary analysis of low-level statistical patterns grounds these behavioral findings, providing insight into the mechanisms behind LLM generalization and guidance for designing more effective training strategies.
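To make the idea of skill-decomposed evaluation concrete, here is a minimal sketch of how per-skill accuracy could be tracked instead of a single aggregate score. The skill taxonomy, the item format, and the model.answer interface below are all hypothetical placeholders, not the paper's actual benchmark API:

    from collections import defaultdict

    # Hypothetical skill taxonomy; the paper's actual categories may differ.
    SKILLS = ["deduction", "induction", "arithmetic", "spatial"]

    def skill_profile(model, benchmark):
        """Compute per-skill accuracy rather than one aggregate score."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for item in benchmark:  # each benchmark item is tagged with one skill
            total[item["skill"]] += 1
            if model.answer(item["prompt"]) == item["label"]:
                correct[item["skill"]] += 1
        return {s: correct[s] / total[s] for s in SKILLS if total[s]}

Evaluating this profile at successive training checkpoints would expose the drift the paper studies: one skill can collapse while aggregate accuracy holds steady, a pattern invisible to a single-number metric.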
Reference

RL-tuned models maintain more stable behavioral profiles and resist collapse in reasoning skills, whereas SFT models exhibit sharper drift and overfit to surface patterns.