Research · #llm · 🔬 Research Analysis: 2025-12-25 09:28

Data-Free Pruning of Self-Attention Layers in LLMs

Published: 2025-12-25 05:00
1 min read
ArXiv ML

Analysis

This paper introduces Gate-Norm, a novel method for pruning self-attention layers in large language models (LLMs) without requiring any training data. The core idea revolves around the …
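
As a rough illustration of what data-free pruning of attention sublayers looks like in practice, the sketch below ranks each transformer block's attention sublayer by a weight-norm score and disables the lowest-scoring ones so that only the residual path remains. The toy model, the Frobenius-norm criterion, and the `prune_attention_sublayers` helper are illustrative assumptions; the excerpt above is truncated, so this is not the paper's actual Gate-Norm definition.

```python
# Minimal sketch of norm-based, data-free attention-sublayer pruning.
# Assumptions (not from the paper): each attention sublayer is scored by the
# Frobenius norm of its output projection, and "pruning" means skipping the
# sublayer entirely, leaving only the residual connection.
import torch
import torch.nn as nn


class Block(nn.Module):
    """Pre-norm transformer block: x + attn(ln(x)), then x + mlp(ln(x))."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.attn_pruned = False  # when True, skip the attention sublayer

    def forward(self, x):
        if not self.attn_pruned:
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))


def attention_score(block):
    # Data-free score: norm of the attention output projection weights
    # (a stand-in for whatever quantity Gate-Norm actually measures).
    return block.attn.out_proj.weight.norm().item()


def prune_attention_sublayers(blocks, n_prune):
    # Rank blocks by score and disable attention in the lowest-scoring ones.
    ranked = sorted(range(len(blocks)), key=lambda i: attention_score(blocks[i]))
    for i in ranked[:n_prune]:
        blocks[i].attn_pruned = True
    return ranked[:n_prune]


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.ModuleList(Block() for _ in range(12))
    pruned = prune_attention_sublayers(model, n_prune=4)
    print("pruned attention sublayers:", sorted(pruned))
    x = torch.randn(2, 16, 64)
    for blk in model:
        x = blk(x)
    print("output shape:", tuple(x.shape))
```

Skipping whole sublayers (rather than individual heads) is what makes throughput gains like the one quoted below possible, since the entire attention computation for those layers is removed at inference time.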

Key Points

    Quote

    Pruning 8–16 attention sublayers yields up to 1.30× higher inference throughput while keeping average zero-shot accuracy within 2% of the unpruned baseline.