Research Paper · Parameter-Efficient Fine-tuning, Lottery Ticket Hypothesis, Low-Rank Adaptation
🔬 Research · Analyzed: Jan 3, 2026 19:58
Winning Tickets in Low-Rank Adapters
Published: Dec 27, 2025 06:39
•1 min read
•ArXiv
Analysis
This paper investigates the Lottery Ticket Hypothesis (LTH) in the context of parameter-efficient fine-tuning (PEFT), specifically Low-Rank Adaptation (LoRA). It finds that the LTH holds within LoRA adapters: sparse subnetworks inside the low-rank update matrices can match the performance of dense adapters. This has implications for understanding transfer learning and for developing more efficient adaptation strategies.
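To make the idea concrete, here is a minimal sketch (not the paper's code) of a LoRA linear layer whose low-rank factors are gated by a fixed binary mask, so that only a sparse subnetwork of adapter weights is effectively trained. The class name `MaskedLoRALinear` and the `rank` and `sparsity` values are illustrative assumptions, not details from the paper.

```python
# Sketch: a LoRA layer with a fixed binary "ticket" mask on its low-rank factors.
# Masked entries contribute nothing to the forward pass and receive zero gradient,
# so only the surviving sparse subnetwork of adapter weights is trained.
import torch
import torch.nn as nn


class MaskedLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, sparsity=0.8):
        super().__init__()
        # Frozen "pretrained" weight (random here, purely for the sketch).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)

        # Low-rank adapter factors: A is (rank x in), B is (out x rank).
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

        # Fixed random binary masks defining the sparse subnetwork ("ticket").
        self.register_buffer("mask_A", (torch.rand_like(self.A) > sparsity).float())
        self.register_buffer("mask_B", (torch.rand_like(self.B) > sparsity).float())

    def forward(self, x):
        # Elementwise masking zeroes out both the contribution and the gradient
        # of the pruned adapter entries.
        delta = (self.B * self.mask_B) @ (self.A * self.mask_A)
        return self.base(x) + x @ delta.T


layer = MaskedLoRALinear(64, 64, rank=8, sparsity=0.8)
out = layer(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

In this sketch the mask is random; the paper's point is that, at a suitable per-layer sparsity, such masked adapters can recover the accuracy of the dense LoRA.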
Key Takeaways
- •LTH holds within LoRAs, revealing sparse subnetworks that can match the performance of dense adapters.
- •The effectiveness of sparse subnetworks depends more on the per-layer sparsity level than on which specific weights are retained.
- •The proposed Partial-LoRA reduces trainable parameters by up to 87% while maintaining or improving accuracy (see the sketch after this list).
- •The findings deepen our understanding of transfer learning and the interplay between pretraining and fine-tuning.
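The parameter-reduction claim can be illustrated with a back-of-the-envelope calculation. The sketch below is not the paper's Partial-LoRA method: the layer shapes, rank, and per-layer keep ratios are invented for illustration, and the up-to-87% figure is the paper's reported result, not something this calculation reproduces.

```python
# Sketch: how keeping only a fraction of adapter parameters per layer shrinks
# the trainable-parameter count relative to dense LoRA. All numbers below are
# hypothetical and chosen only to show the bookkeeping.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters of a dense LoRA adapter: A (rank x d_in) + B (d_out x rank)."""
    return rank * d_in + d_out * rank

# Hypothetical transformer projections adapted with rank-8 LoRA.
layers = [(4096, 4096, 8)] * 32
# Hypothetical per-layer keep ratios: keep fewer adapter weights in early layers.
keep_ratios = [0.1 if i < 24 else 0.3 for i in range(len(layers))]

dense = sum(lora_params(*shape) for shape in layers)
partial = sum(int(lora_params(*shape) * keep)
              for shape, keep in zip(layers, keep_ratios))

print(f"dense adapter params:   {dense:,}")
print(f"partial adapter params: {partial:,}")
print(f"reduction: {100 * (1 - partial / dense):.1f}%")
```

With these made-up keep ratios the reduction comes out around 85%, in the same ballpark as the paper's reported figure; the actual savings depend on how sparsity is allocated across layers.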
Reference
“The effectiveness of sparse subnetworks depends more on how much sparsity is applied in each layer than on the exact weights included in the subnetwork.”