AFA-LoRA: Enhancing LoRA with Non-Linear Adaptations
Published: Dec 27, 2025 04:12 • 1 min read • ArXiv
Analysis
This paper addresses a key limitation of LoRA, a popular parameter-efficient fine-tuning method: its linear adaptation process. By introducing AFA-LoRA, the authors propose a method to incorporate non-linear expressivity, potentially improving performance and closing the gap with full-parameter fine-tuning. The use of an annealed activation function is a novel approach to achieve this while maintaining LoRA's mergeability.
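The summary above does not spell out the exact formulation, but the core idea can be sketched: place an activation between the LoRA down- and up-projections and anneal it toward the identity over training, so the final adapter is linear again and can be folded into the frozen weight. The PyTorch class below is a minimal sketch under those assumptions; the name `AnnealedLoRALinear`, the choice of `tanh`, and the interpolation `(1 - alpha) * tanh(h) + alpha * h` are illustrative placeholders, not the authors' actual formulation.

```python
import torch
import torch.nn as nn

class AnnealedLoRALinear(nn.Module):
    """Sketch of a LoRA layer with an annealed activation (assumed form).

    Early in training the low-rank update is non-linear (more expressive);
    as `alpha` anneals toward 1 the activation fades to the identity, so the
    final update B @ A is linear and can be merged into the frozen weight.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                     # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = scaling
        self.alpha = 0.0  # 0 = fully non-linear, 1 = fully linear (mergeable)

    def set_anneal(self, alpha: float) -> None:
        """Update the annealing coefficient (e.g. per step or per epoch)."""
        self.alpha = float(min(max(alpha, 0.0), 1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.A.T                                # low-rank down-projection
        h = (1 - self.alpha) * torch.tanh(h) + self.alpha * h  # annealed activation
        return self.base(x) + self.scaling * (h @ self.B.T)

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        """Once alpha == 1 the adapter is linear and folds into the base weight."""
        assert self.alpha == 1.0, "merge only valid after annealing to linear"
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.copy_(self.base.weight + self.scaling * self.B @ self.A)
        if self.base.bias is not None:
            merged.bias.copy_(self.base.bias)
        return merged
```

Read this way, the non-linearity supplies extra expressivity early in training, while annealing to `alpha = 1` restores the property that the update is exactly `B @ A` and therefore mergeable, which is what the takeaways below emphasize.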
Key Takeaways
- AFA-LoRA enhances LoRA by introducing non-linear expressivity.
- The method uses an annealed activation function for adaptation (see the training-loop sketch after this list).
- AFA-LoRA aims to close the performance gap between LoRA and full-parameter training.
- The approach maintains LoRA's mergeability.
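Continuing the sketch above, a training loop might anneal `alpha` from 0 to 1 and then merge the adapter. The linear per-step schedule and the toy objective below are assumptions made for illustration; the paper's actual annealing schedule is not described in this summary.

```python
# Hypothetical usage of AnnealedLoRALinear from the sketch above.
layer = AnnealedLoRALinear(nn.Linear(768, 768), rank=8)
optimizer = torch.optim.AdamW([layer.A, layer.B], lr=1e-4)

total_steps = 1000
for step in range(total_steps):
    layer.set_anneal(step / (total_steps - 1))   # linear anneal schedule (assumption)
    x = torch.randn(16, 768)                     # stand-in batch
    loss = layer(x).pow(2).mean()                # stand-in objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

merged = layer.merge()   # alpha == 1, so B @ A folds into the frozen base weight
```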
Reference
“AFA-LoRA reduces the performance gap between LoRA and full-parameter training.”