AFA-LoRA: Enhancing LoRA with Non-Linear Adaptations

Published: Dec 27, 2025 (ArXiv)

Analysis

This paper addresses a key limitation of LoRA, a popular parameter-efficient fine-tuning method: its adaptation is purely linear (a low-rank update added to frozen weights), which limits expressivity relative to full-parameter fine-tuning. AFA-LoRA incorporates non-linear expressivity through an annealed activation function, potentially improving performance and narrowing that gap. The annealing is what lets the method keep LoRA's mergeability: presumably the activation is driven toward the identity over training, so the converged adapter is linear again and can be folded back into the base weights.
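
To make the idea concrete, here is a minimal PyTorch sketch of an annealed-activation LoRA layer. It assumes the annealed activation blends a non-linearity with the identity via a coefficient that decays to zero over training; the class name, the choice of tanh, the annealing schedule, and the initialization scales are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AFALoRALinear(nn.Module):
    """Hypothetical sketch of an annealed-activation LoRA adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        # Standard LoRA factors: B starts at zero so the adapter is a no-op.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.alpha = 1.0  # blend coefficient, annealed 1 -> 0 during training

    def activation(self, h: torch.Tensor) -> torch.Tensor:
        # Blend a non-linearity (tanh, an arbitrary choice here) with the
        # identity. At alpha == 0 this is exactly the identity, so the
        # adapter reduces to plain, linear LoRA.
        return self.alpha * torch.tanh(h) + (1.0 - self.alpha) * h

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.activation(x @ self.A.t()) @ self.B.t()

    def merge(self) -> None:
        # Valid only once alpha has reached 0: the adapter is then linear,
        # so B @ A can be folded into the frozen base weight as usual.
        assert self.alpha == 0.0, "anneal the activation away before merging"
        with torch.no_grad():
            self.base.weight += self.B @ self.A
```

In use, the training loop would decay the coefficient per step, e.g. `adapter.alpha = max(0.0, 1.0 - step / anneal_steps)` (a hypothetical linear schedule), then call `merge()` after training to recover a standard mergeable adapter.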
Reference / Citation
"AFA-LoRA reduces the performance gap between LoRA and full-parameter training."
ArXiv, Dec 27, 2025