
AFA-LoRA: Enhancing LoRA with Non-Linear Adaptations

Published: Dec 27, 2025
1 min read
ArXiv

Analysis

This paper addresses a key limitation of LoRA, a popular parameter-efficient fine-tuning method: its adaptation is purely linear, since the learned low-rank update is a linear map added to the frozen weights. AFA-LoRA injects non-linear expressivity into the adapter, potentially improving performance and narrowing the gap with full-parameter fine-tuning. The novelty is the use of an annealed activation function, which introduces non-linearity during training while preserving LoRA's mergeability.
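To make the idea concrete, below is a minimal sketch of what a non-linear yet mergeable LoRA adapter could look like. Everything here is an assumption for illustration, not the paper's implementation: the class name `AnnealedNonlinearLoRA`, the choice of `tanh`, and the linear blend between the activation and the identity are all hypothetical. The key property it demonstrates is that once the annealing coefficient reaches zero, the adapter is linear again and can be folded into the base weight exactly as in vanilla LoRA.

```python
import torch
import torch.nn as nn

class AnnealedNonlinearLoRA(nn.Module):
    """Hypothetical sketch: a LoRA adapter with an annealed non-linearity.

    The down/up projection mirrors standard LoRA. The activation between
    them is blended with the identity by a coefficient `alpha` that the
    training loop anneals from 1 to 0; at alpha == 0 the adapter is purely
    linear, so B @ A can be merged into the frozen base weight as usual.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)  # adapter starts as a no-op
        self.scaling = scaling
        self.alpha = 1.0  # annealed toward 0 by the training loop

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.A(x)
        # Blend a non-linearity with the identity; fully linear at alpha == 0.
        h = self.alpha * torch.tanh(h) + (1.0 - self.alpha) * h
        return self.base(x) + self.scaling * self.B(h)

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        """Fold the (now linear) adapter into the frozen base weight."""
        assert self.alpha == 0.0, "merge is only exact once annealing completes"
        self.base.weight += self.scaling * (self.B.weight @ self.A.weight)
        return self.base
```

In this sketch the training loop would decay `alpha` from 1 to 0 (for example, linearly over the fine-tuning steps); whatever schedule the paper actually uses, the annealing is what reconciles non-linear expressivity during training with a mergeable linear adapter at the end.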
Reference

AFA-LoRA reduces the performance gap between LoRA and full-parameter training.