Paper · #LLM · 🔬 Research · Analyzed: Jan 3, 2026 16:28

AFA-LoRA: Enhancing LoRA with Non-Linear Adaptations

Published: Dec 27, 2025 04:12
1 min read
ArXiv

Analysis

This paper addresses a key limitation of LoRA, a popular parameter-efficient fine-tuning method: its purely linear adaptation. The authors' AFA-LoRA incorporates non-linear expressivity into the adapter, potentially improving performance and narrowing the gap with full-parameter fine-tuning. Its annealed activation function is a novel way to add non-linearity while preserving LoRA's mergeability.
Reference

AFA-LoRA reduces the performance gap between LoRA and full-parameter training.
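The summary does not specify the paper's activation or annealing schedule, so the sketch below is only an illustration of the general idea: `annealed_activation`, `afa_lora_forward`, the `tanh` non-linearity, and the schedule parameter `t` are all assumptions, not AFA-LoRA's actual design. It shows why annealing the activation toward the identity preserves mergeability: once the adapter is linear, the low-rank update folds back into the frozen weight.

```python
import numpy as np

def annealed_activation(x, t):
    # Interpolate between a non-linearity (tanh, assumed here) and the
    # identity. At t = 0 the adapter is fully non-linear; at t = 1 it
    # is exactly linear, restoring LoRA-style mergeability.
    return (1.0 - t) * np.tanh(x) + t * x

def afa_lora_forward(W, A, B, x, t, alpha=1.0):
    # Frozen base projection plus a low-rank update whose inner
    # activation is annealed over training via t.
    return W @ x + alpha * (B @ annealed_activation(A @ x, t))

rng = np.random.default_rng(0)
d, r = 8, 2
W = rng.normal(size=(d, d))            # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.1      # trainable down-projection
B = rng.normal(size=(d, r)) * 0.1      # trainable up-projection
x = rng.normal(size=d)

# Once t has annealed to 1 the adapter is linear, so B @ A merges into W.
merged = W + B @ A
assert np.allclose(afa_lora_forward(W, A, B, x, t=1.0), merged @ x)
```

During training the non-linearity (small `t`) adds expressivity; by the end of the schedule (`t = 1`) the adapter collapses to an ordinary LoRA update with zero inference overhead after merging.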

Research · #Sampling · 🔬 Research · Analyzed: Jan 10, 2026 09:37

New Bounds for Multimodal Sampling: Improving Efficiency

Published: Dec 19, 2025 12:11
1 min read
ArXiv

Analysis

This research improves sampling from multimodal distributions, a core challenge in many AI applications. The paper proposes a novel algorithm, the Reweighted Annealed Leap-Point Sampler, and provides theoretical guarantees on its efficiency.
Reference

The research focuses on the Reweighted Annealed Leap-Point Sampler.