Hallucination-Resistant Decoding for LVLMs

Published: Dec 29, 2025 13:23
ArXiv

Analysis

This paper addresses a critical problem in Large Vision-Language Models (LVLMs): hallucination, where generated text asserts content not grounded in the input image. It proposes CoFi-Dec, a training-free decoding framework that mitigates hallucination through generative self-feedback and coarse-to-fine visual conditioning. Because the method intervenes only at decoding time, it is model-agnostic, and it demonstrates significant improvements on hallucination-focused benchmarks. The Wasserstein-based fusion mechanism for aligning predictions across visual granularities is the most distinctive design choice.
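
The paper's exact algorithm is not reproduced here, but the description above suggests one plausible shape: decode one token at a time against several visual views (coarse to fine), then fuse the per-view next-token distributions so that views disagreeing with the consensus are down-weighted. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation; `model.next_token_probs`, the exponential weighting, and the use of 1D Wasserstein distance over token indices as a crude ground metric are all assumptions introduced here.

```python
# Illustrative sketch only: fuses per-view next-token distributions with
# Wasserstein-derived weights. The paper's actual fusion and ground metric
# may differ; every named hook below is hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance


def fuse_distributions(dists: list[np.ndarray]) -> np.ndarray:
    """Fuse next-token distributions from multiple visual views.

    Each view's distribution is weighted by its closeness (1D Wasserstein
    distance over token indices, a simplification chosen for readability)
    to the uniform average of all views, so outlier views contribute less.
    """
    support = np.arange(dists[0].shape[0])       # vocab indices as support
    consensus = np.mean(dists, axis=0)           # naive consensus distribution
    d = np.array([
        wasserstein_distance(support, support, p, consensus) for p in dists
    ])
    w = np.exp(-d)                               # closer to consensus -> heavier
    w /= w.sum()
    fused = sum(wi * pi for wi, pi in zip(w, dists))
    return fused / fused.sum()                   # renormalize


def decode_step(model, image_views, prefix_ids) -> int:
    """One coarse-to-fine decoding step.

    `model.next_token_probs(view, prefix_ids)` is a hypothetical hook that
    returns a vocab-sized probability vector conditioned on one visual view
    (e.g., a downsampled "coarse" image up to the full-resolution "fine" one).
    """
    dists = [model.next_token_probs(v, prefix_ids) for v in image_views]
    fused = fuse_distributions(dists)
    return int(np.argmax(fused))                 # greedy pick from fused dist
```

In a self-feedback variant, the fusion weights could instead come from the model's own consistency scores over its earlier generations rather than distances to the mean distribution; the sketch only fixes one concrete choice so the fusion step is runnable.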
Reference / Citation
"CoFi-Dec substantially reduces both entity-level and semantic-level hallucinations, outperforming existing decoding strategies."
ArXiv, Dec 29, 2025 13:23