Hallucination-Resistant Decoding for LVLMs
Analysis
Key Takeaways
- Proposes CoFi-Dec, a training-free decoding framework that reduces hallucinations in large vision-language models (LVLMs).
- Combines coarse-to-fine visual conditioning with generative self-feedback.
- Fuses predictions from the coarse and fine views via a Wasserstein-based alignment mechanism.
- Improves performance on hallucination-focused benchmarks.
- Model-agnostic: applicable to a wide range of LVLMs.
“CoFi-Dec substantially reduces both entity-level and semantic-level hallucinations, outperforming existing decoding strategies.”
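To make the fusion idea concrete, here is a minimal sketch of how two next-token distributions (one from a coarse visual view, one from a fine-grained view) could be combined with a Wasserstein-based weight. This is an illustrative assumption, not the paper's actual algorithm: the function names, the exponential alignment weight, and the fallback-to-coarse behavior are all hypothetical.

```python
import numpy as np

def wasserstein_1d(p: np.ndarray, q: np.ndarray) -> float:
    """1-D Wasserstein distance between two discrete distributions
    on the same ordered support: the sum of absolute CDF differences."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

def fuse_predictions(p_coarse: np.ndarray, p_fine: np.ndarray,
                     beta: float = 1.0) -> np.ndarray:
    """Hypothetical Wasserstein-weighted fusion of two token distributions.

    When the fine-grained view agrees with the coarse view (small
    Wasserstein distance), its prediction dominates; when the views
    diverge, the fused distribution falls back toward the coarse view.
    """
    w = wasserstein_1d(p_coarse, p_fine)
    alpha = np.exp(-beta * w)  # alignment weight in (0, 1]
    fused = alpha * p_fine + (1.0 - alpha) * p_coarse
    return fused / fused.sum()  # renormalize for numerical safety
```

Because both inputs are probability distributions, the convex combination stays a valid distribution; the `beta` hyperparameter would control how aggressively disagreement is penalized.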