Paper · #llm · 🔬 Research · Analyzed: Jan 3, 2026 16:06

Hallucination-Resistant Decoding for LVLMs

Published: Dec 29, 2025 13:23
1 min read
ArXiv

Analysis

This paper addresses a critical problem in Large Vision-Language Models (LVLMs): hallucination. It proposes CoFi-Dec, a training-free decoding framework that combines generative self-feedback with coarse-to-fine visual conditioning to suppress hallucinated content. Because the method intervenes only at decoding time, it is model-agnostic, and it shows significant gains on hallucination-focused benchmarks. The Wasserstein-based fusion mechanism used to align predictions across visual granularities is particularly noteworthy.
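The paper's internals are not reproduced in this digest, so the sketch below is only a guess at the shape of such a decoding step: it fuses next-token distributions obtained under coarse (downsampled) and fine (full-resolution) visual conditioning, with a mixing weight driven by their Wasserstein-1 distance. The names (`fuse_predictions`, `w1_distance`, `tau`) and the unit-spaced ordering of token ids used in the W1 computation are illustrative assumptions, not CoFi-Dec's actual mechanism.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def w1_distance(p, q):
    # Wasserstein-1 between two categorical distributions, treating token
    # ids as an ordered 1-D support with unit spacing. This ordering is
    # purely illustrative; the paper's actual ground metric is not known.
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def fuse_predictions(logits_coarse, logits_fine, tau=1.0):
    """Mix coarse- and fine-conditioned next-token distributions.

    The mixing weight shrinks toward the fine-grained view when the two
    distributions disagree (large W1); this rule is a guess, not the
    paper's published fusion mechanism.
    """
    p_c, p_f = softmax(logits_coarse), softmax(logits_fine)
    lam = np.exp(-w1_distance(p_c, p_f) / tau)  # agreement -> trust coarse view
    return lam * p_c + (1.0 - lam) * p_f

# Toy usage with random logits standing in for an LVLM's output head.
rng = np.random.default_rng(0)
vocab = 32
logits_coarse = rng.normal(size=vocab)  # conditioned on a downsampled image
logits_fine = rng.normal(size=vocab)    # conditioned on full-resolution crops
p = fuse_predictions(logits_coarse, logits_fine)
next_token = int(p.argmax())
```

Because the fusion operates only on output distributions, a step like this can wrap any LVLM's decoder without retraining, which is consistent with the model-agnostic claim above.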
Reference

CoFi-Dec substantially reduces both entity-level and semantic-level hallucinations, outperforming existing decoding strategies.

Research · #Segmentation · 🔬 Research · Analyzed: Jan 10, 2026 11:47

Novel Approach to Out-of-Distribution Segmentation Using Wasserstein Uncertainty

Published: Dec 12, 2025 08:36
1 min read
ArXiv

Analysis

This research proposes a method for detecting out-of-distribution data in image segmentation using Wasserstein-based evidential uncertainty. It targets a key obstacle to deploying segmentation models in real-world settings, where inputs routinely fall outside the training distribution.
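Only the high-level idea is available here, so the sketch below uses plain Dirichlet vacuity, a standard evidential uncertainty score, as a stand-in for the paper's Wasserstein-based variant. The function name, shapes, and threshold are all hypothetical.

```python
import numpy as np

def evidential_ood_map(evidence_logits):
    """Per-pixel OOD score from an evidential segmentation head.

    `evidence_logits` has shape (K, H, W): one raw evidence map per class.
    The score is Dirichlet vacuity K / S, which is high wherever the model
    has gathered little evidence for any class; the paper's Wasserstein
    refinement of this score is not reproduced here.
    """
    evidence = np.logaddexp(0.0, evidence_logits)  # softplus -> non-negative
    alpha = evidence + 1.0                         # Dirichlet concentrations
    strength = alpha.sum(axis=0)                   # S = sum_k alpha_k
    k = alpha.shape[0]
    return k / strength                            # vacuity in (0, 1]

# Toy usage: flag pixels whose vacuity exceeds a validation-tuned threshold.
rng = np.random.default_rng(1)
logits = rng.normal(size=(19, 64, 64))  # e.g., 19 Cityscapes-style classes
ood_mask = evidential_ood_map(logits) > 0.5
```

Vacuity falls out of a single forward pass, which is why evidential scores like this are attractive for OOD segmentation compared with ensemble- or sampling-based uncertainty.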
Reference

Source: ArXiv.