TV-RAG: Enhancing Long Video Understanding with Temporal and Semantic Awareness
Analysis
This paper addresses the limitations of Large Video Language Models (LVLMs) in handling long videos. It proposes TV-RAG, a training-free architecture that improves long-video reasoning by combining temporal alignment with entropy-guided semantics. The key contributions are a time-decay retrieval module and an entropy-weighted key-frame sampler, which together offer a lightweight, budget-friendly upgrade path for existing LVLMs. The work's significance lies in improving performance on long-video benchmarks without any retraining, making it a practical drop-in enhancement for video understanding.
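The summary does not spell out the retrieval formula, but a typical time-decay retrieval score discounts embedding similarity by temporal distance from the moment a query refers to. Below is a minimal NumPy sketch under that assumption; the function name and parameters (`decay`, `query_time`) are hypothetical and may differ from the paper's actual formulation.

```python
import numpy as np

def time_decay_scores(query_emb, frame_embs, frame_times, query_time, decay=0.1):
    """Score frames: cosine similarity discounted by temporal distance.

    query_emb   : (D,) query embedding
    frame_embs  : (N, D) frame/caption embeddings
    frame_times : (N,) frame timestamps in seconds
    query_time  : timestamp the query is anchored to (hypothetical input)
    decay       : rate controlling how fast relevance falls off in time
    """
    q = query_emb / np.linalg.norm(query_emb)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    sim = f @ q                                                   # cosine similarity per frame
    weights = np.exp(-decay * np.abs(frame_times - query_time))   # exponential time decay
    return sim * weights                                          # temporally aligned scores

# Usage: rank frames for retrieval under the decayed score.
# rng = np.random.default_rng(0)
# embs = rng.normal(size=(100, 512)); q = rng.normal(size=512)
# times = np.arange(100, dtype=float)
# top5 = np.argsort(time_decay_scores(q, embs, times, query_time=40.0))[-5:]
```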
Key Takeaways
- Proposes TV-RAG, a training-free architecture for long video understanding.
- Employs a time-decay retrieval module for temporal alignment.
- Utilizes an entropy-weighted key-frame sampler for semantic awareness (see the sketch after this list).
- Offers a lightweight and budget-friendly upgrade path for existing LVLMs.
- Achieves state-of-the-art performance on long-video benchmarks.
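The entropy-weighted sampler is likewise only named here, not specified. One plausible reading is that frames are kept under a fixed budget with probability proportional to their visual entropy; the sketch below assumes grayscale histogram entropy and is illustrative only.

```python
import numpy as np

def entropy_weighted_sample(frames, budget, bins=32, seed=0):
    """Pick `budget` key frames with probability proportional to pixel entropy.

    Hypothetical reading of the entropy-weighted sampler: higher-entropy
    frames carry more visual information, so they are preferentially kept
    under a fixed frame budget. `frames` is a list of grayscale uint8 arrays.
    """
    rng = np.random.default_rng(seed)
    entropies = []
    for frame in frames:
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        p = hist / hist.sum()                      # empirical intensity distribution
        p = p[p > 0]                               # drop empty bins to avoid log(0)
        entropies.append(-(p * np.log(p)).sum())   # Shannon entropy of the frame
    probs = np.asarray(entropies, dtype=float)
    probs /= probs.sum()
    k = min(budget, len(frames))
    idx = rng.choice(len(frames), size=k, replace=False, p=probs)
    return np.sort(idx)                            # keep selected frames in temporal order
```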
“TV-RAG realizes a dual-level reasoning routine that can be grafted onto any LVLM without re-training or fine-tuning.”