Generating the Past, Present and Future from a Motion-Blurred Image
Published: Dec 24, 2025 05:00 • 1 min read • ArXiv Vision
Analysis
This paper presents a novel approach to motion-blur deconvolution that leverages pre-trained video diffusion models. The key innovation is repurposing these models, trained on internet-scale datasets, not only to reconstruct a sharp image but also to generate plausible video sequences depicting the scene's past and future, going beyond traditional deblurring techniques that focus solely on restoring image clarity. The method's robustness and versatility are significant contributions, demonstrated by superior performance on challenging real-world images and by support for downstream tasks such as camera-trajectory recovery. The availability of code and data further strengthens the reproducibility and impact of the work. However, the paper would benefit from a more detailed discussion of the computational resources required for training and inference.
Key Takeaways
- Motion blur can be used to infer past and future scene dynamics.
- Pre-trained video diffusion models can be repurposed for motion deblurring and video generation.
- The method outperforms previous techniques and generalizes well to real-world images.
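To make the first takeaway concrete, the sketch below simulates the standard forward model of motion blur: the observed image is (approximately) the temporal average of the sharp frames captured during the exposure. This is a toy illustration with synthetic frames, not the paper's implementation; all names here are our own.

```python
import numpy as np

# Toy forward model: a motion-blurred image is approximately the
# temporal average of the sharp frames seen during the exposure.
rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                 # number of exposure frames, image height/width
frames = rng.random((T, H, W))    # synthetic sharp frames during the exposure
blurred = frames.mean(axis=0)     # the observed motion-blurred image

# Inverting this map is ill-posed: many different frame sequences
# average to the same blurred image. That ambiguity is why a strong
# generative video prior (here, a pre-trained video diffusion model)
# is needed to select a plausible past/present/future sequence.
print(blurred.shape)  # (4, 4)
```

Because the averaging destroys the temporal ordering of the frames, any method recovering a video from `blurred` must hallucinate plausible dynamics, which is exactly the role the video diffusion prior plays in the paper.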
Reference
“We introduce a new technique that repurposes a pre-trained video diffusion model trained on internet-scale datasets to recover videos revealing complex scene dynamics during the moment of capture and what might have occurred immediately into the past or future.”