SpaceTimePilot: Generative Video Rendering with Space-Time Control
Published: Dec 31, 2025 18:59 · 1 min read · ArXiv
Analysis
This paper introduces SpaceTimePilot, a video diffusion model that enables independent control of camera viewpoint and motion sequence in generated videos. The key innovation is disentangling space from time within the generative process, which makes controllable generative rendering possible. To address the scarcity of suitable training data, the authors propose a temporal-warping training scheme and introduce CamxTime, a new synthetic dataset of space-time video trajectories. The work is significant because it offers fine-grained control over both the spatial and temporal dimensions of generated video, with potential applications in video editing and virtual reality.
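The post contains no code, but the disentanglement idea can be made concrete. Below is a minimal PyTorch sketch, assuming a conditioning module that embeds camera pose and temporal position separately and sums them before injection into a diffusion denoiser. All names (`SpaceTimeConditioner`, `camera_embed`, `time_embed`) and shapes are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch of disentangled space-time conditioning for a video
# diffusion denoiser. Names and shapes are illustrative, not from the paper.
import torch
import torch.nn as nn

class SpaceTimeConditioner(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # Camera pose: e.g. a flattened 3x4 extrinsic matrix per frame.
        self.camera_embed = nn.Linear(12, d_model)
        # Temporal position: a scalar time index per frame.
        self.time_embed = nn.Linear(1, d_model)

    def forward(self, camera_poses, time_indices):
        # camera_poses: (B, T, 12), time_indices: (B, T, 1)
        # Embedding the two signals separately keeps space (viewpoint)
        # and time (motion progress) as independent control axes.
        c_space = self.camera_embed(camera_poses)
        c_time = self.time_embed(time_indices)
        return c_space + c_time  # (B, T, d_model), added to denoiser features

# Usage: vary one axis while holding the other fixed.
cond = SpaceTimeConditioner()
B, T = 1, 8
poses = torch.randn(B, T, 12)                   # a new camera trajectory
times = torch.linspace(0, 1, T).view(1, T, 1)   # the original motion timing
features = cond(poses, times)
print(features.shape)  # torch.Size([1, 8, 256])
```

Because the two signals enter through separate embeddings, a new camera trajectory can be paired with the original motion timing, or vice versa, which is the kind of independent space-time control the paper describes.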
Key Takeaways
- Introduces SpaceTimePilot, a video diffusion model for controllable generative rendering.
- Achieves space-time disentanglement, allowing independent control of camera viewpoint and motion.
- Proposes a temporal-warping training scheme to address training-data scarcity (see the sketch after this list).
- Introduces CamxTime, a synthetic dataset of space-time video trajectories.
- Demonstrates strong results compared to prior work.
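To make the temporal-warping idea concrete: one plausible reading is that a clip's frames are resampled along a warped time axis, producing sequences in which scene content stays fixed while motion timing varies. The sketch below assumes nearest-neighbor frame resampling and a user-supplied warp function; it illustrates the concept rather than the paper's exact scheme.

```python
# Hypothetical temporal-warping augmentation: resample a clip's frames
# along a warped time axis so scene content stays fixed while motion
# timing changes. Details are illustrative, not the paper's exact method.
import torch

def temporal_warp(video: torch.Tensor, warp_fn) -> torch.Tensor:
    """video: (T, C, H, W). warp_fn maps normalized time [0, 1] -> [0, 1]."""
    T = video.shape[0]
    src_times = torch.linspace(0, 1, T)
    # Warp each target timestamp, then pick the nearest source frame.
    warped = torch.clamp(warp_fn(src_times), 0.0, 1.0)
    idx = torch.round(warped * (T - 1)).long()
    return video[idx]

video = torch.randn(16, 3, 64, 64)                     # a dummy 16-frame clip
slow_start = temporal_warp(video, lambda t: t ** 2)    # ease-in timing
reversed_clip = temporal_warp(video, lambda t: 1 - t)  # time reversal
print(slow_start.shape, reversed_clip.shape)
```

Augmentations of this kind would let a model see the same scene under many motion timings, which is one way a temporal-warping scheme could compensate for scarce space-time training data.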
Reference
“SpaceTimePilot can independently alter the camera viewpoint and the motion sequence within the generative process, re-rendering the scene for continuous and arbitrary exploration across space and time.”