SpaceTimePilot: Generative Video Rendering with Space-Time Control
Analysis
Key Takeaways
- Introduces SpaceTimePilot, a video diffusion model for controllable generative rendering.
- Achieves space-time disentanglement, allowing independent control of camera viewpoint and scene motion (see the conditioning sketch below).
- Proposes a temporal-warping training scheme to address data scarcity (see the warping sketch below).
- Introduces CamxTime, a synthetic dataset of space-time video trajectories.
- Demonstrates strong results compared to prior controllable video generation methods.
“SpaceTimePilot can independently alter the camera viewpoint and the motion sequence within the generative process, re-rendering the scene for continuous and arbitrary exploration across space and time.”
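To make the idea of space-time disentanglement concrete, here is a minimal sketch (not the authors' architecture) of a denoiser that receives camera pose and motion-time as two separate per-frame conditions. The module name, dimensions, and conditioning scheme are illustrative assumptions; the point is that the viewpoint path and the motion clock enter the model through independent inputs, so either can be varied while the other is held fixed.

```python
# Hypothetical sketch: a video-diffusion denoiser with split space/time conditioning.
# Names, shapes, and the conditioning mechanism are assumptions for illustration only.
import torch
import torch.nn as nn

class DisentangledDenoiser(nn.Module):
    """Toy denoiser conditioned separately on camera pose (space) and motion time."""
    def __init__(self, latent_dim=64, cond_dim=128):
        super().__init__()
        # Camera pose per frame: 12 values from a flattened 3x4 extrinsic matrix (assumed encoding).
        self.camera_embed = nn.Linear(12, cond_dim)
        # Motion time per frame: a scalar progress value in [0, 1] (assumed encoding).
        self.time_embed = nn.Linear(1, cond_dim)
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + 2 * cond_dim, 256),
            nn.SiLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, noisy_latent, camera_pose, motion_time):
        # noisy_latent: (B, T, latent_dim); camera_pose: (B, T, 12); motion_time: (B, T, 1)
        cond = torch.cat(
            [self.camera_embed(camera_pose), self.time_embed(motion_time)], dim=-1
        )
        return self.backbone(torch.cat([noisy_latent, cond], dim=-1))

# Independent control: sweep the camera along a new path while freezing motion time
# (a "bullet time" style render), or advance motion time under a fixed camera.
model = DisentangledDenoiser()
latents = torch.randn(1, 16, 64)
poses = torch.randn(1, 16, 12)        # new viewpoint trajectory
frozen_time = torch.zeros(1, 16, 1)   # scene motion held still
out = model(latents, poses, frozen_time)
print(out.shape)  # torch.Size([1, 16, 64])
```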
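The takeaway on temporal warping only states that the scheme addresses data scarcity, so the following is a loose sketch of one plausible reading: re-sample an existing clip along a random monotonic time warp so that the same content appears at varied motion timings, yielding extra training signal without new capture. The function name and warp construction are assumptions, not the paper's procedure.

```python
# Assumed illustration of a temporal-warping augmentation, not the paper's exact scheme:
# re-sample a clip along a random monotonic time mapping to vary motion timing.
import numpy as np

def temporal_warp(frames: np.ndarray, num_out: int, rng: np.random.Generator) -> np.ndarray:
    """frames: (T, H, W, C) video array. Returns (num_out, H, W, C) re-timed frames."""
    T = frames.shape[0]
    # Cumulative positive steps give a monotonic output-to-source time mapping,
    # so motion can speed up, slow down, or nearly pause.
    steps = rng.uniform(0.1, 1.0, size=num_out)
    t = np.concatenate([[0.0], np.cumsum(steps)])
    t = t / t[-1] * (T - 1)
    # Nearest-frame sampling keeps the sketch simple; frame interpolation is another option.
    idx = np.clip(np.round(t[:num_out]).astype(int), 0, T - 1)
    return frames[idx]

rng = np.random.default_rng(0)
video = rng.integers(0, 255, size=(32, 8, 8, 3), dtype=np.uint8)  # toy clip
warped = temporal_warp(video, num_out=16, rng=rng)
print(warped.shape)  # (16, 8, 8, 3)
```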