This workshop aims to bring together researchers and practitioners from computer vision, machine learning, and physics-based modeling to discuss the latest advances in video generation and restoration. It will emphasize the importance of integrating physical constraints and real-world priors into generative video models to ensure realism, temporal consistency, and applicability across diverse domains.
Physics-aware video generation and restoration are essential for applications where realistic motion, temporal consistency, and adherence to physical laws are critical, including:
- Autonomous driving: Accurate motion forecasting and scene reconstruction
- Medical imaging: High-fidelity generation/restoration for better diagnostics
- Scientific simulations: Data-driven video generation for physics-based modeling
- Streaming, AR/VR, and gaming: Real-time video enhancement for immersive experiences
- Surveillance and forensics: Reconstruction of occluded or degraded video
- Infant/toddler monitoring: Detecting and restoring subtle movements in low-quality recordings
- Elderly care and assistive technology: Enhancing visibility and understanding of movement patterns