PAVER: Physics-Aware Video gEneration and Restoration

1st Workshop on Physics-Aware Video Generation and Restoration
at the 28th International Conference on Pattern Recognition

About the Workshop

This workshop aims to bring together researchers and practitioners from computer vision, machine learning, and physics-based modeling to discuss the latest advancements in video generation and restoration. The workshop will emphasize the importance of integrating physical constraints and real-world priors into generative video models to ensure realism, consistency, and applicability in diverse domains.

Physics-aware video generation and restoration are fundamental for applications where realistic motion, temporal consistency, and adherence to physical laws are critical, including:

  • Autonomous driving: Accurate motion forecasting and scene reconstruction
  • Medical imaging: High-fidelity generation/restoration for better diagnostics
  • Scientific simulations: Data-driven video generation for physics-based modeling
  • Streaming, AR/VR, and gaming: Real-time video enhancement for immersive experiences
  • Surveillance and forensics: Reconstruction of occluded or degraded video
  • Infant/toddler monitoring: Detecting and restoring subtle movements in low-quality recordings
  • Elderly care and assistive technology: Enhancing visibility and understanding of movement patterns

Key Topics

We invite submissions on topics including, but not limited to:

  • Physics-aware video generation: Integrating physical laws, fluid dynamics, and rigid-body motion into generative models
  • Temporal consistency in video generation: Addressing flickering, motion coherence, and long-term dynamics
  • Video super-resolution and enhancement: Transforming low-quality video (SD-to-HD, SDR-to-HDR) while preserving physical realism
  • Generative AI for video restoration: Applications of diffusion models, GANs, and transformers
  • Synthetic data generation for video restoration: Using physics-based rendering and generative models
  • Domain adaptation and cross-modal learning: Leveraging multi-modal information for robust video synthesis
  • Efficient video generation and restoration: Optimizing for real-time applications and edge computing
  • Self-supervised and unsupervised learning for video restoration: Training with minimal labeled data
  • Video artifact removal and enhancement: Denoising, deblurring, inpainting, and compression-artifact removal
  • Benchmarking and evaluation metrics: Novel full-reference, reduced-reference, and no-reference metrics
  • Application-driven video enhancement: For medical, autonomous driving, surveillance, and creative industries
  • Ethical considerations and societal impact: Addressing biases, misinformation risks, and responsible AI use

Workshop Schedule

TBD

Call for Papers

We invite original submissions to the PAVER Workshop at ICPR 2026.

Proceedings

Accepted full papers will be published in the official ICPR 2026 Workshop Proceedings in the Lecture Notes in Computer Science (LNCS) series. Short papers will be presented at the workshop but will not appear in the LNCS proceedings.

Important Dates

Submission Deadline: May 01, 2026
Author Notification: June 10, 2026
Camera-Ready Deadline: June 20, 2026
Workshop Date: August 22, 2026

Submission Guidelines

  • Submissions must follow the LNCS formatting guidelines provided on the ICPR 2026 website.
  • Papers should not exceed 15 pages, including references.
  • Papers exceeding 15 pages will incur an additional fee of €150 per extra page.
  • Only full papers (more than 6 pages) will be published in the LNCS proceedings.
  • Short papers (6 pages or fewer) will be presented at the workshop but will not appear in the proceedings.
  • All submissions must be original and not under review elsewhere.
  • Papers will undergo peer review by the program committee.

Organizers

Dr. Yarong Feng

Senior Applied Scientist
Amazon

Ph.D. in Statistics, expertise in probability theory, computer vision, and multi-modal learning

Prof. Sarah Ostadabbas

Associate Professor
Northeastern University

Director of the Augmented Cognition Lab and Women in Engineering, NSF CAREER Award recipient

Dr. Qipin Chen

Senior Applied Scientist
Amazon

Ph.D. in Computational Mathematics, specializing in computer vision and multi-modal learning

Dr. Zongyi (Joe) Liu

Principal Computer Vision Scientist
Amazon

Ph.D. in Computer Science, 15+ years of industrial research experience

Dr. Agata Lapedriza

Principal Research Scientist
Northeastern University

Specializing in human-centric AI, affective computing, and responsible AI

Dr. Hai Wei

Principal Video Specialist
Prime Video, Amazon

Ph.D. in Computer Science, 16+ years of experience in video compression and quality analysis

Dr. Zicheng Liu

Senior Director of GenAI
AMD

IEEE Fellow, former Editor-in-Chief of JVCIR, expert in foundation models and computer vision

Dr. Minmin Shen

Senior Applied Scientist
Amazon

Ph.D. in Computer Vision, expertise in computer vision, NLP, and multi-modal learning

Contact

For any questions or inquiries, please contact us at:

Email: icpr2026-paver-workshop@amazon.com