aim-uofa/Framer

Official PyTorch implementation of "Framer: Interactive Frame Interpolation".


🔆 TL;DR

We propose Framer, a controllable and interactive frame interpolation method that produces smoothly transitioning frames between two images. By letting users customize the trajectories of selected keypoints, Framer offers finer control over the transition and handles challenging cases more reliably.

Main Claims

The proposed method, Framer, provides interactive frame interpolation: users customize transitions by tailoring the trajectories of selected keypoints. This mitigates the ambiguity of the image transformation, enables much finer control of local motion, and improves the model's ability to handle challenging cases (e.g., objects with differing shapes and styles). Framer also includes an "autopilot" mode that automatically estimates keypoints and refines trajectories, simplifying the process and producing natural, temporally coherent motion.
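
To make the interaction concrete, a keypoint trajectory can be represented as one (x, y) position per output frame. The structure below is purely illustrative; the names and shapes are hypothetical and do not reflect Framer's actual interface.

# Purely illustrative: each user-selected keypoint gets one (x, y)
# position per output frame (hypothetical structure, not Framer's API).
num_frames = 25
trajectories = [
    [(50 + 10 * t, 200) for t in range(num_frames)],  # a point drifting right
    [(300, 120)] * num_frames,                        # a point held in place
]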

Methodology

This work uses a large-scale pre-trained image-to-video diffusion model, Stable Video Diffusion, as the base model. It introduces additional end-frame conditioning to enable video interpolation and incorporates a point-trajectory control branch for user interaction.
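
For context, here is a minimal sketch of running the stock Stable Video Diffusion base model through the Hugging Face diffusers library. Framer's end-frame conditioning and trajectory-control branch are built on top of this base model and are not part of the stock pipeline shown here.

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the pre-trained image-to-video base model (SVD-XT).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Stock SVD conditions on a single start frame only; Framer additionally
# conditions on the end frame and on user-drawn keypoint trajectories.
start_frame = load_image("start.png").resize((1024, 576))
frames = pipe(start_frame, num_frames=25, decode_chunk_size=8).frames[0]
export_to_video(frames, "base_model_output.mp4", fps=7)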

Key Results

Framer outperforms existing frame interpolation methods in visual quality and motion naturalness, particularly in cases involving complex motion and significant appearance changes. Quantitative evaluation shows lower FVD (Fréchet Video Distance) than competing methods, and user studies show a strong preference for Framer's output, highlighting its effectiveness in producing realistic and visually appealing results.
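
For reference, FVD measures the Fréchet distance between feature distributions of real and generated videos (lower is better); features are typically extracted with a pre-trained I3D network. The sketch below assumes the features have already been extracted and only computes the distance itself.

import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, gen_feats):
    # d^2 = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 * (C_r @ C_g)^{1/2})
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical noise in the imaginary part
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))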

💡 Changelog

  • Release the code and checkpoints.
  • Oct. 28, 2024. The Hugging Face Gradio demo is now available here!
  • Oct. 25, 2024. Launched the project page and uploaded the arXiv preprint.

Showcases

Note that the videos are spatially compressed; we refer readers to the project page for the original videos.

1. Video Interpolation with User-Interaction

[Video panels: start image, input trajectory & interpolation results, end image; see the project page.]

2. Image Morphing with User-Interaction

[Video panels: start image, input trajectory & interpolation results, end image; see the project page.]

3. Video Interpolation without User-Input Control

[Video panels: start image, interpolation results, end image; see the project page.]

4. Novel View Synthesis

[Video panels: start image, interpolation results, end image; see the project page.]

5. Cartoon and Sketch Interpolation

[Video panels: start image, interpolation results, end image; see the project page.]

6. Time-lapse Video Generation

[Video panels: start image, interpolation results, end image; see the project page.]

📖 Citation

Please consider citing our paper if you find our code useful:

@article{wang2024framer,
  title={Framer: Interactive Frame Interpolation},
  author={Wang, Wen and Wang, Qiuyu and Zheng, Kecheng and Ouyang, Hao and Chen, Zhekai and Gong, Biao and Chen, Hao and Shen, Yujun and Shen, Chunhua},
  journal={arXiv preprint arXiv:2410.18978},
  year={2024}
}

🎫 License

For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.

😉 Acknowledgements

  • Thanks to SVD_Xtend for the wonderful work and codebase.
  • Thanks to DragAnything for the wonderful work and codebase.
