
[TMM 2024] Motion Deblur by Learning Residual from Events


If you like our project, please give us a star ⭐ on GitHub.

Authors: Kang Chen and Lei Yu✉️ from Wuhan University, Wuhan, China.


  • Our new work S-SDM, designed for spike-based motion deblurring, has been accepted at NeurIPS 2024 (Spotlight) 🎉🎉🎉. Check it out at https://github.com/chenkang455/S-SDM 😊😊😊.

📕 Abstract

We propose a Two-stage Residual-based Motion Deblurring (TRMD) framework for an event camera, which converts a blurry image into a sequence of sharp images, leveraging the abundant motion features encoded in events. In the first stage, a residual estimation network is trained to estimate the residual sequence, which measures the intensity difference between the intermediate frame and other frames sampled during the exposure. In the subsequent stage, the previously estimated residuals are combined with the blurry image to reconstruct the deblurred sequence based on the physical model of motion blur.
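Under the physical model of motion blur, a blurry image is the temporal average of the sharp latent frames sampled during the exposure, so the second-stage reconstruction reduces to simple arithmetic. A minimal NumPy sketch of that arithmetic (illustrative only; the variable names are ours, not from the released code, and the residuals here are ground truth rather than network estimates):

```python
import numpy as np

# Physical model: a blurry image B is the temporal average of N sharp
# latent frames L_i sampled during the exposure:  B = (1/N) * sum_i L_i.
# The residual R_i = L_i - L_mid is the intensity difference between
# frame i and the intermediate (middle) frame L_mid.

rng = np.random.default_rng(0)
N, H, W = 7, 4, 4
latent = rng.random((N, H, W))        # ground-truth sharp sequence
blurry = latent.mean(axis=0)          # synthesize the blurry image

mid = N // 2
residuals = latent - latent[mid]      # what the first-stage network estimates

# Stage two: combine the residuals with the blurry image.
# B = L_mid + (1/N) * sum_i R_i  =>  L_mid = B - mean_i(R_i),
# and every other frame follows as L_i = L_mid + R_i.
recovered_mid = blurry - residuals.mean(axis=0)
recovered_seq = recovered_mid[None] + residuals

assert np.allclose(recovered_seq, latent)
```

With exact residuals the reconstruction is exact; in practice the quality of the deblurred sequence is governed by the first-stage residual estimates.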

👀 Visual Comparisons

GoPro dataset

gopro_table

REBlur dataset

reblur_table

🌏 Setup environment

git clone https://github.com/chenkang455/TRMD
cd TRMD
pip install -r requirements.txt

🕶 Download datasets

You can download our trained models, the synthesized GOPRO dataset, and the real event dataset REBlur (from EFNet) from Baidu Netdisk with the password eluc.

Unzip GOPRO.zip, then place the downloaded models and datasets (paths defined in config.yaml) according to the following directory structure:

├── Data
│   ├── GOPRO
│   │   ├── train
│   │   └── test
│   └── REBlur
│       ├── train
│       ├── test
│       ├── addition
│       └── README.md
├── Pretrained_Model
│   ├── RE_Net_GRAY.pth
│   └── RE_Net_RGB.pth
├── config.yaml
├── ...

🍭 Configs

Change the data path and other parameters (if needed) in config.yaml.
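The exact keys depend on the released config.yaml; as a purely hypothetical sketch of the kind of entries to adjust (the key names below are illustrative, not the actual ones):

```yaml
# Hypothetical example -- check the real config.yaml for the actual key names.
gopro_path: Data/GOPRO      # illustrative key name
reblur_path: Data/REBlur    # illustrative key name
```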

🌅 Test with our pre-trained models

  • To compute metrics and visualize the deblurred results on GRAY-GOPRO:
python test_GoPro.py --rgb False --load_unet True --load_path Pretrained_Model/RE_Net_GRAY.pth
  • To compute metrics and visualize the deblurred results on RGB-GOPRO:
python test_GoPro.py --rgb True --load_unet True --load_path Pretrained_Model/RE_Net_RGB.pth
  • To visualize the deblurred results on REBlur:
python test_REBlur.py --load_unet True --load_path Pretrained_Model/RE_Net_GRAY.pth
  • To measure the model size and FLOPs:
python network.py

📊 Training

  • To train our model from scratch on GRAY-GOPRO:
python train_GoPro.py --rgb False --save_path Model/RE_Net_GRAY.pth
  • To train our model from scratch on RGB-GOPRO:
python train_GoPro.py --rgb True --save_path Model/RE_Net_RGB.pth

📞 Contact

Should you have any questions, please feel free to contact [email protected] or [email protected].

🤝 Citation

If you find our work useful in your research, please cite:

@article{chen2024motion,
  title={Motion Deblur by Learning Residual from Events},
  author={Chen, Kang and Yu, Lei},
  journal={IEEE Transactions on Multimedia},
  year={2024},
  publisher={IEEE} 
}

🙇‍ Acknowledgment

Our event representation (SCER) code and REBlur dataset are derived from EFNet. Some of the code for metric testing and module construction is from E-CIR. We appreciate the effort of the contributors to these repositories.
