This repository integrates various spike-based image reconstruction methods, aiming to help compare and visualize previous approaches on the standard REDS dataset, the real-world spike dataset, or a single spike sequence.
- My goal is to develop a robust spike-based image reconstruction framework similar to nerfstudio. If you're passionate about contributing to this project, feel free to contact me through [email protected] 😊🎉🎉🎉
- 24-08-26: We add support for the SpikeFormer [7] and RSIR [8] methods, the UHSR [9] dataset, and the `piqe` no-reference metric.
- 24-07-19: We release the Spike-Zoo base code.
- Support more spike-based image reconstruction methods (CVPR 2024 Zhao et al., TCSVT 2023 Zhao et al.).
- Support more datasets (CVPR 2024 Zhao et al., TCSVT 2023 Zhao et al.).
- Support more metrics (more no-reference metrics).
- Support more evaluation tools (e.g., evaluating no-reference metrics on `data.dat`).
In this repository, we currently support the following methods: TFP [1], TFI [1], TFSTP [2], Spk2ImgNet [3], SSML [4], and WGSE [5], which take 41 spike frames as input and reconstruct one sharp image. We also support SpikeFormer [7] (65 spike frames as input) and RSIR [8], which is designed for low-light conditions (160 spike frames as input).
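For intuition, below is a minimal NumPy sketch of the two classical statistics-based methods, TFP and TFI, as we read them from [1]; the implementations shipped in this repository come from SpikeCV and may differ in windowing and normalization details.

```python
# Minimal sketch of TFP/TFI from [1]; details may differ from the
# SpikeCV implementations used in this repository.
from typing import Optional
import numpy as np

def tfp(spikes: np.ndarray) -> np.ndarray:
    """Texture From Playback: mean firing rate over the window.
    spikes: binary array of shape (T, H, W); returns an image in [0, 1]."""
    return spikes.mean(axis=0)

def tfi(spikes: np.ndarray, mid: Optional[int] = None) -> np.ndarray:
    """Texture From Inter-spike interval: intensity is inversely
    proportional to the spike interval bracketing the reference frame."""
    T, H, W = spikes.shape
    mid = T // 2 if mid is None else mid
    img = np.zeros((H, W), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            t = np.nonzero(spikes[:, y, x])[0]   # spike timestamps
            prev, nxt = t[t <= mid], t[t > mid]
            if len(prev) and len(nxt):           # interval around mid frame
                img[y, x] = 1.0 / (nxt[0] - prev[-1])
            elif len(t) >= 2:                    # fall back to mean ISI
                img[y, x] = 1.0 / np.diff(t).mean()
    return img / max(img.max(), 1e-8)            # normalize to [0, 1]
```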
For the REDS dataset, we currently provide common full-reference metrics, including PSNR, SSIM, and LPIPS, as well as no-reference metrics such as NIQE, PIQE, BRISQUE, LIQE_MIX, and CLIPIQA. For the real-world spike dataset, we only offer no-reference metrics.
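As a reference point, the sketch below shows how such metrics can be computed with the IQA-PyTorch package (`pyiqa`), which our metric implementations are based on; the dummy tensors are illustrative only.

```python
# Sketch of metric computation with IQA-PyTorch (pip install pyiqa);
# the repository's evaluation code may wrap this differently.
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Full-reference metrics need (prediction, ground truth);
# no-reference metrics score the prediction alone.
full_ref = {m: pyiqa.create_metric(m, device=device)
            for m in ("psnr", "ssim", "lpips")}
no_ref = {m: pyiqa.create_metric(m, device=device)
          for m in ("niqe", "piqe", "brisque", "liqe_mix", "clipiqa")}

pred = torch.rand(1, 3, 250, 400, device=device)  # stand-in reconstruction in [0, 1]
gt = torch.rand(1, 3, 250, 400, device=device)    # stand-in ground truth

for name, metric in full_ref.items():
    print(name, metric(pred, gt).item())
for name, metric in no_ref.items():
    print(name, metric(pred).item())
```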
Our Python version is 3.9.19. Run the following commands to set up the environment:
```bash
git clone git@github.com:chenkang455/Spike-Zoo.git
cd Spike-Zoo
pip install -r requirements.txt
```
Most methods in this repository are trained on the REDS dataset [3]. The train and test parts can be downloaded via the provided links.

The real-world spike dataset recVidarReal2019 [6] is available for download here.

The UHSR real-world spike dataset [9] with class labels is available for download here.

After downloading, place them under the `Data` folder and rename the train and test parts of the REDS dataset to `train` and `test`, respectively. The project should then be organized as follows:
```
<project root>
├── compare_zoo
├── Data
│   ├── REDS
│   │   ├── train
│   │   │   ├── gt
│   │   │   └── spike
│   │   └── test
│   │       ├── gt
│   │       └── spike
│   ├── recVidarReal2019
│   ├── U-CALTECH
│   ├── U-CIFAR
│   └── data.dat
└── compare.py
```
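For reference, here is a minimal sketch of how a raw spike `.dat` file is commonly decoded, assuming the usual Vidar camera layout of 250×400 pixels with eight pixels packed per byte; check the repository's own loader for the exact bit order and whether a vertical flip is needed.

```python
import numpy as np

def load_vidar_dat(path: str, height: int = 250, width: int = 400) -> np.ndarray:
    """Decode a bit-packed spike stream into a (T, H, W) binary array."""
    raw = np.fromfile(path, dtype=np.uint8)
    frame_bytes = height * width // 8             # 8 pixels per byte
    n_frames = raw.size // frame_bytes
    raw = raw[: n_frames * frame_bytes]           # drop any trailing bytes
    bits = np.unpackbits(raw, bitorder="little")  # bit order: an assumption
    spikes = bits.reshape(n_frames, height, width)
    return np.flip(spikes, axis=1)                # many loaders flip vertically

spikes = load_vidar_dat("Data/data.dat")
print(spikes.shape, spikes.mean())                # (T, 250, 400), firing rate
```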
This repository offers three usages:

- Quantitatively measure the performance metrics of various methods on the datasets.
- Quantitatively measure the parameter sizes, FLOPs, and latency of various methods.
- Visualize image reconstruction results directly from a given input spike sequence such as `Data/data.dat`.

We use three command-line flags, `--test_params`, `--test_metric`, and `--test_imgs`, to select the usage mode. The remaining arguments are:
- `methods`: Specifies the methods to evaluate. Default: `Spk2ImgNet,WGSE,SSML,TFP,TFI,TFSTP,RSIR,SpikeFormer`.
- `metrics`: Specifies the metrics to compute. Default: `psnr,ssim,lpips,niqe,brisque,liqe_mix,clipiqa`.
- `save_name`: Specifies where the output log file is saved. Default: `logs/result.log`.
- `spike_path`: Specifies the location of the spike sequence for visualization. Default: `Data/data.dat`.
- `cls`: Specifies the input data type, i.e., `REDS`, the real-world dataset (`Real`), `UHSR`, or a single spike sequence (`spike`). Default: `spike`.
❗ The execution of `TFSTP` is notably slow. Feel free to omit this method from testing if you prefer to speed up the process.
```bash
CUDA_VISIBLE_DEVICES=0 python compare.py \
    --test_params \
    --save_name logs/params.log \
    --methods Spk2ImgNet,WGSE,SSML,TFP,TFI,TFSTP,RSIR,SpikeFormer
```
We provide pre-calculated results in `logs/params.log`. Note that latency may vary slightly between runs; our results were measured on a single NVIDIA RTX 4090 GPU.
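For context, parameter counts and latency are typically measured along the following lines in PyTorch (FLOPs usually come from an external counter such as `thop`); this is a sketch, not the repository's exact measurement code.

```python
import time
import torch

def count_params(model: torch.nn.Module) -> int:
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def measure_latency(model, dummy_input, warmup=10, runs=100):
    """Average forward-pass time in seconds on the current GPU."""
    model.eval()
    for _ in range(warmup):          # warm up clocks and caches
        model(dummy_input)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(runs):
        model(dummy_input)
    torch.cuda.synchronize()         # wait for queued kernels to finish
    return (time.time() - start) / runs
```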
```bash
CUDA_VISIBLE_DEVICES=0 python compare.py \
    --test_metric \
    --save_name logs/reds_metric.log \
    --methods Spk2ImgNet,WGSE,SSML,TFP,TFI,TFSTP,RSIR,SpikeFormer \
    --cls REDS \
    --metrics psnr,ssim,lpips,niqe,brisque,liqe_mix,clipiqa
```
We provide pre-calculated results in `logs/reds_metric.log`.
```bash
CUDA_VISIBLE_DEVICES=0 python compare.py \
    --test_metric \
    --save_name logs/real_metric.log \
    --methods Spk2ImgNet,WGSE,SSML,TFP,TFI,TFSTP,RSIR,SpikeFormer \
    --cls Real \
    --metrics niqe,brisque,liqe_mix,clipiqa
```
We provide pre-calculated results in `logs/real_metric.log`.
```bash
CUDA_VISIBLE_DEVICES=1 python compare.py \
    --test_imgs \
    --methods Spk2ImgNet,WGSE,SSML,TFP,TFI,TFSTP,RSIR,SpikeFormer \
    --cls spike \
    --spike_path Data/car-100kmh.dat
```
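Conceptually, this visualization mode boils down to something like the sketch below; `load_vidar_dat` and `model` are hypothetical stand-ins for the repository's loader and one of the supported networks, and the output path is illustrative.

```python
import numpy as np
import torch
import cv2

# Hypothetical stand-ins: a .dat loader and a pretrained network.
spikes = load_vidar_dat("Data/car-100kmh.dat")       # (T, H, W) binary
window = torch.from_numpy(
    spikes[:41].astype(np.float32)                   # 41-frame input window
)[None]                                              # -> (1, 41, H, W)

with torch.no_grad():
    recon = model(window.cuda())                     # (1, 1, H, W) in [0, 1]

img = (recon.squeeze().clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
cv2.imwrite("car-100kmh_recon.png", img)             # illustrative path
```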
We offer a quantitative comparison table for our new work SPCS-Net against previous methods.

⭐ SPCS-Net is a faster, more efficient, and more lightweight network for spike-based image reconstruction!
Should you have any questions, please feel free to contact [email protected].
Implementations of TFP, TFI, and TFSTP are from SpikeCV. Other methods are implemented according to their official repositories. Implementations of the no-reference metrics are from IQA-PyTorch. We appreciate the efforts of the contributors to these repositories.
[1] Zhu, Lin, et al. "A retina-inspired sampling method for visual texture reconstruction." 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019.
[2] Zheng, Yajing, et al. "Capture the moment: High-speed imaging with spiking cameras through short-term plasticity." IEEE Transactions on Pattern Analysis and Machine Intelligence 45.7 (2023): 8127-8142.
[3] Zhao, Jing, et al. "Spk2ImgNet: Learning to reconstruct dynamic scene from continuous spike stream." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[4] Chen, Shiyan, et al. "Self-Supervised Mutual Learning for Dynamic Scene Reconstruction of Spiking Camera." IJCAI. 2022.
[5] Zhang, Jiyuan, et al. "Learning temporal-ordered representation for spike streams based on discrete wavelet transforms." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 1. 2023.
[6] Zhu, Lin, et al. "Retina-like visual image reconstruction via spiking neural model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[7] She, Chen, and Laiyun Qing. "SpikeFormer: Image Reconstruction from the Sequence of Spike Camera Based on Transformer." Proceedings of the 2022 5th International Conference on Image and Graphics Processing. 2022.
[8] Zhu, Lin, et al. "Recurrent spike-based image restoration under general illumination." Proceedings of the 31st ACM International Conference on Multimedia. 2023.
[9] Zhao, Jing, et al. "Recognizing Ultra-High-Speed Moving Objects with Bio-Inspired Spike Camera." Proceedings of the AAAI Conference on Artificial Intelligence. 2024.