
OCSampler

This repo is the implementation of OCSampler: Compressing Videos to One Clip with Single-step Sampling (CVPR 2022).

Dependencies

  • GPU: TITAN Xp
  • GCC: 5.4.0
  • Python: 3.6.13
  • PyTorch: 1.5.1+cu102
  • TorchVision: 0.6.1+cu102
  • MMCV: 1.5.3
  • MMAction2: 0.12.0

Installation

a. Create a conda virtual environment and activate it.

conda create -n open-mmlab python=3.6.13 -y
conda activate open-mmlab

b. Install PyTorch and TorchVision following the official instructions, e.g.,

conda install pytorch==1.5.1 torchvision==0.6.1 cudatoolkit=10.2 -c pytorch

Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.
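You can check which CUDA version your installed PyTorch build uses, and whether a GPU is visible, with the following one-liner (a quick sanity check; the exact output depends on your machine):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"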

c. Install MMCV.

pip install mmcv
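If you want to reproduce the tested environment exactly, you can instead pin MMCV to the version listed under Dependencies (an optional variant, not required by the steps above):

pip install mmcv==1.5.3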

d. Clone the OCSampler repository.

git clone https://github.com/MCG-NJU/OCSampler

e. Install build requirements and then install MMAction2.

pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
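At this point you can verify the installation with a one-liner (a minimal sanity check, assuming the editable install succeeded; it should print the versions listed under Dependencies):

python -c "import torch, mmcv, mmaction; print(torch.__version__, mmcv.__version__, mmaction.__version__)"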

Data Preparation

Please refer to the default MMAction2 dataset setup to set up the datasets correctly.

Specifically, for the ActivityNet dataset, we adopt the training annotation file with one label per video, since only 6 out of 10024 videos have more than one label, and those labels are similar. Because MMAction2 and FrameExit use different label mappings for ActivityNet, we provide two kinds of annotation files. You can find them in data/ActivityNet/ and configs/activitynet_*.py.

For Mini-Kinetics, please download Kinetics-400 and use the train/val split files from AR-Net.

Pretrained Models

The pretrained models are provided on Google Drive.
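The inference commands below look for checkpoints under a modelzoo/ directory in the repo root (e.g. modelzoo/anet_10to6_checkpoint.pth), so one option is to place the downloaded files there:

mkdir -p modelzoo  # then move the downloaded *.pth checkpoints into this directory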

Training

Here we take training OCSampler on the ActivityNet dataset as an example.

# bash tools/dist_train.sh {CONFIG_FILE} {GPUS} {--validate}
bash tools/dist_train.sh configs/activitynet_10to6_resnet50.py 8 --validate
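If you only have one GPU, MMAction2-based codebases usually also include a non-distributed entry point; assuming this repo keeps the standard tools/train.py from MMAction2, a single-GPU run would look like this (an untested sketch, not a command from the original instructions):

# hypothetical single-GPU variant, assuming MMAction2's standard tools/train.py is present
python tools/train.py configs/activitynet_10to6_resnet50.py --validate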

Note that we directly port the weights of the classification models provided by FrameExit.

Inference

Here we take evaluating OCSampler on the ActivityNet dataset as an example.

# bash tools/dist_test.sh {CONFIG_FILE} {CHECKPOINT} {GPUS} {--eval mean_average_precision / top_k_accuracy}
bash tools/dist_test.sh configs/activitynet_10to6_resnet50.py modelzoo/anet_10to6_checkpoint.pth 8 --eval mean_average_precision
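Similarly, for single-GPU evaluation, assuming MMAction2's standard tools/test.py is present, the non-distributed equivalent would be (again an untested sketch):

# hypothetical single-GPU variant, assuming MMAction2's standard tools/test.py is present
python tools/test.py configs/activitynet_10to6_resnet50.py modelzoo/anet_10to6_checkpoint.pth --eval mean_average_precision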

If you want to directly evaluate OCSampler with another classifier, you can add the again_load param to the config file and run, for example:

bash tools/dist_test.sh configs/activitynet_slowonly_inference_with_ocsampler.py modelzoo/anet_10to6_checkpoint.pth 8 --eval mean_average_precision

Citation

If you find OCSampler useful in your research, please cite us using the following entry:

@inproceedings{lin2022ocsampler,
  title={OCSampler: Compressing Videos to One Clip with Single-step Sampling},
  author={Lin, Jintao and Duan, Haodong and Chen, Kai and Lin, Dahua and Wang, Limin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13894--13903},
  year={2022}
}

Acknowledgements

In addition to the MMAction2 codebase, this repo contains modified code from:

  • FrameExit: for the implementation of its classifier.
