OpenGait is a flexible and extensible gait-analysis project developed by the Shiqi Yu Group and supported in part by WATRIX.AI. The corresponding paper was accepted to CVPR 2023 as a highlight paper.
- [Dec 2024] The multimodal MultiGait++ has been accepted to AAAI2025🎉 Congratulations to Dongyang! This is his FIRST paper!
- [Jun 2024] The first large-scale gait-based scoliosis screening benchmark, ScoNet, has been accepted to MICCAI 2024🎉 Congratulations to Zirui! This is his FIRST paper! The code is released here; see the project homepage for details.
- [May 2024] The code of the Large Vision Model-based method BigGait is available here, along with its CCPG checkpoints.
- [Apr 2024] Our team's latest checkpoints for projects such as DeepGaitv2, SkeletonGait, SkeletonGait++, and SwinGait will be released on Hugging Face. Previously released checkpoints will also be gradually made available there.
- [Mar 2024] Chao gave a talk about 'Progress in Gait Recognition'. The video and slides are both available😊
- [Mar 2024] The code of SkeletonGait++ is released here; see the README for details.
- [Mar 2024] BigGait has been accepted to CVPR 2024🎉 Congratulations to Dingqiang! This is his FIRST paper!
- [Jan 2024] The code of the transformer-based SwinGait is available here.
- [TBIOM'24] A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets, and Challenges, Survey Paper.
- [AAAI'25] Exploring More from Multiple Gait Modalities for Human Identification, Paper and MultiGait++ Code (Coming soon).
- [MICCAI'24] Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis, Paper, Dataset, and ScoNet Code.
- [CVPR'24] BigGait: Learning Gait Representation You Want by Large Vision Models. Paper, and BigGait Code.
- [AAAI'24] SkeletonGait++: Gait Recognition Using Skeleton Maps. Paper, and SkeletonGait++ Code.
- [AAAI'24] Cross-Covariate Gait Recognition: A Benchmark. Paper, CCGR Dataset, and ParsingGait Code.
- [Arxiv'23] Exploring Deep Models for Practical Gait Recognition. Paper, DeepGaitV2 Code, and SwinGait Code.
- [PAMI'23] Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark, Paper, GaitLU-1M Dataset, and GaitSSB Code.
- [CVPR'23] LidarGait: Benchmarking 3D Gait Recognition with Point Clouds, Paper, SUSTech1K Dataset and LidarGait Code.
- [CVPR'23] OpenGait: Revisiting Gait Recognition Toward Better Practicality, Highlight Paper, and GaitBase Code.
- [ECCV'22] GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality, Paper, and GaitEdge Code.
The workflow of All-in-One-Gait involves pedestrian tracking, segmentation, and recognition. See here for details.
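To make the three-stage workflow concrete, here is a minimal, purely illustrative sketch of how tracking, segmentation, and recognition chain together. All function names and the toy "embedding" are placeholders invented for this example, not All-in-One-Gait's actual API; the real pipeline uses detection-based tracking, a segmentation network, and a gait model such as GaitBase.

```python
# Hypothetical sketch of the All-in-One-Gait stages; every name here is a
# placeholder, not OpenGait's real interface.

def track(frames):
    # Stage 1: pedestrian tracking — group detections into per-person tracklets.
    # Each "frame" is a dict {person_id: bbox}; real trackers do detection
    # plus cross-frame association instead of trusting given IDs.
    tracklets = {}
    for t, frame in enumerate(frames):
        for pid, bbox in frame.items():
            tracklets.setdefault(pid, []).append((t, bbox))
    return tracklets

def segment(tracklet):
    # Stage 2: segmentation — crop each bbox and extract a silhouette.
    # Placeholder: pass the bboxes through as stand-ins for silhouette masks.
    return [bbox for _, bbox in tracklet]

def recognize(silhouettes, gallery):
    # Stage 3: recognition — embed the silhouette sequence and match it
    # against a gallery. Toy embedding: mean bbox height; a real system
    # uses a learned gait representation.
    probe = sum(h for (_, _, _, h) in silhouettes) / len(silhouettes)
    return min(gallery, key=lambda name: abs(gallery[name] - probe))

# Two frames of one tracked person, bboxes as (x, y, w, h).
frames = [{1: (0, 0, 40, 120)}, {1: (2, 0, 40, 122)}]
tracklets = track(frames)
sils = segment(tracklets[1])
match = recognize(sils, {"subject_A": 121.0, "subject_B": 80.0})
print(match)  # → subject_A
```

The point of the sketch is only the data flow: raw frames → per-person tracklets → silhouette sequences → gallery matching.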
- Multiple datasets supported: CASIA-B, OUMVLP, SUSTech1K, HID, GREW, Gait3D, CCPG, CASIA-E, and GaitLU-1M.
- Multiple models supported: We have reproduced several SOTA methods and reached the same or even better performance.
- DDP Support: The officially recommended Distributed Data Parallel (DDP) mode is used during both the training and testing phases.
- AMP Support: The Auto Mixed Precision (AMP) option is available.
- Nice log: We use tensorboard and logging to log everything, which looks pretty.
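As a rough illustration of what a DDP launch looks like, the command below shows the typical torchrun pattern for a multi-GPU PyTorch project. The entry-point path and config name are assumptions for illustration; consult 0.get_started.md for the exact command and flags OpenGait expects.

```shell
# Hypothetical launch sketch — verify the script path and flags in
# 0.get_started.md before use.
# DDP: torchrun spawns one process per GPU (2 GPUs here); options such as
# AMP are controlled through the yaml config.
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 \
    opengait/main.py --cfgs ./configs/gaitbase/gaitbase.yaml --phase train
```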
Please see 0.get_started.md. We also provide the following tutorials for your reference:
✨✨✨You can find all the checkpoint files at ✨✨✨!
The result list of appearance-based gait recognition is available here.
The result list of pose-based gait recognition is available here.
- Chao Fan (樊超), [email protected]
- Chuanfu Shen (沈川福), [email protected]
- Junhao Liang (梁峻豪), [email protected]
Now OpenGait is mainly maintained by Dongyang Jin (金冬阳), [email protected]
- GLN: Saihui Hou (侯赛辉)
- GaitGL: Beibei Lin (林贝贝)
- GREW: GREW TEAM
- FastPoseGait: FastPoseGait Team
- Gait3D: Gait3D Team
@InProceedings{Fan_2023_CVPR,
author = {Fan, Chao and Liang, Junhao and Shen, Chuanfu and Hou, Saihui and Huang, Yongzhen and Yu, Shiqi},
title = {OpenGait: Revisiting Gait Recognition Towards Better Practicality},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {9707-9716}
}
Note: This code is for academic purposes only and may not be used for anything that might be considered commercial use.