This repository contains code to detect (track) landmarks.
Landmark detection is usually used for facial tracking. In this project, we apply the same concept to detect the landmarks we are interested in and use them to calculate the object pose.
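The pose-from-landmarks idea can be illustrated with a minimal, self-contained sketch (this is not the repository's actual code, and all names below are illustrative): given 3D model landmarks and their matched observed positions, the Kabsch algorithm recovers the rotation. For simplicity this uses 3D–3D correspondences; with 2D image landmarks one would typically use a PnP solver such as OpenCV's `cv2.solvePnP` instead.

```python
import numpy as np

def kabsch(model_pts, observed_pts):
    """Estimate the rotation aligning model_pts to observed_pts.

    Both inputs are (N, 3) arrays of corresponding 3D landmark coordinates.
    """
    # Center both point sets on their centroids
    p = model_pts - model_pts.mean(axis=0)
    q = observed_pts - observed_pts.mean(axis=0)
    # Cross-covariance matrix and its SVD
    h = p.T @ q
    u, _, vt = np.linalg.svd(h)
    # Correct for a possible reflection so the result is a proper rotation
    d = np.sign(np.linalg.det(vt.T @ u.T))
    s = np.diag([1.0, 1.0, d])
    return vt.T @ s @ u.T

# Rotate a toy "object" by 90 degrees about z and recover the rotation
rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
model = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
observed = model @ rz.T
r_est = kabsch(model, observed)
print(np.allclose(r_est, rz))  # → True
```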
- Download the model weights `checkpoint.pth` and place the file in `/logs`.
- Install the following libraries:
  ```
  ffmpeg-python>=0.2.0
  matplotlib>=3.0.2
  munkres>=1.1.2
  numpy>=1.16
  opencv-python>=3.4
  Pillow>=5.4
  vidgear>=0.1.4
  torch>=1.2.0
  torchvision>=0.4.0
  tqdm>=4.26
  tensorboard>=1.11
  tensorboardX>=1.4
  ```
- Download the dataset (only needed for training) and place the files in `/datasets/monitor`.
Get YOLOv3:

- Clone YOLOv3 into the folder `./models/detectors` and rename the folder from `PyTorch-YOLOv3` to `yolo`.
- Install YOLOv3's required packages with `pip install -r requirements.txt` (run from the folder `./models/detectors/yolo`).
- Download the pre-trained weights by running the script `download_weights.sh` from the `weights` folder.
- Run the live demo: `python scripts/live-demo.py --camera_id 0`
- Train on COCO: `python scripts/train_coco.py` (for help: `python scripts/train_coco.py --help`)
- Your folders should look like:
  ```
  simple-HRNet
  ├── datasets          (datasets - for training only)
  │   └── COCO          (COCO dataset)
  ├── losses            (loss functions)
  ├── misc              (misc)
  │   └── nms           (CUDA nms module - for training only)
  ├── models            (pytorch models)
  │   └── detectors     (people detectors)
  │       └── yolo      (PyTorch-YOLOv3 repository)
  │           ├── ...
  │           └── weights  (YOLOv3 weights)
  ├── scripts           (scripts)
  ├── testing           (testing code)
  ├── training          (training code)
  └── weights           (HRNet weights)
  ```
If you want to run the training script on COCO (`scripts/train_coco.py`), you have to build the `nms` module first.
Please note that a Linux machine with CUDA is currently required.
Build it with either:
`cd misc; make`
or
`cd misc/nms; python setup_linux.py build_ext --inplace`
The dataset is currently too small (only 266 images), and we need help labeling more. You can use labelme to annotate images and a tool to pre-process the annotations.
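The pre-processing step (labelme point annotations → COCO-style keypoints) might be sketched as follows. The keypoint names and their ordering below are assumptions for illustration, not this project's actual schema; labelme stores point annotations as `shapes` entries with a `label`, a single `points` pair, and `shape_type` set to `"point"`.

```python
# Assumed landmark order for the monitor dataset -- adjust to your own labels.
KEYPOINT_NAMES = ["top_left", "top_right", "bottom_right", "bottom_left"]

def labelme_to_coco_keypoints(labelme_json):
    """Convert one labelme annotation dict to a flat COCO keypoint list.

    COCO stores keypoints as [x1, y1, v1, x2, y2, v2, ...] where v=2 means
    labeled and visible, and v=0 means not labeled.
    """
    points = {}
    for shape in labelme_json.get("shapes", []):
        if shape.get("shape_type") == "point":
            (x, y), = shape["points"]
            points[shape["label"]] = (x, y)
    keypoints = []
    for name in KEYPOINT_NAMES:
        if name in points:
            x, y = points[name]
            keypoints.extend([x, y, 2])
        else:
            keypoints.extend([0, 0, 0])
    return keypoints

# A dict as produced by json.load on a labelme annotation file
example = {
    "shapes": [
        {"label": "top_left", "points": [[10.0, 20.0]], "shape_type": "point"},
        {"label": "bottom_right", "points": [[110.0, 90.0]], "shape_type": "point"},
    ]
}
print(labelme_to_coco_keypoints(example))
# [10.0, 20.0, 2, 0, 0, 0, 110.0, 90.0, 2, 0, 0, 0]
```

Missing landmarks are emitted as `0, 0, 0`, which COCO-style training code treats as "not labeled".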
Our code builds upon SimpleHRNet.