By Peng Tang, Xinggang Wang, Zilong Huang, Xiang Bai, and Wenyu Liu.
Deep Patch Learning (DPL) is a fast framework for object classification and discovery with deep ConvNets.
- It achieves state-of-the-art performance on object classification (Pascal VOC 2007 and 2012), and very competitive results on object discovery.
- Our code is written in C++ and Python, based on Caffe and fast-rcnn.
The paper has been accepted by Pattern Recognition. For more details, please refer to our paper (also available on arXiv).
If you are working on weakly supervised object detection (or object discovery), you may also want to look at our recent CVPR 2017 work OICR.
Method | VOC2007 test mAP (classification) | VOC2007 trainval CorLoc (discovery) | VOC2012 test mAP (classification) | VOC2012 trainval CorLoc (discovery)
---|---|---|---|---
DPL-AlexNet | 85.3 | 43.5 | 84.4 | 48.7
DPL-VGG16 | 92.7 | 45.4 | 92.5 | 51.0
DPL is released under the MIT License (refer to the LICENSE file for details).
If you find DPL useful in your research, please consider citing:
@article{tang2017deep,
author = {Tang, Peng and Wang, Xinggang and Huang, Zilong and Bai, Xiang and Liu, Wenyu},
title = {Deep Patch Learning for Weakly Supervised Object Classification and Discovery},
journal = {Pattern Recognition},
volume = {},
pages = {},
year = {2017}
}
- Requirements: software
- Requirements: hardware
- Basic installation
- Installation for training and testing
- Extra Downloads (selective search)
- Extra Downloads (ImageNet models)
- Usage
- Trained models
- Requirements for `Caffe` and `pycaffe` (see: Caffe installation instructions)
Note: Caffe must be built with support for Python layers!
# In your Makefile.config, make sure to have this line uncommented
WITH_PYTHON_LAYER := 1
- Python packages you might not have: `cython`, `python-opencv`, `easydict`
- MATLAB
- NVIDIA GTX TITAN X (~12GB of memory)
- Clone the DPL repository
# Make sure to clone with --recursive
git clone --recursive https://github.com/ppengtang/dpl.git
- Build the Cython modules
cd $DPL_ROOT/lib
make
- Build Caffe and pycaffe
cd $DPL_ROOT/caffe-dpl
# Now follow the Caffe installation instructions here:
#   http://caffe.berkeleyvision.org/installation.html
# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make all -j 8 && make pycaffe
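After the build finishes, a quick sanity check is to import pycaffe from Python. This is only a minimal sketch, assuming `caffe-dpl/python` has been added to your `PYTHONPATH` and a CUDA-capable GPU is visible:

```python
# Minimal sanity check, assuming caffe-dpl/python is on PYTHONPATH
# and a CUDA-capable GPU is available.
import caffe

caffe.set_mode_gpu()   # DPL training/testing assume a GPU build of Caffe
caffe.set_device(0)    # pick a GPU (the tools below select one via --gpu)
print('pycaffe loaded from:', caffe.__file__)
```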
- Download the training, validation, test data and VOCdevkit
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
- Extract all of these tars into one directory named VOCdevkit
tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_18-May-2011.tar
- It should have this basic structure
$VOCdevkit/                   # development kit
$VOCdevkit/VOCcode/           # VOC utility code
$VOCdevkit/VOC2007            # image sets, annotations, etc.
# ... and several other directories ...
- Create symlinks for the PASCAL VOC dataset
cd $DPL_ROOT/data
ln -s $VOCdevkit VOCdevkit2007
Using symlinks is a good idea because you will likely want to share the same PASCAL dataset installation between multiple projects.
- [Optional] Follow similar steps to get PASCAL VOC 2012.
- You should put the generated proposal data under the folder $DPL_ROOT/data/selective_search_data, with the names "voc_2007_trainval.mat" and "voc_2007_test.mat", in the same format as fast-rcnn uses.
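If you generate the proposals yourself, it may help to verify that each .mat file follows the fast-rcnn convention before training. The sketch below is only an illustration and assumes the usual fast-rcnn layout (a `boxes` cell array holding one N x 4 matrix of box coordinates per image); check the actual field names produced by your proposal generator.

```python
# Illustrative check of a selective search proposal file, assuming the
# fast-rcnn .mat layout with a 'boxes' cell array (one N x 4 matrix per
# image). Adjust the key names if your file is organized differently.
import scipy.io as sio

data = sio.loadmat('data/selective_search_data/voc_2007_trainval.mat')
print(data.keys())                                        # expect a 'boxes' entry
boxes = data['boxes'].ravel()
print('images with proposals:', len(boxes))
print('proposals for the first image:', boxes[0].shape)   # (N, 4)
```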
- The pre-trained models are all available in the Caffe Model Zoo. You should put them under the folder $DPL_ROOT/data/imagenet_models, in the same format as fast-rcnn uses.
Pre-computed selective search boxes can also be downloaded for VOC2007 and VOC2012.
cd $DPL_ROOT
./data/scripts/fetch_selective_search_data.sh
This will populate the $DPL_ROOT/data folder with selective_search_data. (The script is copied from fast-rcnn.)
Pre-trained ImageNet models can be downloaded.
cd $DPL_ROOT
./data/scripts/fetch_imagenet_models.sh
These models are all available in the Caffe Model Zoo, but are provided here for your convenience. (The script is copied from fast-rcnn.)
Train a DPL network. For example, train a VGG16 network on VOC 2007 trainval:
./tools/train_net.py --gpu 1 --solver models/VGG16/solver.prototxt \
--weights data/imagenet_models/$VGG16_model_name --iters 40000
Test a DPL network. For example, test the VGG16 network on VOC 2007 (the first command below evaluates classification on the test set; the second, with --task det, evaluates discovery on trainval):
./tools/test_net.py --gpu 1 --def models/VGG16/test_cls.prototxt \
--net output/default/voc_2007_trainval/vgg16_dpl_iter_40000.caffemodel
./tools/test_net.py --gpu 1 --def models/VGG16/test_det.prototxt \
--net output/default/voc_2007_trainval/vgg16_dpl_iter_40000.caffemodel \
--imdb voc_2007_trainval --task det
Test output is written underneath $DPL_ROOT/output.
To evaluate, put the result files under the folder $VOCdevkit/results/VOC2007/Main.
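The exact filenames written by tools/test_net.py depend on the task, so the following is only a rough sketch for copying VOC-style result files into the devkit; both the source directory and the comp* filename pattern are assumptions to adjust to what you actually find under $DPL_ROOT/output.

```python
# Rough sketch: copy VOC-style result files from the DPL output directory
# into the devkit results folder so the MATLAB evaluation scripts can find
# them. The source directory and the 'comp*' filename pattern are
# assumptions; check what tools/test_net.py writes and adjust accordingly.
import glob, os, shutil

dpl_output = 'output/default/voc_2007_test'                          # assumed location
voc_results = os.path.expandvars('$VOCdevkit/results/VOC2007/Main')

for path in glob.glob(os.path.join(dpl_output, 'comp*_*.txt')):
    shutil.copy(path, voc_results)
    print('copied', os.path.basename(path))
```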
For classification, run the MATLAB script eval_classification.m.
For discovery, run the MATLAB script eval_discovery.m.
The models trained on PASCAL VOC 2007 can be downloaded from here.
The models trained on PASCAL VOC 2012 can be downloaded from here.