By Peng Tang, Xinggang Wang, Xiang Bai, and Wenyu Liu.
The code to train and evaluate OICR using PyTorch as the backend is available here. Thanks, Vadim!
We have released the Caffe code of our PCL work here and the PyTorch code here. PCL is an extension of OICR and obtains better performance than OICR!
Online Instance Classifier Refinement (OICR) is a framework for weakly supervised object detection with deep ConvNets.
- It achieves state-of-the-art performance on weakly supervised object detection (PASCAL VOC 2007 and 2012).
- Our code is written in C++ and Python, based on Caffe, Fast R-CNN, and Faster R-CNN.
The paper has been accepted by CVPR 2017. For more details, please refer to our paper.
Method | VOC2007 test mAP (%) | VOC2007 trainval CorLoc (%) | VOC2012 test mAP (%) | VOC2012 trainval CorLoc (%) |
---|---|---|---|---|
OICR-VGG_M | 37.9 | 57.3 | 34.6 | 60.7 |
OICR-VGG16 | 41.2 | 60.6 | 37.9 | 62.1 |
OICR-Ens. | 42.0 | 61.2 | 38.2 | 63.5 |
OICR-Ens.+FRCNN | 47.0 | 64.3 | 42.5 | 65.6 |
OICR is released under the MIT License (refer to the LICENSE file for details).
If you find OICR useful in your research, please consider citing:
@inproceedings{tang2017multiple,
Author = {Tang, Peng and Wang, Xinggang and Bai, Xiang and Liu, Wenyu},
Title = {Multiple Instance Detection Network with Online Instance Classifier Refinement},
Booktitle = {CVPR},
Year = {2017}
}
- Requirements: software
- Requirements: hardware
- Basic installation
- Installation for training and testing
- Extra Downloads (selective search)
- Extra Downloads (ImageNet models)
- Usage
- Requirements for Caffe and pycaffe (see: Caffe installation instructions)
Note: Caffe must be built with support for Python layers!
# In your Makefile.config, make sure to have this line uncommented
WITH_PYTHON_LAYER := 1
- Python packages you might not have: cython, python-opencv, easydict (a pip install sketch follows the hardware requirement below)
- MATLAB
- NVIDIA GTX TITAN X (~12G of memory)
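A minimal sketch of installing the missing Python packages with pip; the exact package names (e.g. opencv-python from pip vs. a system python-opencv package) are assumptions and may differ on your platform:
# Hypothetical install commands; adjust package names to your environment
pip install cython easydict
pip install opencv-python  # or: sudo apt-get install python-opencv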
- Clone the OICR repository
# Make sure to clone with --recursive
git clone --recursive https://github.com/ppengtang/oicr.git
- Build the Cython modules
cd $OICR_ROOT/lib
make
- Build Caffe and pycaffe
cd $OICR_ROOT/caffe-oicr
# Now follow the Caffe installation instructions here:
# http://caffe.berkeleyvision.org/installation.html
# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make all -j 8
make pycaffe
- Download the training, validation, test data and VOCdevkit
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
- Extract all of these tars into one directory named VOCdevkit
tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_18-May-2011.tar
- It should have this basic structure
$VOCdevkit/ # development kit
$VOCdevkit/VOCcode/ # VOC utility code
$VOCdevkit/VOC2007 # image sets, annotations, etc.
# ... and several other directories ...
- Create symlinks for the PASCAL VOC dataset
cd $OICR_ROOT/data
ln -s $VOCdevkit VOCdevkit2007
Using symlinks is a good idea because you will likely want to share the same PASCAL dataset installation between multiple projects.
- [Optional] follow similar steps to get PASCAL VOC 2012.
- You should put the generated proposal data under the folder $OICR_ROOT/data/selective_search_data, with the names "voc_2007_trainval.mat" and "voc_2007_test.mat", following the same form as fast-rcnn.
- The pre-trained models are all available in the Caffe Model Zoo. You should put them under the folder $OICR_ROOT/data/imagenet_models, following the same form as fast-rcnn.
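For reference, a sketch of the expected layout after placing these downloads; the .mat names come from the instructions above, while the model filenames are only examples (use whatever matches $VGG16_model_name in the training command below):
$OICR_ROOT/data/
    selective_search_data/
        voc_2007_trainval.mat
        voc_2007_test.mat
    imagenet_models/
        VGG16.v2.caffemodel            # example name, an assumption
        VGG_CNN_M_1024.v2.caffemodel   # example name, an assumption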
Pre-computed selective search boxes can also be downloaded for VOC2007 and VOC2012.
cd $OICR_ROOT
./data/scripts/fetch_selective_search_data.sh
This will populate the $OICR_ROOT/data folder with selective_search_data. (The script is copied from fast-rcnn.)
Pre-trained ImageNet models can be downloaded.
cd $OICR_ROOT
./data/scripts/fetch_imagenet_models.sh
These models are all available in the Caffe Model Zoo, but are provided here for your convenience. (The script is copied from fast-rcnn.)
Train an OICR network. For example, train a VGG16 network on VOC 2007 trainval:
./tools/train_net.py --gpu 1 --solver models/VGG16/solver.prototxt \
--weights data/imagenet_models/$VGG16_model_name --iters 70000
Test an OICR network. For example, test the VGG16 network on VOC 2007 trainval (for CorLoc) and VOC 2007 test (for mAP):
./tools/test_net.py --gpu 1 --def models/VGG16/test.prototxt \
--net output/default/voc_2007_trainval/vgg16_oicr_iter_70000.caffemodel \
--imdb voc_2007_trainval
./tools/test_net.py --gpu 1 --def models/VGG16/test.prototxt \
--net output/default/voc_2007_trainval/vgg16_oicr_iter_70000.caffemodel \
--imdb voc_2007_test
Test output is written underneath $OICR_ROOT/output.
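As an illustration only, based on the snapshot path used in the test commands above, the output tree looks roughly like this; the exact subdirectories for the detection files depend on the config and imdb and are assumptions here:
$OICR_ROOT/output/
    default/
        voc_2007_trainval/
            vgg16_oicr_iter_70000.caffemodel   # training snapshots
            ...                                # detections on trainval (used for CorLoc)
        voc_2007_test/
            ...                                # detections on test (used for mAP)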
For mAP, run the Python script tools/reval.py:
./tools/reval.py $output_dir --imdb voc_2007_test --matlab
For CorLoc, run the Python script tools/reval_discovery.py:
./tools/reval_discovery.py $output_dir --imdb voc_2007_trainval
The code for training Fast R-CNN on the pseudo ground truths is available here.