This repository contains the code to reproduce experiments in the paper "A Rainbow in Deep Network Black Boxes".
Our code is designed to run on GPU using the PyTorch framework. Running our experiments requires the following packages: `numpy`, `scipy`, `torch`, `torchvision`, and `tqdm`.
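Assuming a standard Python environment, the dependencies can be installed with pip (the repository does not pin exact versions):

```shell
# Install the required packages; a CUDA-enabled torch build is needed for GPU runs
pip install numpy scipy torch torchvision tqdm
```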
The ImageNet dataset must be downloaded from http://www.image-net.org/challenges/LSVRC/2012/downloads. Then move the validation images into labeled subfolders, using the PyTorch shell script.
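For reference, the reorganization boils down to moving each validation image into a subfolder named after its class. The PyTorch `valprep.sh` script hardcodes the image-to-class mapping; the sketch below instead assumes a hypothetical `val_labels.txt` file with one `<image filename> <wnid>` pair per line:

```shell
# Minimal sketch: sort ImageNet validation images into class subfolders.
# Assumes val_labels.txt (hypothetical) maps each image to its WordNet ID.
cd val
while read -r img wnid; do
  mkdir -p "$wnid"     # create the class subfolder if it does not exist
  mv "$img" "$wnid/"   # move the image into its class folder
done < val_labels.txt
```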
CIFAR-10 is downloaded automatically by PyTorch.
We provide a self-contained tutorial (see `tutorial.ipynb`, which uses `tutorial_lib.py`) that reproduces some of the results in the paper in a simplified setting. We recommend starting there to become familiar with the concepts in the paper. This tutorial was given at the CCN 2023 conference; see the complete tutorial website.
All figures in the paper can be reproduced using `paper_figs.ipynb`. However, the notebook expects to find experiment folders with trained model checkpoints. These models can be trained by running `main_block.py` with appropriate arguments.
The commands that were used to train all models can be generated by running `python run_exps.py --print`. Omitting the `--print` argument will cause the script to execute the commands (sequentially) instead of printing them. Subsets of experiments can thus be run by commenting or uncommenting lines in `run_exps.py`. Line 8 of this file should be modified to provide the path to the ImageNet dataset.
Finally, `paper_figs.ipynb` also relies on some precomputed data in the `notebook_data` folder (classification accuracies after clipping, and activation covariances of ImageNet models). For convenience, we provide this precomputed data (the activation covariances must be downloaded from this link). The code to regenerate the clipping accuracy results can be found in `paper_figs.ipynb`.