Project 1
Team Members: Danny (Iou-Sheng) Chang, Juo-Tung Chen
- Build a deep convolutional neural network (DCNN), train it on the CIFAR-10 training set, and test it on the CIFAR-10 test set.
- Attack the DCNN using three perturbations of our choice and show how the performance is affected.
- Provide a defense for one of the noise models and show the performance improvement.
The CIFAR-10 dataset consists of 60000 32×32 color images in 10 classes, with 6000 images per class: 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly selected images from each class; the training batches contain the remaining images in random order, so an individual training batch may contain more images from one class than another, but together the training batches contain exactly 5000 images from each class.
Reference: Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009, https://www.cs.toronto.edu/~kriz/cifar.html.
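A minimal sketch of loading CIFAR-10, assuming PyTorch and torchvision (the report does not state which framework the team used); the batch size and the absence of per-channel normalization here are illustrative choices, not the team's settings.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ToTensor converts each 32x32 RGB image to a 3x32x32 float tensor in [0, 1];
# per-channel normalization is commonly added here, but the attack sketches
# later in this document assume raw [0, 1] pixels for simplicity.
transform = transforms.ToTensor()

# 50000 training images and 10000 test images across 10 classes
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)
```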
GoogLeNet
Reference: C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842v1, 2014.
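A minimal sketch of instantiating and training GoogLeNet for the 10 CIFAR-10 classes, assuming torchvision's implementation; the original architecture was designed for 224×224 inputs, and how the team adapted it to 32×32 images, whether the auxiliary classifiers were kept, and the optimizer settings shown here are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# num_classes=10 for CIFAR-10; aux_logits=False drops the two auxiliary
# classifiers from the original paper so the loss is a single term
# (whether the team kept them is not stated in the report).
model = models.googlenet(num_classes=10, aux_logits=False, init_weights=True).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(loader):
    """One pass over the training loader with standard cross-entropy."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```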
The three perturbations used to attack the trained network (sketched below):
- Fast Gradient Sign Method (FGSM)
- Noise Attack
- Semantic Attack
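Hedged sketches of the three perturbations, assuming PyTorch and images in [0, 1]. The FGSM update follows the standard formulation (x_adv = x + ε·sign(∇ₓ J(θ, x, y))); the exact noise and semantic perturbations the team chose are not specified in this document, so additive Gaussian noise and color negation are shown purely as illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps):
    """FGSM: perturb each pixel by eps in the direction of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def noise_attack(images, sigma):
    """Additive Gaussian noise (an illustrative choice of noise model)."""
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)

def semantic_attack(images):
    """Color negation, one common semantic perturbation (illustrative)."""
    return 1.0 - images
```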
The team selected defensive distillation as the defense against the FGSM attack.
Defensive distillation is a defense method against adversarial perturbations for deep neural networks, introduced by Papernot et al. in Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks.
Reference: Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv preprint arXiv:1511.04508v2, 2016.
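A minimal sketch of the distillation step described by Papernot et al., assuming PyTorch; the temperature value and training details here are illustrative, not the team's exact settings. The procedure trains a teacher network at temperature T on the hard labels, uses its softened predictions as soft labels to train a distilled network of the same architecture at the same temperature, and evaluates the distilled network at temperature 1.

```python
import torch
import torch.nn.functional as F

T = 20.0  # distillation temperature (illustrative value)

def soft_cross_entropy(logits, soft_targets, temperature):
    """Cross-entropy between softened student predictions and soft labels."""
    log_probs = F.log_softmax(logits / temperature, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()

def distillation_step(teacher, student, optimizer, images):
    """One training step of the distilled (student) network.

    Assumes the teacher was already trained at temperature T on hard labels.
    """
    # 1) Teacher produces soft labels at temperature T.
    with torch.no_grad():
        soft_labels = F.softmax(teacher(images) / T, dim=1)
    # 2) Student (same architecture) is trained on the soft labels, also at T.
    optimizer.zero_grad()
    loss = soft_cross_entropy(student(images), soft_labels, T)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time the distilled model is evaluated at temperature 1, i.e.
# raw logits are used directly: predictions = student(images).argmax(dim=1)
```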