Model Compression Papers!!
12. Adversarial Robustness vs. Model Compression, or Both?
30. HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision
Tuesday, October 29, 2019, 13:30–15:30, Oral 1.2A (Hall D1). Session chairs: Judy Hoffman (Facebook AI Research; Georgia Tech), Min Sun (National Tsing Hua Univ.)
4. 13:48 Searching for MobileNetV3
5. 13:53 Data-Free Quantization Through Weight Equalization and Bias Correction
7. 14:06 Knowledge Distillation via Route Constrained Optimization
9. 14:16 Similarity-Preserving Knowledge Distillation
53. 15:30 Universally Slimmable Networks and Improved Training Techniques
39. 10:30 Accelerate CNN via Recursive Bayesian Pruning
37. 10:30 On the Efficacy of Knowledge Distillation
43. 10:30 Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
45. 10:30 Proximal Mean-Field for Neural Network Quantization
49. 10:30 Bayesian Optimized 1-Bit CNNs
Sat 02 Nov (Half day AM) E5 Accelerating Computer Vision with Mixed Precision, Ming-Yu Liu (NVIDIA)
Mon 28 Oct (Full day) 308BC Neural Architects, Samuel Albanie
https://arxiv.org/pdf/1907.13268.pdf
https://arxiv.org/abs/1904.02689
[Workshops]
Mon 28 Oct (Half day AM) 307BC Lightweight Face Recognition Challenge, Jiankang Deng
Sat 02 Nov (Full day) 301 Deep Learning for Visual SLAM