Exploring ways to avoid the Vanishing Gradients effect in Deep Learning frameworks via Weight Initializers and Scaled Activation Functions
Exploring the well-known issue of "Vanishing Gradients" in Deep Learning and the influence that activation functions and weight initialization have on it. The experiments use different NN architectures (FFNN, CNN) of different sizes, different weight-initialization functions, and two datasets (mnist, cifar10).
Inspired by the paper "Regularization and Reparameterization Avoid Vanishing Gradients in Sigmoid-Type Networks" by Ven and Lederer [2021].
The goal of this project was to replicate the results of Ven and Lederer [2021] with slight architectural variations of the models, as an optional graduate project with a theoretical flavour. The focus is on analyzing how the models' gradients change as the experimental variables and setups are varied (initialization functions, activation functions, depth of the network, etc.).
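The core effect under study can be illustrated with a minimal NumPy sketch (this is illustrative only, not the project's code, which uses Keras): when a unit gradient is backpropagated through a stack of random sigmoid layers, each layer multiplies it by the sigmoid derivative, which is at most 1/4, so the gradient norm seen at the earliest layers shrinks roughly geometrically with depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradient_norms(depth, width=64, init_std=None):
    """Backpropagate a unit upstream gradient through `depth` random
    sigmoid layers and return the gradient norm seen at each layer
    (index 0 = layer closest to the input). Hypothetical helper,
    not part of the project's source."""
    if init_std is None:
        init_std = 1.0 / np.sqrt(width)  # Xavier-like scale (assumption)
    x = rng.normal(size=width)
    weights, pre_acts, h = [], [], x
    for _ in range(depth):  # forward pass, caching pre-activations
        W = rng.normal(scale=init_std, size=(width, width))
        z = W @ h
        weights.append(W)
        pre_acts.append(z)
        h = sigmoid(z)
    g, norms = np.ones(width), []
    for W, z in zip(reversed(weights), reversed(pre_acts)):  # backward pass
        g = g * sigmoid(z) * (1.0 - sigmoid(z))  # sigmoid' <= 1/4 shrinks g
        norms.append(np.linalg.norm(g))
        g = W.T @ g  # propagate through the linear map
    return norms[::-1]

norms = gradient_norms(depth=10)
# The earliest layer's gradient is orders of magnitude smaller than the last's.
print(norms[0] / norms[-1])
```

The notebooks measure the analogous per-layer gradient norms on real models while varying the initialization scale and activation function.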
- VanishingGradients_report.pdf contains the final report of the work, with an introduction to the problem, the goal of the project, and the main results.
- Notebooks/ contains four notebooks, one per combination of
- 2 experiment setups (large initialization parameters, deep network)
- 2 different datasets for training (mnist, cifar10)
- environment.yml lists the libraries needed to run the notebooks
Most of the experiments are shared across notebooks. For a clear, step-by-step view of the whole pipeline, consult mnist_LargeParameters.ipynb: it is thoroughly annotated with explanations and explicit code. To avoid verbosity, the other notebooks use the same functions imported from the /src/*.py source files and are therefore less descriptive.
Project implemented with TensorFlow Keras.
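As a hedged sketch of the kind of model the notebooks build (the function name and hyperparameters below are illustrative assumptions, not the project's actual code): a deep sigmoid FFNN for MNIST-shaped inputs whose kernel-initializer scale can be enlarged to reproduce the "large initialization parameters" setup.

```python
import tensorflow as tf
from tensorflow import keras

def build_sigmoid_ffnn(depth=5, width=64, init_stddev=0.05):
    """Illustrative FFNN for MNIST-shaped inputs; `init_stddev` controls
    the weight-initialization scale (large values saturate the sigmoids
    and make gradients vanish)."""
    init = keras.initializers.RandomNormal(stddev=init_stddev)
    layers = [keras.Input(shape=(28, 28)), keras.layers.Flatten()]
    for _ in range(depth):
        layers.append(keras.layers.Dense(width, activation="sigmoid",
                                         kernel_initializer=init))
    layers.append(keras.layers.Dense(10, activation="softmax"))
    return keras.Sequential(layers)

model = build_sigmoid_ffnn(init_stddev=5.0)  # "large parameters" variant
out = model(tf.zeros((1, 28, 28)))
print(out.shape)  # (1, 10)
```

Swapping the initializer, the activation, or the `depth` argument mirrors the experimental variables studied in the notebooks.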