This repository is a fork of the original, modified to run on my computer (an M1 Mac, CPU only); the original repo relies on an Nvidia GPU.
- Replaced the base image with `tensorflow/tensorflow:2.15.0` in `Dockerfile.tensorflow`
- Commented out font-related lines in `Dockerfile.tensorflow`
- Commented out GPU run args in `devcontainer.json`
- Added the `joblib` dependency to the Pipfile and created the corresponding Pipfile.lock
- Made matplotlib use "Agg" instead of "TkAgg" in `plot_figures.py`, `plot_gradients.py`, and `utils.py` in the `idnns/plots/` directory. This means there is no longer any interactive element, but "Tk" was causing issues.
- In Docker Desktop, under General Settings, enabled "Use Rosetta for x86_64/amd64 emulation on Apple Silicon"
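The backend switch above can be sketched as follows. This is a minimal, self-contained illustration of forcing the non-interactive "Agg" backend (not the repo's actual plotting code); the key detail is that `matplotlib.use("Agg")` must run before `pyplot` is imported:

```python
import matplotlib
matplotlib.use("Agg")  # select the non-interactive backend before pyplot loads
import matplotlib.pyplot as plt

# With Agg there is no window; figures are rendered straight to files.
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("figure.jpg")
```

With "Agg" there is no `plt.show()` window, which is why the interactive elements are gone, but it avoids the Tk errors inside the container.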
- You must have Docker installed beforehand. In Docker Desktop, navigate to General Settings and enable "Use Rosetta for x86_64/amd64 emulation on Apple Silicon". This might bring up a prompt saying you need a system update, which then freezes while searching for the update. I ignored the frozen window and Rosetta was still enabled for me; hopefully it will work for you too!
- Open the project in VSCode
- Make sure the Dev Containers extension is installed (VSCode should automatically prompt you to install when it notices the devcontainer files).
- VSCode should automatically prompt you to "Reopen in Container" with a popup in the bottom right. You can also do this via the button in the bottom-left corner of the VSCode window marked with these symbols: `><`. Click it and select "Reopen in Container".
- If for any reason you ever need to Rebuild or access any other commands, navigate to the search bar at the top of the VSCode window and type ">" to search for a command. Click to execute.
Final terminal output with the newly generated `figure.jpg`
IDNNs is a Python library that implements training and calculation of information in deep neural networks [Shwartz-Ziv & Tishby, 2017] in TensorFlow. The library allows you to investigate how networks look on the information plane and how this changes during learning.
- tensorflow r1.0 or higher
- numpy 1.11.0
- matplotlib 2.0.2
- multiprocessing
- joblib
All the code is under the `idnns/` directory.
To train a network and calculate its MI and gradients, run the example in main.py.
Of course, you can also run only specific methods, e.g. only the training procedure or only the MI calculation.
This file takes the following command-line arguments:

- `start_samples` - The index of the first sample for calculating the information
- `batch_size` - The size of the batch
- `learning_rate` - The learning rate of the network
- `num_repeat` - The number of times to run the network
- `num_epochs` - The maximum number of epochs for training
- `net_arch` - The architecture of the network
- `per_data` - The percent of the training data
- `name` - The name for saving the results
- `data_name` - The dataset name
- `num_samples` - The max number of indexes for calculating the information
- `save_ws` - True if we want to save the outputs of the network
- `calc_information` - 1 if we want to calculate the MI of the network
- `save_grads` - True if we want to save the gradients of the network
- `run_in_parallel` - True if we want to run all the networks in parallel mode
- `num_of_bins` - The number of bins into which the neurons' outputs are divided
- `activation_function` - The activation function of the model: 0 for tanh, 1 for ReLU
- `interval_accuracy_display` - The interval for displaying accuracy
- `interval_information_display` - The interval for displaying the information calculation
- `cov_net` - True if we want a convnet
- `rand_labels` - True if we want to set random labels
- `data_dir` - The directory for finding the data

The results are saved under the folder `jobs`. Each run creates a directory whose name contains the run properties. This directory holds the `data.pickle` file with the run's data and a Python file that is a copy of the file that created the run. The data is under the `data` directory.
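As an unofficial sketch, arguments like the ones above could be declared with Python's `argparse`; the option names come from the list, but the exact flag syntax, types, and defaults in main.py may differ from what is shown here:

```python
import argparse

# Illustrative declaration of a few of the arguments listed above;
# the defaults are placeholders, not the repository's actual values.
parser = argparse.ArgumentParser(description="Train a network and compute its MI")
parser.add_argument("-batch_size", type=int, default=512)
parser.add_argument("-learning_rate", type=float, default=0.0004)
parser.add_argument("-num_epochs", type=int, default=8000)
parser.add_argument("-activation_function", type=int, default=0,
                    help="0 for tanh, 1 for ReLU")

# Parse an example command line; unspecified options keep their defaults.
args = parser.parse_args(["-batch_size", "256", "-activation_function", "1"])
print(args.batch_size, args.activation_function)  # 256 1
```

Any option not passed on the command line falls back to its default, which is why runs can be launched with only the flags you care about.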
For plotting the results, use plot_figures.py. This file contains methods for plotting different aspects of the data (the information plane, the gradients, the norms, etc.).
- Ravid Shwartz-Ziv and Naftali Tishby, Opening the Black Box of Deep Neural Networks via Information, 2017, arXiv.