CPSC 8810: Deep Learning

This repository contains the code base for the homework assignments of the CPSC 8810: Deep Learning course.

Author

Adhiti Raman
Ph.D. Candidate
Department of Automotive Engineering
Clemson University
Greenville, ICAR campus

Folder Structure

```
Homework_1:
    \Part_1:
        HW1_Part1_DNN_func.py
        HW1_Part1_CNN_MNIST.py
    \Part2:
        HW1_Part2_DNN_MNIST_PCA.py
        HW1_Part2_DNN_func_pnorm.py
        HW1_Part2_DNN_MNIST_pnorm.py
        HW1_Part2_DNN_func_gradientzero.py
    \Part3:
        HW1_Part3_DNN_MNIST_randomlabels.py
        HW1_Part3_No_of_params_vs_generalization.py
        HW1_Part3_DNN_MNIST_flattness_vs_generatlization_part1.py
        HW1_Part3_DNN_MNIST_flattness_vs_generatlization_part2.py
Homework_2:
    utilities.py
    S2VT_final_model.ipynb
    hw2_seq2seq.sh
    hw2_seq2seq_test.py
    output.txt
    LSTM_*
Homework_3:
    DCGAN_final.py
    WGAN_final.py
    DCGAN_final-differentmodel.py
    FID_IS_score_final.py
    \dcgan_final_pics
    \dcgan_final_pics_diffmodel1
    \wgan_final_pics
```

Code Breakdown

Homework 1

Part 1

  • HW1_Part1_DNN_func.py: Trains three DNN models on two target functions. The script must be run once per function, commenting out the other:

```python
#y1 = np.cos(x1*x1) + x1 + np.random.normal(0, 0.1, size=x1.shape)
y1 = np.sin(2*x1) + x1 + np.random.normal(0, 0.1, size=x1.shape)
```
  • HW1_Part1_CNN_MNIST.py: Trains two CNN models and one DNN model on the MNIST dataset and plots their loss and accuracy.
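The two target functions can be reproduced with a few lines of NumPy. This is a sketch of the data generation only (the sampling interval and point count are assumptions; the DNN training in HW1_Part1_DNN_func.py is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample inputs on a fixed interval (the exact range in the script may differ).
x1 = np.linspace(-2.0, 2.0, 500)

# Function 1: y = cos(x^2) + x + Gaussian noise
y_cos = np.cos(x1 * x1) + x1 + rng.normal(0, 0.1, size=x1.shape)

# Function 2: y = sin(2x) + x + Gaussian noise
y_sin = np.sin(2 * x1) + x1 + rng.normal(0, 0.1, size=x1.shape)
```

Training each model on both functions separately makes it easy to compare how the same architectures fit targets of different smoothness.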

Part 2

  • HW1_Part2_DNN_MNIST_PCA.py: Trains a DNN model on the MNIST dataset, collects the weights of the whole model and of the first layer alone every three epochs, and plots a PCA projection of them.

  • HW1_Part2_DNN_func_pnorm.py: Trains a DNN on a function and plots the gradient norm and the loss.

  • HW1_Part2_DNN_MNIST_pnorm.py: Trains a DNN on the MNIST dataset and plots the gradient norm and the loss.

  • HW1_Part2_DNN_func_gradientzero.py: Plots the minimal ratio against the loss for the DNN model at points where the gradient norm is close to zero.
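The two quantities tracked in this part can be sketched in plain NumPy: the gradient norm is the 2-norm taken over all parameter gradients, and the minimal ratio is the fraction of positive eigenvalues of the Hessian at a near-zero-gradient point. The arrays below are toy stand-ins, not the actual model gradients or Hessian:

```python
import numpy as np

def grad_norm(grads):
    """2-norm over a list of per-layer gradient arrays."""
    return np.sqrt(sum(np.sum(g ** 2) for g in grads))

def minimal_ratio(hessian):
    """Fraction of positive eigenvalues of a symmetric Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)
    return float(np.mean(eigvals > 0))

# Toy stand-ins: two "layers" of gradients and a diagonal Hessian.
grads = [np.full((2, 2), 3.0), np.full((3,), 4.0)]  # norm = sqrt(4*9 + 3*16)
H = np.diag([1.0, 2.0, -0.5, 3.0])                  # 3 of 4 eigenvalues positive
```

A minimal ratio near 1 suggests the near-zero-gradient point is close to a local minimum; a ratio near 0.5 suggests a saddle.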

Part 3

  • HW1_Part3_DNN_MNIST_randomlabels.py: Trains a DNN model on MNIST after randomly reassigning the labels of each training batch.

  • HW1_Part3_No_of_params_vs_generalization.py: Trains a DNN model ten times, increasing the number of nodes per layer on each run.

  • HW1_Part3_DNN_MNIST_flattness_vs_generatlization_part1.py: Linearly interpolates between the weights of two trained models and evaluates a third model built from the interpolated weights.

  • HW1_Part3_DNN_MNIST_flattness_vs_generatlization_part2.py: Records the loss, accuracy, and sensitivity of five models (a mix of CNNs and DNNs) on the MNIST dataset. Each model's plots must be generated separately by commenting out the others.
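The interpolation in part 1 follows the standard linear scheme θ(α) = (1 − α)·θ₁ + α·θ₂, applied layer by layer. A sketch with toy per-layer weight lists (the real script does this on trained MNIST models and evaluates the blended model at each α):

```python
import numpy as np

def interpolate_weights(w1, w2, alpha):
    """Blend two models' per-layer weights: (1 - alpha) * w1 + alpha * w2."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(w1, w2)]

# Toy weight lists standing in for two trained models.
model_a = [np.zeros((2, 2)), np.zeros(2)]
model_b = [np.ones((2, 2)), np.ones(2)]

# A sweep slightly past [0, 1] is common in flatness experiments,
# evaluating the blended model at each point (evaluation not shown).
for alpha in np.linspace(-0.5, 1.5, 5):
    blended = interpolate_weights(model_a, model_b, alpha)
```

Plotting loss and accuracy against α shows how flat or sharp the region between the two minima is.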

Homework 2

  • hw2_seq2seq.sh: Runs the seq2seq captioning model on the test-set features and writes the generated captions. Usage:

```
./hw2_seq2seq.sh 'MLDS_hw2_1_data/testing_label.json' "MLDS_hw2_1_data/testing_data/feat/{}.npy" 'testset_output.txt'
```

  • output.txt: Generated captions used for calculating the BLEU score (best score: 0.551). Usage:

```
python3 MLDS_hw2_1_data/bleu_eval.py output.txt
```
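The scoring in bleu_eval.py is not reproduced here, but its core building block is clipped n-gram precision. A simplified unigram (BLEU-1-style) sketch, with hypothetical example sentences:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: candidate words counted at most as often
    as they appear in the reference, divided by candidate length."""
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    clipped = sum(min(count, ref_counts[word])
                  for word, count in cand_counts.items())
    return clipped / sum(cand_counts.values())
```

Clipping is what penalizes degenerate captions that repeat a common word; full BLEU combines several n-gram orders with a brevity penalty.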

Homework 3

  • DCGAN_final.py: The DCGAN model.
  • WGAN_final.py: The WGAN model.
  • DCGAN_final-differentmodel.py: The DCGAN model without batch normalization and with different hyper-parameters.
  • FID_IS_score_final.py: Calculates the FID and IS scores. To select between WGAN and DCGAN, uncomment the corresponding image path in the code.
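FID_IS_score_final.py computes these scores from Inception features; the FID formula itself, FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}), can be sketched in NumPy alone using the identity Tr((Σ₁Σ₂)^{1/2}) = Tr((Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2}), which keeps the matrix square roots on symmetric PSD matrices:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians fitted to feature activations."""
    s1_half = _sqrtm_psd(sigma1)
    covmean_trace = np.trace(_sqrtm_psd(s1_half @ sigma2 @ s1_half))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1) + np.trace(sigma2) - 2.0 * covmean_trace)
```

Identical feature distributions give a FID of zero; larger values mean the generated and real feature statistics diverge.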

Built With

  • TensorFlow 1.15
  • JupyterHub - Python 3
