From a0a74e49d487bc73da5fc8ad19104efca64cd2ef Mon Sep 17 00:00:00 2001
From: Arseniy Ashuha
Date: Wed, 31 Aug 2016 16:12:30 +0300
Subject: [PATCH] init

---
 .gitignore  |   1 +
 _config.yml |   2 +-
 index.html  | 165 +---------------------------------------------------
 3 files changed, 4 insertions(+), 164 deletions(-)
 create mode 100644 .gitignore

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..5509140
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+*.DS_Store
diff --git a/_config.yml b/_config.yml
index 3e1fd7b..e66a707 100644
--- a/_config.yml
+++ b/_config.yml
@@ -1,5 +1,5 @@
 # Site settings
-title: Maсhine Learning
+title: "MIPT: Machine Learning, part 2"
 email: ml.course.mipt@gmail.com
 description: "Moscow Institute of Physics and Technology"
 baseurl: ""
diff --git a/index.html b/index.html
index 6f0afb3..02b3004 100644
--- a/index.html
+++ b/index.html
@@ -4,87 +4,15 @@
-      These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.
+      These notes accompany the MIPT class Machine Learning, part 2.
-      For questions/concerns/bug reports contact Justin Johnson regarding the assignments, or contact Andrej Karpathy regarding the course notes. You can also submit a pull request directly to our git repo.
+      rusles
-      We encourage the use of the hypothes.is extension to annotate comments and discuss these notes inline.
-
-      Linear classification: Support Vector Machine, Softmax
-      parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo
-
-
-      Optimization: Stochastic Gradient Descent
-      optimization landscapes, local search, learning rate, analytic/numerical gradient
-
-
-      Backpropagation, Intuitions
-      chain rule interpretation, real-valued circuits, patterns in gradient flow
-
-
-      Neural Networks Part 1: Setting up the Architecture
-      model of a biological neuron, activation functions, neural net architecture, representational power
-
-
-      Neural Networks Part 2: Setting up the Data and the Loss
-      preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions
-
-
-      Neural Networks Part 3: Learning and Evaluation
-      gradient checks, sanity checks, babysitting the learning process, momentum (+nesterov), second-order methods, Adagrad/RMSprop, hyperparameter optimization, model ensembles
-
-
-    Module 2: Convolutional Neural Networks
-
-      Convolutional Neural Networks: Architectures, Convolution / Pooling Layers
-      layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies, computational considerations
-
-
-      Understanding and Visualizing Convolutional Neural Networks
-      tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons
-