Fix links
Anna V. Kozlova committed Nov 5, 2018
1 parent 09d4b30 commit 7fde864
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions week2/week2-NER.ipynb
@@ -411,7 +411,7 @@
"Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps: \n",
" \n",
"- Create embeddings matrix with [tf.Variable](https://www.tensorflow.org/api_docs/python/tf/Variable). Specify its name (*embeddings_matrix*), type (*tf.float32*), and initialize with random values.\n",
"- Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use *BasicLSTMCell*, but you can also experiment with other types, e.g. GRU cells. [This](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) blogpost could be interesting if you want to learn more about the differences.\n",
"- Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use *LSTMCell*, but you can also experiment with other types, e.g. GRU cells. [This](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) blogpost could be interesting if you want to learn more about the differences.\n",
"- Wrap your cells with [DropoutWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper). Dropout is an important regularization technique for neural networks. Specify all keep probabilities using the dropout placeholder that we created before.\n",
" \n",
"After that, you can build the computation graph that transforms an input_batch:\n",
@@ -515,7 +515,7 @@
 },
 "source": [
 "During training we do not need predictions of the network, but we need a loss function. We will use [cross-entropy loss](http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy), efficiently implemented in TF as \n",
-"[cross entropy with logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits). Note that it should be applied to the logits of the model (not to softmax probabilities!). Also note that we do not want to take into account loss terms coming from `<PAD>` tokens, so we need to mask them out before computing the [mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean)."
+"[cross entropy with logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2). Note that it should be applied to the logits of the model (not to softmax probabilities!). Also note that we do not want to take into account loss terms coming from `<PAD>` tokens, so we need to mask them out before computing the [mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean)."
 ]
 },
 {
@@ -529,7 +529,7 @@
"def compute_loss(self, n_tags, PAD_index):\n",
" \"\"\"Computes masked cross-entopy loss with logits.\"\"\"\n",
" \n",
" # Create cross entropy function function (tf.nn.softmax_cross_entropy_with_logits)\n",
" # Create cross entropy function function (tf.nn.softmax_cross_entropy_with_logits_v2)\n",
" ground_truth_tags_one_hot = tf.one_hot(self.ground_truth_tags, n_tags)\n",
" loss_tensor = ######### YOUR CODE HERE #############\n",
" \n",
@@ -557,7 +557,7 @@
"source": [
"The last thing to specify is how we want to optimize the loss. \n",
"We suggest that you use [Adam](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) optimizer with a learning rate from the corresponding placeholder. \n",
"You will also need to apply [clipping](https://www.tensorflow.org/api_guides/python/train#Gradient_Clipping) to eliminate exploding gradients. It can be easily done with [clip_by_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_norm) function. "
"You will also need to apply clipping to eliminate exploding gradients. It can be easily done with [clip_by_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_norm) function. "
]
},
{
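
A hedged sketch of that optimization step, assuming self.loss and a self.learning_rate_ph placeholder are already defined; the clip threshold of 1.0 is an illustrative choice, not a value from the notebook:

self.optimizer = tf.train.AdamOptimizer(self.learning_rate_ph)
grads_and_vars = self.optimizer.compute_gradients(self.loss)

# Clip each gradient's norm to curb exploding gradients, then apply them.
clip_norm = 1.0
self.grads_and_vars = [(tf.clip_by_norm(grad, clip_norm), var)
                       for grad, var in grads_and_vars]
self.train_op = self.optimizer.apply_gradients(self.grads_and_vars)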