Added instructions for Colab
Anna V. Kozlova committed Aug 25, 2018
1 parent 2e75054 commit 50dbd59
Showing 1 changed file with 3 additions and 1 deletion.
week3/week3-Embeddings.ipynb: 4 changes (3 additions & 1 deletion)
@@ -85,7 +85,7 @@
"\n",
"To solve the problem, you will use two different models of embeddings:\n",
"\n",
" - [Pre-trained word vectors](https://code.google.com/archive/p/word2vec/) from Google which were trained on a part of Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. You need to download it by following this [link](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing).\n",
" - [Pre-trained word vectors](https://code.google.com/archive/p/word2vec/) from Google which were trained on a part of Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. You need to download it by following this [link](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing). If you run this task in Google Colab, you could download the vectors directly (`wget` command) from one of the links mentioned [here](https://groups.google.com/forum/#!topic/word2vec-toolkit/z0Aw5powUco).\n",
" - Representations using StarSpace on StackOverflow data sample. You will need to train them from scratch."
]
},
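
For reference, a minimal sketch of getting the pre-trained vectors into a notebook with gensim. The file name, the `limit` cap, and the variable name are assumptions, not part of this commit; the actual Colab mirror URL lives in the linked thread and is not reproduced here:

```python
# Minimal sketch, not part of this commit: load the pre-trained Google News
# vectors with gensim. Assumes the archive was downloaded (in Colab, e.g. via
# a `!wget <mirror-url>` cell; the real URL is in the linked thread) and saved
# as GoogleNews-vectors-negative300.bin.gz.
from gensim.models import KeyedVectors

wv_embeddings = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin.gz',  # assumed local path
    binary=True,
    limit=500_000,  # optional: cap the vocabulary to fit Colab RAM
)

print(wv_embeddings['dog'].shape)            # (300,): one 300-dimensional vector
print(wv_embeddings.most_similar('dog', topn=3))
```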
@@ -276,6 +276,8 @@
},
"outputs": [],
"source": [
"import nltk\n",
"nltk.download('stopwords')\n",
"from util import array_to_string"
]
},
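
A quick sketch of what the added download enables. The filtering below is illustrative only: the notebook's actual preprocessing helper lives in `util.py`, which this diff does not show:

```python
# Illustrative only: the stopwords corpus fetched above is typically used to
# filter tokens during text preprocessing.
import nltk
nltk.download('stopwords')  # one-time download into ~/nltk_data
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words('english'))
tokens = [t for t in "how to loop over a list in python".split()
          if t not in STOPWORDS]
print(tokens)  # ['loop', 'list', 'python']
```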
