From 2699541ffc3e1f2638b15555bfe08b739cc5d134 Mon Sep 17 00:00:00 2001 From: lena-voita Date: Thu, 30 Sep 2021 11:57:59 +0100 Subject: [PATCH] fix typo --- nlp_course.html | 4 ++-- nlp_course/text_classification.html | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/nlp_course.html b/nlp_course.html index 117c8ed..78e6d00 100644 --- a/nlp_course.html +++ b/nlp_course.html @@ -193,7 +193,7 @@

This new format of the course is designed for:

Seminars & Homeworks -
notebooks in our 7k-☆ course repo +
notebooks in our 7.3k-☆ course repo
@@ -256,7 +256,7 @@

Bonus:

Seminars & Homeworks

For each topic, you can take notebooks from - our 7k-☆ course repo. + our 7.3k-☆ course repo.

Since 2020, both PyTorch and TensorFlow!

diff --git a/nlp_course/text_classification.html b/nlp_course/text_classification.html index d12a19e..5c34515 100644 --- a/nlp_course/text_classification.html +++ b/nlp_course/text_classification.html @@ -2300,7 +2300,7 @@

P(x|y=k): use the "naive" assumption

Here we assume that document \(x\) is represented as a set of features, e.g., a set of its words \((x_1, \dots, x_n)\): - \[P(x| y=k)=P(x_1, \dots, x_n|y).\] + \[P(x| y=k)=P(x_1, \dots, x_n|y=k).\]

The Naive Bayes assumptions are

@@ -2324,7 +2324,7 @@

P(x|y=k): use the "naive" assumption

With these "naive" assumptions we get: - \[P(x| y=k)=P(x_1, \dots, x_n|y)=\prod\limits_{t=1}^nP(x_t|y=k).\] + \[P(x| y=k)=P(x_1, \dots, x_n|y=k)=\prod\limits_{t=1}^nP(x_t|y=k).\] The probabilities \(P(x_i|y=k)\) are estimated as the proportion of times the word \(x_i\) appeared in documents of class \(k\) among all tokens in these documents:
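The estimate above can be sketched in a few lines of code. This is a minimal illustration with a hypothetical toy corpus (the documents and labels are invented for the example, not from the course): each \(P(x_i|y=k)\) is computed as the number of times word \(x_i\) occurs in class-\(k\) documents divided by the total number of tokens in those documents, exactly as in the formula, with no smoothing.

```python
from collections import Counter

# Hypothetical toy corpus: (document tokens, class label).
docs = [
    (["good", "great", "movie"], "pos"),
    (["good", "plot"], "pos"),
    (["bad", "boring", "movie"], "neg"),
]

# Per-class word counts and per-class total token counts.
counts = {}   # class -> Counter of word occurrences
totals = {}   # class -> total number of tokens
for tokens, label in docs:
    counts.setdefault(label, Counter()).update(tokens)
    totals[label] = totals.get(label, 0) + len(tokens)

def p_word_given_class(word, label):
    # P(x_i | y=k): proportion of class-k tokens equal to this word.
    return counts[label][word] / totals[label]

def p_doc_given_class(tokens, label):
    # "Naive" factorization: P(x | y=k) = prod_t P(x_t | y=k).
    p = 1.0
    for t in tokens:
        p *= p_word_given_class(t, label)
    return p

# "good" appears 2 times among the 5 "pos" tokens:
print(p_word_given_class("good", "pos"))  # 0.4
```

In practice the unsmoothed estimate assigns probability zero to any word unseen in a class, which zeroes out the whole product; a smoothed variant (e.g., add-one counts) is the usual remedy, discussed later in most treatments of Naive Bayes.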