From 131ee48e92fd3ace3532ab17de59b04265a5bf22 Mon Sep 17 00:00:00 2001
From: Kate Sv
Date: Thu, 22 Aug 2024 17:40:29 -0700
Subject: [PATCH] (docs) Semrush audit issues (#65)

* fix page has 1 links to redirect chains or loops

* fix duplicate meta descriptions
---
 .../intro-semantic-search/further-reading-search.mdx          | 2 +-
 .../classification-using-embeddings.mdx                       | 4 ++--
 .../intro-text-representation/clustering-using-embeddings.mdx | 2 +-
 .../intro-text-representation/embeddings-visualizing-data.mdx | 2 +-
 .../semantic-search-using-embeddings.mdx                      | 2 +-
 .../llm-university/sandbox/chapter-2-text-embeddings.mdx      | 2 +-
 .../chapter-5-building-a-classifier-with-the-cohere-api.mdx   | 2 +-
 fern/pages/text-embeddings/embeddings.mdx                     | 2 +-
 fern/pages/text-generation/documents-and-citations.mdx        | 2 +-
 9 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/fern/pages/llm-university/intro-semantic-search/further-reading-search.mdx b/fern/pages/llm-university/intro-semantic-search/further-reading-search.mdx
index f4407f41..cf63ecd5 100644
--- a/fern/pages/llm-university/intro-semantic-search/further-reading-search.mdx
+++ b/fern/pages/llm-university/intro-semantic-search/further-reading-search.mdx
@@ -8,6 +8,6 @@ updatedAt: "Wed Apr 03 2024 18:49:18 GMT+0000 (Coordinated Universal Time)"
 ---
 We hope you enjoyed learning about the different search methods you can use with large language models! If you'd like to delve more into search, please check out the following posts.
 
-- [Text embeddings visually explained](https://txt.cohere.com/text-embeddings/): A visual approach to text embeddings, including applications in search, clustering, and classification.
+- [Text embeddings visually explained](https://cohere.com/llmu/text-embeddings/): A visual approach to text embeddings, including applications in search, clustering, and classification.
 - [Semantic Search](/docs/semantic-search): Build a simple semantic search engine with a dataset of questions.
 - [Multilingual Movie Search](/page/multilingual-movie-search): Build your own movie search app. As input, you describe what movie you'd like to watch, and the model suggest a list of movies that match that description, in multiple languages.
diff --git a/fern/pages/llm-university/intro-text-representation/classification-using-embeddings.mdx b/fern/pages/llm-university/intro-text-representation/classification-using-embeddings.mdx
index 0b10e2c2..e7854dbd 100644
--- a/fern/pages/llm-university/intro-text-representation/classification-using-embeddings.mdx
+++ b/fern/pages/llm-university/intro-text-representation/classification-using-embeddings.mdx
@@ -70,7 +70,7 @@ print(f"Prediction accuracy is {100*score}%")
 
 Here we can see that all predictions match the actual classes:
 
-color)
+two graphs where all predictions match the actual classes
 
 ### Conclusion
 
@@ -78,5 +78,5 @@ As you can see, there are different ways to use the endpoints to do classificati
 
 ### Original Source
 
-This material comes from the post Text Embeddings Visually Explained
+This material comes from the post Text Embeddings Visually Explained
diff --git a/fern/pages/llm-university/intro-text-representation/clustering-using-embeddings.mdx b/fern/pages/llm-university/intro-text-representation/clustering-using-embeddings.mdx
index b2f2386e..94787b63 100644
--- a/fern/pages/llm-university/intro-text-representation/clustering-using-embeddings.mdx
+++ b/fern/pages/llm-university/intro-text-representation/clustering-using-embeddings.mdx
@@ -54,4 +54,4 @@ In this chapter, you learned how to cluster a dataset of sentences, and you obse
 
 ### Original Source
 
-This material comes from the post Text Embeddings Visually Explained
+This material comes from the post Text Embeddings Visually Explained
diff --git a/fern/pages/llm-university/intro-text-representation/embeddings-visualizing-data.mdx b/fern/pages/llm-university/intro-text-representation/embeddings-visualizing-data.mdx
index 806f7592..0c3da084 100644
--- a/fern/pages/llm-university/intro-text-representation/embeddings-visualizing-data.mdx
+++ b/fern/pages/llm-university/intro-text-representation/embeddings-visualizing-data.mdx
@@ -89,4 +89,4 @@ So now that we’ve seen how embeddings can represent text, in the following cha
 
 ### Original Source
 
-This material comes from the post Text Embeddings Visually Explained
+This material comes from the post Text Embeddings Visually Explained
diff --git a/fern/pages/llm-university/intro-text-representation/semantic-search-using-embeddings.mdx b/fern/pages/llm-university/intro-text-representation/semantic-search-using-embeddings.mdx
index 6e48a28e..52bdccf0 100644
--- a/fern/pages/llm-university/intro-text-representation/semantic-search-using-embeddings.mdx
+++ b/fern/pages/llm-university/intro-text-representation/semantic-search-using-embeddings.mdx
@@ -69,4 +69,4 @@ There are many more applications of embeddings, which you'll learn in the following
 
 ### Original Source
 
-This material comes from the post Text Embeddings Visually Explained
+This material comes from the post Text Embeddings Visually Explained
diff --git a/fern/pages/llm-university/sandbox/chapter-2-text-embeddings.mdx b/fern/pages/llm-university/sandbox/chapter-2-text-embeddings.mdx
index 636c9c8c..48852357 100644
--- a/fern/pages/llm-university/sandbox/chapter-2-text-embeddings.mdx
+++ b/fern/pages/llm-university/sandbox/chapter-2-text-embeddings.mdx
@@ -1,7 +1,7 @@
 ---
 title: "REMOVE Text Embeddings Visually Explained"
 slug: "docs/chapter-2-text-embeddings"
-subtitle: "Post:https://txt.cohere.com/text-embeddings/ And lab https://github.com/cohere-ai/notebooks/blob/main/notebooks/Visualizing_Text_Embeddings.ipynb"
+subtitle: "Post:https://cohere.com/llmu/text-embeddings/ And lab https://github.com/cohere-ai/notebooks/blob/main/notebooks/Visualizing_Text_Embeddings.ipynb"
 hidden: true
 createdAt: "Tue Apr 25 2023 23:32:39 GMT+0000 (Coordinated Universal Time)"
 updatedAt: "Mon Oct 23 2023 14:40:59 GMT+0000 (Coordinated Universal Time)"
diff --git a/fern/pages/llm-university/sandbox/chapter-5-building-a-classifier-with-the-cohere-api.mdx b/fern/pages/llm-university/sandbox/chapter-5-building-a-classifier-with-the-cohere-api.mdx
index d5353cda..3c1778c7 100644
--- a/fern/pages/llm-university/sandbox/chapter-5-building-a-classifier-with-the-cohere-api.mdx
+++ b/fern/pages/llm-university/sandbox/chapter-5-building-a-classifier-with-the-cohere-api.mdx
@@ -149,7 +149,7 @@ In this section, we’ll look at option number 2, which is how we can utilize th
 
 This time, we’ll use the full 800 training data points that we prepared earlier. The first step is to turn the training and test inputs (i.e., the airline queries) into embeddings.
 
-What exactly are embeddings? You can read [more about it in this chapter](https://txt.cohere.com/text-embeddings/), but in summary, it is a set of numbers that represent what a piece of text means, capturing its context and semantics. In our case, we are using the medium model, which generates embeddings of size 2096. So for each text we input, we will get an embedding consisting of 2096 numbers, which looks like the following:
+What exactly are embeddings? You can read [more about it in this chapter](https://cohere.com/llmu/text-embeddings/), but in summary, it is a set of numbers that represent what a piece of text means, capturing its context and semantics. In our case, we are using the medium model, which generates embeddings of size 2096. So for each text we input, we will get an embedding consisting of 2096 numbers, which looks like the following:
 
 **[0.20641953, 0.35582256, 0.6058123, -0.058944624, 0.8949609, 1.2956009, 1.2408538, -0.89241934, -0.56218493, -0.5521631, -0.11521566, 0.9081634, 1.662983, -0.9293592, -2.3170912, 1.177852, 0.35577637, ... ]**
diff --git a/fern/pages/text-embeddings/embeddings.mdx b/fern/pages/text-embeddings/embeddings.mdx
index 687a77dd..7c30f6bc 100644
--- a/fern/pages/text-embeddings/embeddings.mdx
+++ b/fern/pages/text-embeddings/embeddings.mdx
@@ -3,7 +3,7 @@ title: "Introduction to Embeddings at Cohere"
 slug: "docs/embeddings"
 hidden: false
-description: "Use the API to generate completions, distill text into semantically meaningful vectors, and more. Get state-of-the-art natural language processing without the need for expensive supercomputing infrastructure."
+description: "Embeddings convert text into numerical representations to measure similarity, useful for tasks like search and classification, with support for multiple languages and various compression types."
 image: "../../assets/images/fa074c3-cohere_docs_preview_image_1200x630_copy.jpg"
 keywords: "vector embeddings, embeddings, natural language processing"
diff --git a/fern/pages/text-generation/documents-and-citations.mdx b/fern/pages/text-generation/documents-and-citations.mdx
index 635f4517..ce02396c 100644
--- a/fern/pages/text-generation/documents-and-citations.mdx
+++ b/fern/pages/text-generation/documents-and-citations.mdx
@@ -101,4 +101,4 @@ Also, we can clearly see that it _has_ utilized the document. Our first document
 
 Finally, note that the output contains a citations object that tells us not only which documents the model relied upon (with the `"text"` and `"document_ids"` fields), but also the particular part of the claim supported by a particular document (with the `"start"` and `"end"` fields, which are spans that tell us the location of the supported claim inside the reply). This citation object is included because the model was able to use the documents provided, but if it hadn’t been able to do so, no citation object would be present.
 
-You can experiment with RAG in the [chat playground](https://dashboard.cohere.com/playground/?_gl=1*1jqpo29*_ga*MTAxNTg1NTM1MS4xNjk1MjMwODQw*_ga_CRGS116RZS*MTY5NzIxNDAxNC40My4xLjE2OTcyMTQwMTQuNjAuMC4w).
+You can experiment with RAG in the [chat playground](https://dashboard.cohere.com/playground/chat).
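
For reference while reviewing the final hunk: the citations object described in documents-and-citations.mdx carries `"text"`, `"document_ids"`, `"start"`, and `"end"` fields, where the span indices point into the reply text. The sketch below illustrates only that shape; the reply string, span values, and document ID are invented for illustration and are not API output:

```python
# Hypothetical example of the citation shape described in the page above.
# Only the four field names come from the docs; all values are made up.
reply = "The tallest penguin is the emperor penguin."
citations = [
    {
        "start": 23,                # span start, character offset into `reply`
        "end": 42,                  # span end (exclusive)
        "text": reply[23:42],       # the supported claim: "the emperor penguin"
        "document_ids": ["doc_0"],  # documents backing this claim
    }
]

# The span invariant the docs describe: `text` is exactly reply[start:end].
for c in citations:
    assert reply[c["start"]:c["end"]] == c["text"]
```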