(Docs) Old TXT Blog Links with Updated Links (#68)
lampa-da authored Aug 24, 2024
1 parent 87c4862 commit b4e0dec
Showing 61 changed files with 82 additions and 82 deletions.
@@ -17,4 +17,4 @@ Command R is a generative model optimized for long context tasks such as retriev
- Strong capabilities across 10 key languages
- Model weights available on HuggingFace for research and evaluation

-For more information, check out the [official blog post](https://txt.cohere.com/command-r/) or the [Command R documentation](/docs/command-r).
+For more information, check out the [official blog post](https://cohere.com/blog/command-r/) or the [Command R documentation](/docs/command-r).
2 changes: 1 addition & 1 deletion fern/pages/cookbooks/creating-a-qa-bot.mdx
@@ -123,7 +123,7 @@ The vector database we built using `VectorStoreIndex` comes with an in-built ret
retriever = index.as_retriever(similarity_top_k=top_k)
```

-We recently released [Rerank-3](https://txt.cohere.com/rerank-3/) (April '24), which we can use to improve the quality of retrieval, as well as reduce latency and the cost of inference. To use the retriever with `rerank`, we create a thin wrapper around `index.as_retriever` as follows:
+We recently released [Rerank-3](https://cohere.com/blog/rerank-3/) (April '24), which we can use to improve the quality of retrieval, as well as reduce latency and the cost of inference. To use the retriever with `rerank`, we create a thin wrapper around `index.as_retriever` as follows:

```python PYTHON
class RetrieverWithRerank:
```
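
The wrapper's body is collapsed in the hunk above, so for orientation here is a minimal sketch of what such a wrapper might look like. The client setup, model name, and method details below are assumptions for illustration, not the cookbook's actual code.

```python PYTHON
# Hedged sketch of a rerank wrapper around a LlamaIndex retriever.
# Model name, client setup, and method details are assumptions.
import cohere

co = cohere.Client("YOUR_API_KEY")

class RetrieverWithRerank:
    def __init__(self, retriever, top_n=3, model="rerank-english-v3.0"):
        self.retriever = retriever  # e.g. index.as_retriever(similarity_top_k=top_k)
        self.top_n = top_n
        self.model = model

    def retrieve(self, query: str):
        # First-stage retrieval from the vector index
        nodes = self.retriever.retrieve(query)
        documents = [node.get_content() for node in nodes]
        # Second-stage reranking with the Rerank endpoint
        reranked = co.rerank(
            query=query, documents=documents, top_n=self.top_n, model=self.model
        )
        return [documents[result.index] for result in reranked.results]
```
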
2 changes: 1 addition & 1 deletion fern/pages/cookbooks/document-parsing-for-enterprises.mdx
@@ -23,7 +23,7 @@ The bread and butter of natural language processing technology is text. Once we

In the context of enterprise Retrieval Augmented Generation (RAG), the information is often locked in complex file types such as PDFs. These formats are made for sharing information between humans, but not so much with language models.

-In this notebook, we will use a real-world pharmaceutical drug label to test out various performant approaches to parsing PDFs. This will allow us to use [Cohere's Command-R model](https://txt.cohere.com/command-r/) in a RAG setting to answer questions and asks about this label, such as "I need a succinct summary of the compound name, indication, route of administration, and mechanism of action of" a given pharmaceutical.
+In this notebook, we will use a real-world pharmaceutical drug label to test out various performant approaches to parsing PDFs. This will allow us to use [Cohere's Command-R model](https://cohere.com/blog/command-r/) in a RAG setting to answer questions and asks about this label, such as "I need a succinct summary of the compound name, indication, route of administration, and mechanism of action of" a given pharmaceutical.
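
For orientation, the most basic form of PDF-to-text parsing looks something like the sketch below, here with the `pypdf` library. The library choice and file name are assumptions, not necessarily what the notebook uses.

```python PYTHON
# Hedged sketch: pypdf and the file name are assumptions; the notebook compares
# several parsing approaches, and plain text extraction is only the simplest one.
from pypdf import PdfReader

reader = PdfReader("drug_label.pdf")  # placeholder path to a PDF drug label
pages_text = [page.extract_text() or "" for page in reader.pages]
document_text = "\n\n".join(pages_text)

print(document_text[:500])  # inspect the first few hundred characters
```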

<img
alt="Document Parsing Result"
2 changes: 1 addition & 1 deletion fern/pages/cookbooks/fueling-generative-content.mdx
@@ -10,7 +10,7 @@ import { CookbookHeader } from "../../components/cookbook-header";

Generative models have proven extremely useful in content idea generation. But they don’t take into account user search demand and trends. In this notebook, let’s see how we can solve that by adding keyword research into the equation.

-Read the accompanying [blog post here](https://txt.cohere.ai/generative-content-keyword-research/).
+Read the accompanying [blog post here](https://cohere.com/blog/generative-content-keyword-research/).

```python PYTHON
! pip install cohere -q
```
2 changes: 1 addition & 1 deletion fern/pages/cookbooks/hello-world-meet-ai.mdx
@@ -10,7 +10,7 @@ import { CookbookHeader } from "../../components/cookbook-header";

Here we take a quick tour of what’s possible with language AI via Cohere’s Large Language Model (LLM) API. This is the Hello, World! of language AI, written for developers with little or no background in AI. In fact, we’ll do that by exploring the Hello, World! phrase itself.

-Read the accompanying [blog post here](https://txt.cohere.ai/hello-world-p1/).
+Read the accompanying [blog post here](https://cohere.com/blog/hello-world-p1/).

<img
alt="Hello World! Meet Language AI"
2 changes: 1 addition & 1 deletion fern/pages/cookbooks/multilingual-search.mdx
@@ -8,7 +8,7 @@ import { CookbookHeader } from "../../components/cookbook-header";

<CookbookHeader href="https://github.com/cohere-ai/notebooks/blob/main/notebooks/guides/Multilingual_Search_with_Cohere_and_Langchain.ipynb" />

-**_Read the accompanying [blog post here](https://txt.cohere.ai/search-cohere-langchain/)._**
+**_Read the accompanying [blog post here](https://cohere.com/blog/search-cohere-langchain/)._**

This notebook contains two examples for performing multilingual search using Cohere and Langchain. Langchain is a library that assists the development of applications built on top of large language models (LLMs), such as Cohere's models.

4 changes: 2 additions & 2 deletions fern/pages/cookbooks/wikipedia-semantic-search.mdx
@@ -8,7 +8,7 @@ import { CookbookHeader } from "../../components/cookbook-header";

<CookbookHeader href="https://github.com/cohere-ai/notebooks/blob/main/notebooks/Wikipedia_Semantic_Search_With_Cohere_Embeddings_Archives.ipynb" />

-This notebook contains the starter code to do simple [semantic search](https://txt.cohere.ai/what-is-semantic-search/) on the [Wikipedia embeddings archives](https://txt.cohere.ai/embedding-archives-wikipedia/) published by Cohere. These archives embed Wikipedia sites in multiple languages. In this example, we'll use [Wikipedia Simple English](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings).
+This notebook contains the starter code to do simple [semantic search](https://cohere.com/llmu/what-is-semantic-search/) on the [Wikipedia embeddings archives](https://cohere.com/blog/embedding-archives-wikipedia/) published by Cohere. These archives embed Wikipedia sites in multiple languages. In this example, we'll use [Wikipedia Simple English](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings).

Let's now download 1,000 records from the English Wikipedia embeddings archive so we can search it afterwards.
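
As a rough sketch of that step (the notebook's own loading cell is not shown in this hunk), pulling the first 1,000 records with the `datasets` library might look like this; the `emb` field name and the streaming approach are assumptions:

```python PYTHON
# Hedged sketch: the field name ("emb") and the streaming approach are
# assumptions about the Cohere/wikipedia-22-12-simple-embeddings dataset.
from itertools import islice

import numpy as np
from datasets import load_dataset

dataset = load_dataset(
    "Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True
)

docs = list(islice(dataset, 1000))                     # first 1,000 records
doc_embeddings = np.asarray([d["emb"] for d in docs])  # one 768-dim vector per doc
```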

@@ -42,7 +42,7 @@ Downloading: 0%| | 0.00/1.29k [00:00<?, ?B/s]
Using custom data configuration Cohere--wikipedia-22-12-simple-embeddings-94deea3d55a22093
```

-Now, `doc_embeddings` holds the embeddings of the first 1,000 documents in the dataset. Each document is represented as an [embeddings vector](https://txt.cohere.ai/sentence-word-embeddings/) of 768 values.
+Now, `doc_embeddings` holds the embeddings of the first 1,000 documents in the dataset. Each document is represented as an [embeddings vector](https://cohere.com/llmu/sentence-word-embeddings/) of 768 values.

```python PYTHON
doc_embeddings.shape
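# expected output: (1000, 768), i.e. 1,000 documents with 768 values each (per the text above)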
```
@@ -61,7 +61,7 @@ The number shown in the metrics are the [macro-weighted average](https://datasc

For recall, precision, and F1, the number in the pill is a proxy for how much improvement was observed when the default model was trained on your dataset.

-You can see the detailed calculations to evaluate Classify models in this [blog post](https://txt.cohere.ai/classification-eval-metrics/).
+You can see the detailed calculations to evaluate Classify models in this [blog post](https://cohere.com/blog/classification-eval-metrics/).
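
As a generic illustration of macro-averaged classification metrics (this is not the dashboard's exact computation, which the linked post describes):

```python PYTHON
# Generic illustration of macro-averaged precision, recall, and F1 with
# scikit-learn; the Classify dashboard's exact calculation may differ.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["spam", "ham", "ham", "spam", "ham"]
y_pred = ["spam", "ham", "spam", "spam", "ham"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro"  # or average="weighted"
)
print(precision, recall, f1)
```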

#### Multilabel F1

4 changes: 2 additions & 2 deletions fern/pages/get-started/the-cohere-platform.mdx
@@ -15,7 +15,7 @@ Cohere allows developers and enterprises to build LLM-powered applications. We d

The Command family of models includes [Command](https://cohere.com/models/command), [Command R](/docs/command-r), and [Command R+](/docs/command-r-plus). Together, they are the text-generation LLMs powering conversational agents, summarization, copywriting, and similar use cases. They work through the [Chat](/reference/chat) endpoint, which can be used with or without [retrieval augmented generation](/docs/retrieval-augmented-generation-rag) RAG.

-[Rerank](https://txt.cohere.com/rerank/) is the fastest way to inject the intelligence of a language model into an existing search system. It can be accessed via the [Rerank](/reference/rerank-1) endpoint.
+[Rerank](https://cohere.com/blog/rerank/) is the fastest way to inject the intelligence of a language model into an existing search system. It can be accessed via the [Rerank](/reference/rerank-1) endpoint.

[Embed](https://cohere.com/models/embed) improves the accuracy of search, classification, clustering, and RAG results. It also powers the [Embed](/reference/embed) and [Classify](/reference/classify) endpoints.
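
For orientation, calls to these endpoints through the Python SDK look roughly like the sketch below; the model names and parameters shown are assumptions, and the endpoint references linked above are the authoritative source.

```python PYTHON
# Hedged sketch of the Chat, Rerank, and Embed endpoints via the Python SDK.
# Model names and parameters are assumptions; see the endpoint references.
import cohere

co = cohere.Client("YOUR_API_KEY")

# Chat: conversational text generation, with or without RAG
reply = co.chat(model="command-r", message="What does the Rerank endpoint do?")
print(reply.text)

# Rerank: re-order an existing search system's candidates by relevance
ranked = co.rerank(
    model="rerank-english-v3.0",
    query="renewable energy policy",
    documents=["a paragraph about solar subsidies", "a paragraph about football"],
    top_n=1,
)

# Embed: vectors for search, classification, clustering, and RAG
vectors = co.embed(
    model="embed-english-v3.0",
    texts=["How do heat pumps work?"],
    input_type="search_query",
)
```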

@@ -68,7 +68,7 @@ To [create a fine-tuned model](/docs/fine-tuning), simply upload a dataset and h
Depending on your privacy/security requirements there are a number of ways to access Cohere:

- [Cohere API](/reference/about): this is the easiest option, simply grab an API key from [the dashboard](https://dashboard.cohere.com/) and start using the models hosted by Cohere.
-- Cloud AI platforms: this option offers a balance of ease-of-use and security. you can access Cohere on various cloud AI platforms such as [Oracle's GenAI Service](https://www.oracle.com/uk/artificial-intelligence/generative-ai/large-language-models/), AWS' [Bedrock](https://aws.amazon.com/bedrock/cohere-command-embed/) and [Sagemaker](https://aws.amazon.com/blogs/machine-learning/cohere-brings-language-ai-to-amazon-sagemaker/) platforms, [Google Cloud](https://console.cloud.google.com/marketplace/product/cohere-id-public/cohere-public?ref=txt.cohere.com), and [Azure's AML service](https://txt.cohere.com/coheres-enterprise-ai-models-coming-soon-to-microsoft-azure-ai-as-a-managed-service/).
+- Cloud AI platforms: this option offers a balance of ease-of-use and security. you can access Cohere on various cloud AI platforms such as [Oracle's GenAI Service](https://www.oracle.com/uk/artificial-intelligence/generative-ai/large-language-models/), AWS' [Bedrock](https://aws.amazon.com/bedrock/cohere-command-embed/) and [Sagemaker](https://aws.amazon.com/blogs/machine-learning/cohere-brings-language-ai-to-amazon-sagemaker/) platforms, [Google Cloud](https://console.cloud.google.com/marketplace/product/cohere-id-public/cohere-public?ref=txt.cohere.com), and [Azure's AML service](https://cohere.com/blog/coheres-enterprise-ai-models-coming-soon-to-microsoft-azure-ai-as-a-managed-service/).
- Private cloud deployments: Cohere's models can be deployed privately in most virtual private cloud (VPC) environments, offering enhanced security and the highest degree of customization. Please [contact sales](mailto:[email protected]) for information.

<img src="../../assets/images//2ce36b1-Visual_8.png" alt="The major cloud providers." />
@@ -6,7 +6,7 @@ hidden: true
createdAt: "Wed May 03 2023 02:07:08 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu Apr 18 2024 10:45:10 GMT+0000 (Coordinated Universal Time)"
---
-### <a target="_blank" href="https://txt.cohere.ai/search-cohere-langchain/?__hstc=14363112.fb39cf5aec47995e64cd26603e2e04d9.1682489949734.1683573460217.1683583024589.35&amp;__hssc=14363112.1.1683583024589&amp;__hsfp=3887566761">Semantic Search With Cohere and Langchain</a>
+### <a target="_blank" href="https://cohere.com/blog/search-cohere-langchain/?__hstc=14363112.fb39cf5aec47995e64cd26603e2e04d9.1682489949734.1683573460217.1683583024589.35&amp;__hssc=14363112.1.1683583024589&amp;__hsfp=3887566761">Semantic Search With Cohere and Langchain</a>

Use the embed endpoint with Langchain to efficiently build semantic search applications on top of Cohere’s multilingual model.
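
A minimal sketch of that pattern, assuming the `langchain-cohere` integration package and a FAISS vector store (package, class, and model names here are assumptions, not the article's exact code):

```python PYTHON
# Hedged sketch: package, class, and model names are assumptions.
from langchain_cohere import CohereEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = CohereEmbeddings(
    cohere_api_key="YOUR_API_KEY", model="embed-multilingual-v3.0"
)

texts = [
    "El clima está soleado hoy.",
    "The weather is sunny today.",
    "Das Wetter ist heute sonnig.",
]
store = FAISS.from_texts(texts, embeddings)

# The query language does not have to match the document language
for doc in store.similarity_search("sunny weather", k=2):
    print(doc.page_content)
```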

@@ -205,4 +205,4 @@ Follow along, as in Module 3 of this course, you'll be able to build semantic se

### Original Source

-This material comes from the post <a target="_blank" href="https://txt.cohere.com/what-is-semantic-search/">What is Semantic Search?</a>
+This material comes from the post <a target="_blank" href="https://cohere.com/llmu/what-is-semantic-search/">What is Semantic Search?</a>
@@ -165,4 +165,4 @@ In the previous chapter, we learned that sentence embeddings are the bread and b

### Original Source

-This material comes from the post <a target="_blank" href="https://txt.cohere.com/what-is-similarity-between-sentences/">What is Similarity Between Sentences?</a>
+This material comes from the post <a target="_blank" href="https://cohere.com/llmu/what-is-similarity-between-sentences/">What is Similarity Between Sentences?</a>
@@ -213,4 +213,4 @@ In the previous chapter, we learned that sentence embeddings are the bread and b

### Original Source

-This material comes from the post <a target="_blank" href="https://txt.cohere.com/what-is-similarity-between-sentences/">What is Similarity Between Sentences?</a>
+This material comes from the post <a target="_blank" href="https://cohere.com/llmu/what-is-similarity-between-sentences/">What is Similarity Between Sentences?</a>
@@ -138,4 +138,4 @@ Sentence embeddings can be extended to language embeddings, in which the numbers

### Original Source

-This material comes from the post <a target="_blank" href="https://txt.cohere.com/sentence-word-embeddings/">What Are Word and Sentence Embeddings?</a>
+This material comes from the post <a target="_blank" href="https://cohere.com/llmu/sentence-word-embeddings/">What Are Word and Sentence Embeddings?</a>
@@ -160,4 +160,4 @@ In this post, you learned what attention mechanisms are. They are a very useful

### Original Source

-This material comes from the post [What is Attention in Language Models?](https://txt.cohere.com/what-is-attention-in-language-models/)
+This material comes from the post [What is Attention in Language Models?](https://cohere.com/llmu/what-is-attention-in-language-models/)
@@ -195,4 +195,4 @@ The repetition of these steps is what writes the amazing text you’ve seen tran

### Original Source

-This material comes from the post <a target="_blank" href="https://txt.cohere.com/what-are-transformer-models/">What Are Transformer Models and How Do They Work?</a>
+This material comes from the post <a target="_blank" href="https://cohere.com/llmu/what-are-transformer-models/">What Are Transformer Models and How Do They Work?</a>
@@ -74,4 +74,4 @@ In this chapter, you learned the basic four metrics to evaluate classification m

### Original Source

-This material comes from the posts <a target="_blank" href="https://txt.cohere.com/text-classification-use-cases/">Text Classification Intuition for Software Developers</a> and <a target="_blank" href="https://txt.cohere.com/classification-eval-metrics/">Classification Evaluation Metrics: Accuracy, Precision, Recall, and F1 Visually Explained</a>.
+This material comes from the posts <a target="_blank" href="https://cohere.com/blog/text-classification-use-cases/">Text Classification Intuition for Software Developers</a> and <a target="_blank" href="https://cohere.com/blog/classification-eval-metrics/">Classification Evaluation Metrics: Accuracy, Precision, Recall, and F1 Visually Explained</a>.
@@ -70,7 +70,7 @@ The solution is, instead of prompting the same question to the model once, we ca

Let’s look at an example taken from a paper by [Wang et al., 2023](https://arxiv.org/abs/2203.11171?ref=txt.cohere.com) that introduces the concept of _self-consistency_.

-First, revisiting [the previous chapter](https://txt.cohere.com/constructing-prompts/#chain-of-thought), we looked at the concept of _chain-of-thought prompting_ introduced by [Wei et. al, 2023](https://arxiv.org/abs/2201.11903?ref=txt.cohere.com), where a model is prompted in such a way that it is encouraged to do a reasoning step before giving the final response. In those settings, however, the model is typically encouraged to do “greedy decoding,” which means biasing towards the correct and safe path. This can be done by adjusting settings like the temperature value.
+First, revisiting [the previous chapter](https://cohere.com/llmu/constructing-prompts/#chain-of-thought), we looked at the concept of _chain-of-thought prompting_ introduced by [Wei et. al, 2023](https://arxiv.org/abs/2201.11903?ref=txt.cohere.com), where a model is prompted in such a way that it is encouraged to do a reasoning step before giving the final response. In those settings, however, the model is typically encouraged to do “greedy decoding,” which means biasing towards the correct and safe path. This can be done by adjusting settings like the temperature value.

With self-consistency, we can build on the chain-of-thought approach by sampling from several paths instead of one. We also make the paths much more diverse by adjusting the settings towards being more “creative,” again using settings like temperature. We then do a majority vote out of all answers.
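
As a rough illustration of that idea (not the chapter's actual code; the model name, temperature, number of samples, and the crude answer-extraction step are all assumptions):

```python PYTHON
# Hedged sketch of self-consistency: sample several diverse reasoning paths at a
# higher temperature, then take a majority vote over the final answers.
from collections import Counter

import cohere

co = cohere.Client("YOUR_API_KEY")

prompt = (
    "A store has 23 apples, sells 9, and receives a delivery of 14. "
    "How many apples does it have now? Think step by step, then give only "
    "the final number on the last line."
)

answers = []
for _ in range(5):
    response = co.chat(model="command-r", message=prompt, temperature=0.9)
    answers.append(response.text.strip().splitlines()[-1])  # crude extraction

final_answer, votes = Counter(answers).most_common(1)[0]
print(final_answer, f"({votes}/5 samples agree)")
```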

@@ -157,4 +157,4 @@ This is a fascinating area of prompt engineering because it opens up so much roo

### Original Source

-This material comes from the post: [Chaining Prompts for the Command Model](https://txt.cohere.com/chaining-prompts/).
+This material comes from the post: [Chaining Prompts for the Command Model](https://cohere.com/llmu/chaining-prompts/).
@@ -141,7 +141,7 @@ While LLMs excel in text generation tasks, they struggle in context-aware scenar

In real applications, being able to add context to a prompt is key because this is what enables personalized generative AI for a team or company. It makes many use cases possible, such as intelligent assistants, customer support, and productivity tools, that retrieve the right information from a wide range of sources and add it to the prompt.

-This is a whole topic on its own, but to provide some idea, [this demo](https://txt.cohere.com/search-cohere-langchain/#example-2-search-based-question-answering) shows an example of information retrieval in action. In this article though, we’ll assume that the right information is already retrieved and added to the prompt.
+This is a whole topic on its own, but to provide some idea, [this demo](https://cohere.com/blog/search-cohere-langchain/#example-2-search-based-question-answering) shows an example of information retrieval in action. In this article though, we’ll assume that the right information is already retrieved and added to the prompt.
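
A bare-bones sketch of that pattern, prepending retrieved text to the prompt, might look like this; the "retrieved" snippet below is a placeholder standing in for whatever a real retrieval step would return, and the model name is an assumption:

```python PYTHON
# Hedged sketch: the "retrieved" snippet is a placeholder, not real product data.
import cohere

co = cohere.Client("YOUR_API_KEY")

retrieved_context = (
    "CO-1T wireless headphone spec sheet: long battery life, "
    "active noise cancellation, USB-C charging."  # placeholder context
)

prompt = f"""Use only the context below to answer the question.

Context:
{retrieved_context}

Question: What are the features of the CO-1T wireless headphone?"""

response = co.chat(model="command-r", message=prompt)
print(response.text)
```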

Here’s an example where we ask the model to list the features of the CO-1T wireless headphone without any additional context:

@@ -209,5 +209,5 @@ Ultimately, each evaluation approach has its potential pitfalls. An evaluation o

### Original Source

-This material comes from the post: [Evaluating LLM Outputs](https://txt.cohere.com/evaluating-llm-outputs/).
+This material comes from the post: [Evaluating LLM Outputs](https://cohere.com/llmu/evaluating-llm-outputs/).

@@ -99,7 +99,7 @@ As you can see, dense retrieval did much better than keyword search here. The se

### Searching in Other Languages

-As you may have noticed, the `dense_retrieval` function has a parameter called `results_lang` (see [code lab](https://colab.research.google.com/github/cohere-ai/notebooks/blob/main/notebooks/End_To_End_Wikipedia_Search.ipynb#scrollTo=kbeNQtzAMagI&line=2&uniqifier=1)). This parameter determines the language in which the search results are outputted. It is defaulted to English ('en') , but for this demo, it can also be set to German ('de'), French ('fr'), Spanish ('es'), Italian ('it'), Japanese ('ja'), Arabic ('ar'), (Simplified) Chinese ('zh'), Korean ('ko'), and Hindi ('hi'). However, the Cohere multilingual embedding handles [over 100 languages](https://txt.cohere.com/multilingual/).
+As you may have noticed, the `dense_retrieval` function has a parameter called `results_lang` (see [code lab](https://colab.research.google.com/github/cohere-ai/notebooks/blob/main/notebooks/End_To_End_Wikipedia_Search.ipynb#scrollTo=kbeNQtzAMagI&line=2&uniqifier=1)). This parameter determines the language in which the search results are outputted. It is defaulted to English ('en') , but for this demo, it can also be set to German ('de'), French ('fr'), Spanish ('es'), Italian ('it'), Japanese ('ja'), Arabic ('ar'), (Simplified) Chinese ('zh'), Korean ('ko'), and Hindi ('hi'). However, the Cohere multilingual embedding handles [over 100 languages](https://cohere.com/blog/multilingual/).

For the first example, let's search for results to the English query "Who was the first person to win two Nobel prizes" in Arabic. The line of code is the following:

