Commit

(docs)Remove broken links (#118)
lampa-da authored Sep 12, 2024
1 parent a4b84c5 commit e2a4d62
Showing 20 changed files with 30 additions and 28 deletions.
8 changes: 4 additions & 4 deletions cohere-openapi.yaml
@@ -5602,7 +5602,7 @@ paths:
items:
$ref: "#/components/schemas/ChatConnector"
description: |
Accepts `{"id": "web-search"}`, and/or the `"id"` for a custom [connector](https://docs.cohere.com/docs/connectors), if you've [created](https://docs.cohere.com/docs/creating-and-deploying-a-connector) one.
Accepts `{"id": "web-search"}`, and/or the `"id"` for a custom [connector](https://docs.cohere.com/docs/overview-rag-connectors), if you've [created](https://docs.cohere.com/docs/creating-and-deploying-a-connector) one.

When specified, the model's reply will be enriched with information found by querying each of the connectors (RAG).
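
As a quick illustration of how this field is typically populated, here is a hedged sketch using the Cohere Python SDK (the API key placeholder and message text are illustrative, not part of the spec):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

# Ground the reply on web search, or swap in a custom connector id you've created.
response = co.chat(
    message="What is retrieval augmented generation?",
    connectors=[{"id": "web-search"}],
)
print(response.text)
```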

@@ -7407,7 +7407,7 @@ paths:
description: |
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.

-This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token. See [BPE Tokens](/bpe-tokens-wiki) for more details.
+This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token.

Can only be set to `0` if `return_likelihoods` is set to `ALL` to get the likelihood of the prompt.
writeOnly: true
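
A hedged sketch of the `max_tokens=0` plus `return_likelihoods="ALL"` combination described above, using the Python SDK (prompt and key are placeholders):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

# With max_tokens=0 nothing new is generated; return_likelihoods="ALL"
# returns token likelihoods for the prompt itself.
response = co.generate(
    prompt="Hello, world",
    max_tokens=0,
    return_likelihoods="ALL",
)
print(response.generations[0].token_likelihoods)
```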
@@ -13547,7 +13547,7 @@ paths:
tags:
- /connectors
description: Retrieve a connector by ID. See
-['Connectors'](https://docs.cohere.com/docs/connectors) for more
+['Connectors'](https://docs.cohere.com/docs/overview-rag-connectors) for more
information.
operationId: get-connector
x-fern-sdk-method-name: get
@@ -13860,7 +13860,7 @@ paths:
tags:
- /connectors
description: Delete a connector by ID. See
-['Connectors'](https://docs.cohere.com/docs/connectors) for more
+['Connectors'](https://docs.cohere.com/docs/overview-rag-connectors) for more
information.
operationId: delete-connector
x-fern-sdk-method-name: delete
13 changes: 8 additions & 5 deletions fern/docs.yml
@@ -169,7 +169,7 @@ redirects:
destination: /reference/about
permanent: true
- source: /bpe-tokens-wiki
-destination: /docs/tokens
+destination: /docs/tokens-and-tokenizers
permanent: true
- source: /classify-content-mod
destination: /docs/content-moderation-with-classify
@@ -304,7 +304,7 @@ redirects:
destination: /docs/text-classification-with-classify
permanent: true
- source: /text-classification-embeddings
-destination: /docs/text-classification-with-embed
+destination: /page/text-classification-using-embeddings
permanent: true
- source: /text-summarization-example
destination: /docs/text-summarization
@@ -340,7 +340,7 @@ redirects:
destination: /reference/about
permanent: true
- source: /bpe-tokens-wiki
-destination: /docs/tokens
+destination: /docs/tokens-and-tokenizers
permanent: true
- source: /classify-content-mod
destination: /docs/content-moderation-with-classify
@@ -490,10 +490,10 @@ redirects:
destination: /docs/text-classification-with-classify
permanent: true
- source: /text-classification-embeddings
-destination: /docs/text-classification-with-embed
+destination: /page/text-classification-using-embeddings
permanent: true
- source: /text-classification-embeddings/
-destination: /docs/text-classification-with-embed
+destination: /page/text-classification-using-embeddings
permanent: true
- source: /text-summarization-example
destination: /docs/text-summarization
@@ -546,6 +546,9 @@ redirects:
- source: /docs/fine-tuning-with-the-web-ui
destination: /docs/fine-tuning-with-the-cohere-dashboard
permanent: true
+- source: /docs/connectors
+destination: /docs/overview-rag-connectors
+permanent: true

analytics:
segment:
2 changes: 1 addition & 1 deletion fern/pages/api-reference/connectors-1/delete-connector.mdx
@@ -1,7 +1,7 @@
---
title: "Delete a Connector"
slug: "delete-connector"
subtitle: "Delete a connector by ID. See ['Connectors'](/docs/connectors) for more information."
subtitle: "Delete a connector by ID. See ['Connectors'](/docs/overview-rag-connectors) for more information."
hidden: false
createdAt: "Mon Jun 24 2024 13:21:35 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon Jun 24 2024 13:21:37 GMT+0000 (Coordinated Universal Time)"
2 changes: 1 addition & 1 deletion fern/pages/api-reference/connectors-1/get-connector.mdx
@@ -1,7 +1,7 @@
---
title: "Get a Connector"
slug: "get-connector"
subtitle: "Retrieve a connector by ID. See ['Connectors'](/docs/connectors) for more information."
subtitle: "Retrieve a connector by ID. See ['Connectors'](/docs/overview-rag-connectors) for more information."
hidden: false
createdAt: "Mon Jun 24 2024 13:21:35 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon Jun 24 2024 13:21:36 GMT+0000 (Coordinated Universal Time)"
@@ -6,4 +6,4 @@ createdAt: "Fri Aug 05 2022 17:46:00 GMT+0000 (Coordinated Universal Time)"
hidden: false
description: "Use the API to generate completions, distill text into semantically meaningful vectors, and more. Get state-of-the-art natural language processing without the need for expensive supercomputing infrastructure."
---
-Use [Moderate (Beta)](/moderate-reference) to classify harmful text across the following categories: `profane`, `hate speech`, `violence`, `self-harm`, `sexual`, `sexual (non-consenual)`, `harassment`, `spam`, `information hazard (e.g., pii)`. Moderate returns an array containing each category and its associated confidence score. Over the coming weeks, expect performance to improve significantly as we optimize the underlying model.
+Use Moderate (Beta) to classify harmful text across the following categories: `profane`, `hate speech`, `violence`, `self-harm`, `sexual`, `sexual (non-consenual)`, `harassment`, `spam`, `information hazard (e.g., pii)`. Moderate returns an array containing each category and its associated confidence score. Over the coming weeks, expect performance to improve significantly as we optimize the underlying model.
@@ -7,7 +7,7 @@ hidden: false
---
**We're Releasing co.chat() and the Chat + RAG Playground**

-We're pleased to announce that [we've released](/docs/cochat-beta) our `co.chat()` beta! Of particular importance is the fact that the `co.chat()` API is able to utilize [retrieval augmented generation](/docs/retrieval-augmented-generation-rag) (RAG), meaning developers can provide sources of context that inform and ground the model's output.
+We're pleased to announce that we've released our `co.chat()` beta! Of particular importance is the fact that the `co.chat()` API is able to utilize [retrieval augmented generation](/docs/retrieval-augmented-generation-rag) (RAG), meaning developers can provide sources of context that inform and ground the model's output.

This represents a leap forward in the accuracy, verifiability, and timeliness of our generative AI offering. For our public beta, developers can connect `co.chat()` to web search or plain text documents.

@@ -9,7 +9,7 @@ hidden: false

One of the most exciting applications of generative AI is known as ["retrieval augmented generation"](/docs/retrieval-augmented-generation-rag) (RAG). This refers to the practice of _grounding_ the outputs of a large language model (LLM) by offering it resources -- like your internal technical documentation, chat logs, etc. -- from which to draw as it formulates its replies.

-Cohere has made it much easier to utilize RAG in bespoke applications via [Connectors](/docs/connectors). As the name implies, Connectors allow you to _connect_ Cohere's generative AI platform up to whatever resources you'd like it to ground on, facilitating the creation of a wide variety of applications -- customer service chatbots, internal tutors, or whatever else you want to build.
+Cohere has made it much easier to utilize RAG in bespoke applications via [Connectors](/docs/overview-rag-connectors). As the name implies, Connectors allow you to _connect_ Cohere's generative AI platform up to whatever resources you'd like it to ground on, facilitating the creation of a wide variety of applications -- customer service chatbots, internal tutors, or whatever else you want to build.

Our docs cover how to [create and deploy connectors](/docs/creating-and-deploying-a-connector), [how to manage your connectors ](/docs/managing-your-connector), [how to handle authentication](/docs/connector-authentication), and [more](/docs/connector-faqs)!

@@ -15,7 +15,6 @@ As of today, tool use will now be multi-step by default. Here are some resources

- Check out our [multi-step tool use guide](/docs/multi-step-tool-use).
- Experiment with multi-step tool use with [this notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/agents/Tool_Use.ipynb).
-- To update from single-step to multi-step, follow our [migration guide](/page/changes-in-chat-api-and-tool-use).

## We've published additional docs!

2 changes: 1 addition & 1 deletion fern/pages/changelog/release-notes.mdx
@@ -10,7 +10,7 @@ hidden: true
### August 5th, 2022

**Introducing Moderate (Beta)**
-Use [Moderate (Beta)](/moderate-reference) to classify harmful text across the following categories: `profane`, `hate speech`, `violence`, `self-harm`, `sexual`, `sexual (non-consenual)`, `harassment`, `spam`, `information hazard (e.g., pii)`. Moderate returns an array containing each category and its associated confidence score. Over the coming weeks, expect performance to improve significantly as we optimize the underlying model.
+Use Moderate (Beta) to classify harmful text across the following categories: `profane`, `hate speech`, `violence`, `self-harm`, `sexual`, `sexual (non-consenual)`, `harassment`, `spam`, `information hazard (e.g., pii)`. Moderate returns an array containing each category and its associated confidence score. Over the coming weeks, expect performance to improve significantly as we optimize the underlying model.

### July 20th, 2022

@@ -68,7 +68,7 @@ The two basic parts of a RAG workflow are "retrieval" and "augmented generation."

A developer can upload a PDF (to be parsed using LlamaIndex), or retrieve information from Wikipedia (using Langchain’s `WikipediaRetriever`).
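
For context, a minimal sketch of the Wikipedia retrieval path mentioned above (not the app's actual code; the query string is made up and the `wikipedia` package is assumed to be installed):

```python PYTHON
# pip install langchain-community wikipedia
from langchain_community.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.invoke("Large language model")
print(docs[0].metadata["title"])
```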

-You can also optionally configure your web app to utilize vector database retrieval or [Cohere Connectors](/docs/connectors).
+You can also optionally configure your web app to utilize vector database retrieval or [Cohere Connectors](/docs/overview-rag-connectors).

### Model configurations

@@ -250,7 +250,7 @@ create_response = co.finetuning.create_finetuned_model(

## Calling your Chat Model with co.chat()

-Once your model completes training, you can call it via [co.chat()](/docs/cochat-beta) and pass your custom model's `model_id`.
+Once your model completes training, you can call it via [co.chat()](/docs/chat-api) and pass your custom model's `model_id`.

Please note, the `model_id` is the `id` returned by the fine-tuned model object with the `"-ft"` suffix.
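
A hedged sketch of that call (the attribute path on `create_response` is an assumption based on the snippet above, not a confirmed field name):

```python PYTHON
# id returned by the fine-tuned model object, plus the "-ft" suffix
ft_model_id = create_response.finetuned_model.id + "-ft"

response = co.chat(
    message="Classify this ticket: my package never arrived",
    model=ft_model_id,
)
print(response.text)
```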

2 changes: 1 addition & 1 deletion fern/pages/get-started/the-cohere-platform.mdx
@@ -57,7 +57,7 @@ Embeddings enable you to search based on what a phrase _means_ rather than simpl

<img src="../../assets/images//04fe094-Visual_6.png" alt="How a query returns results." />

-Learn more about semantic search [here](/docs/intro-semantic-search).
+Learn more about semantic search [here](https://cohere.com/llmu/what-is-semantic-search).

## Create Fine-Tuned Models with Ease

@@ -18,7 +18,7 @@ Running Cohere Chat with LangChain doesn't require many prerequisites, consult t

### Cohere Chat with LangChain

-To use [Cohere chat](/docs/cochat-beta) with LangChain, simply create a [ChatCohere](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/cohere.py) object and pass in the message or message history. In the example below, you will need to add your Cohere API key.
+To use [Cohere chat](/docs/chat-api) with LangChain, simply create a [ChatCohere](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/cohere.py) object and pass in the message or message history. In the example below, you will need to add your Cohere API key.

```python PYTHON
from langchain_community.chat_models import ChatCohere
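
# The rest of this snippet is collapsed in the diff view. A hedged sketch of the
# pattern described above (API key placeholder; message text made up) might continue:
from langchain_core.messages import HumanMessage

chat = ChatCohere(cohere_api_key="YOUR_API_KEY")
reply = chat.invoke([HumanMessage(content="Say hello in one sentence.")])
print(reply.content)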
@@ -149,7 +149,7 @@ print(citations)

#### Using a Connector

-In this example, we create a generation with a [connector](/docs/connectors) which allows us to get a generation with citations to results from the connector. We use the "web-search" connector, which is available to everyone. But if you have created your own connector in your org you can pass in its id, like so: `rag = CohereRagRetriever(llm=cohere_chat_model, connectors=[{"id": "example-connector-id"}])`
+In this example, we create a generation with a [connector](/docs/overview-rag-connectors) which allows us to get a generation with citations to results from the connector. We use the "web-search" connector, which is available to everyone. But if you have created your own connector in your org you can pass in its id, like so: `rag = CohereRagRetriever(llm=cohere_chat_model, connectors=[{"id": "example-connector-id"}])`

Here's a code sample illustrating how to use a connector:
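
(The sample itself is collapsed in this view; a hedged sketch of the pattern described above, with a placeholder API key, might look like the following.)

```python PYTHON
from langchain_community.chat_models import ChatCohere
from langchain_community.retrievers import CohereRagRetriever

cohere_chat_model = ChatCohere(cohere_api_key="YOUR_API_KEY")

# "web-search" is available to everyone; substitute your own connector id if you have one.
rag = CohereRagRetriever(llm=cohere_chat_model, connectors=[{"id": "web-search"}])
docs = rag.get_relevant_documents("What is LangChain?")

# In this integration the grounded answer is commonly carried by the final document
# (an assumption worth checking against the full docs page).
print(docs[-1].page_content)
```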

@@ -31,7 +31,7 @@ To use Cohere and Haystack you will need:

### Cohere Chat with Haystack

-Haystack’s `CohereChatGenerator` component enables chat completion using Cohere's large language models (LLMs). For the latest information on Cohere Chat [see these docs](/docs/cochat-beta).
+Haystack’s `CohereChatGenerator` component enables chat completion using Cohere's large language models (LLMs). For the latest information on Cohere Chat [see these docs](/docs/chat-api).

In the example below, you will need to add your Cohere API key. We suggest using an environment variable, `COHERE_API_KEY`. Don’t commit API keys to source control!
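
A hedged sketch, assuming Haystack 2.x with the `cohere-haystack` integration installed (the import path and response shape are assumptions):

```python PYTHON
import os
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.cohere import CohereChatGenerator

# The generator reads the key from the environment; set it outside your code in practice.
os.environ["COHERE_API_KEY"] = "YOUR_API_KEY"

generator = CohereChatGenerator()
result = generator.run(messages=[ChatMessage.from_user("What's Natural Language Processing?")])
print(result["replies"][0])
```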

@@ -50,7 +50,7 @@ The plot below shows the clusters that the algorithm returned. It looks to be sp

### Conclusion

-In this chapter, you learned how to cluster a dataset of sentences, and you observed that each cluster corresponds to a particular topic. If you'd like to dive deeper into clustering, feel free to check this more elaborate example on <a target="_blank" href="/docs/clustering-hacker-news-posts"> Clustering Hacker News Posts</a>!
+In this chapter, you learned how to cluster a dataset of sentences, and you observed that each cluster corresponds to a particular topic.

### Original Source

@@ -8,7 +8,7 @@ updatedAt: "Fri Apr 19 2024 02:37:50 GMT+0000 (Coordinated Universal Time)"
---
This chapter uses the same [notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/llmu/Introduction_Text_Embeddings.ipynb) as the previous chapter.

-_Note: This chapter covers the basics of semantic search. If you want to explore this topic further, we have a dedicated [LLMU module on semantic search](/docs/intro-semantic-search)._
+_Note: This chapter covers the basics of semantic search. If you want to explore this topic further, we have a dedicated [LLMU module on semantic search](https://cohere.com/llmu/what-is-semantic-search)._

We deal with unstructured text data on a regular basis, and one of the common needs is to search for information from a vast repository. A common approach is keyword-matching, but the problem with this is that the results are limited to the exact query entered.

@@ -10,7 +10,7 @@ updatedAt: "Wed Apr 03 2024 19:03:06 GMT+0000 (Coordinated Universal Time)"
---
### Introduction

-In this chapter you'll learn how to use embeddings for search. If you'd like to dive deeper into search, please check the [Search Module](/docs/intro-semantic-search) at LLMU.
+In this chapter you'll learn how to use embeddings for search. If you'd like to dive deeper into search, please check the [Search Module](https://cohere.com/llmu/what-is-semantic-search) at LLMU.

### Colab notebook

2 changes: 1 addition & 1 deletion fern/pages/text-embeddings/embeddings.mdx
@@ -46,7 +46,7 @@ calculate_similarity(soup1, london) # 0.16 - not similar!

## The `input_type` parameter

-Cohere embeddings are optimized for different types of inputs. For example, when using embeddings for semantic search, the search query should be embedded by setting `input_type="search_query"` whereas the text passages that are being searched over should be embedded with `input_type="search_document"`. You can find more details and a code snippet in the [Semantic Search guide](/docs/semantic-search). Similarly, the input type can be set to `classification` ([example](/docs/text-classification-with-embed)) and `clustering` to optimize the embeddings for those use cases.
+Cohere embeddings are optimized for different types of inputs. For example, when using embeddings for semantic search, the search query should be embedded by setting `input_type="search_query"` whereas the text passages that are being searched over should be embedded with `input_type="search_document"`. You can find more details and a code snippet in the [Semantic Search guide](/docs/semantic-search). Similarly, the input type can be set to `classification` ([example](/page/text-classification-using-embeddings)) and `clustering` to optimize the embeddings for those use cases.
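
A hedged sketch of the query/document split described above (the model name and example texts are chosen for illustration):

```python PYTHON
import cohere

co = cohere.Client("YOUR_API_KEY")

# Embed the query and the documents with different input_type values.
query_emb = co.embed(
    texts=["What is the capital of France?"],
    model="embed-english-v3.0",
    input_type="search_query",
).embeddings

doc_emb = co.embed(
    texts=["Paris is the capital of France.", "Berlin is the capital of Germany."],
    model="embed-english-v3.0",
    input_type="search_document",
).embeddings
```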

## Multilingual Support

@@ -162,7 +162,7 @@ Not only will we discover that the Backstreet Boys were the more popular band, b

### Connectors

-As an alternative to manually implementing the 3 step workflow, the Chat API offers a 1-line solution for RAG using [Connectors](/docs/connectors). In the example below, specifying the `web-search` connector will generate search queries, use them to conduct an internet search and use the results to inform the model and produce an answer.
+As an alternative to manually implementing the 3 step workflow, the Chat API offers a 1-line solution for RAG using [Connectors](/docs/overview-rag-connectors). In the example below, specifying the `web-search` connector will generate search queries, use them to conduct an internet search and use the results to inform the model and produce an answer.

**Request**
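
(The request body itself is collapsed in this view; a hedged sketch of such a request, with the endpoint URL and header format assumed, is below.)

```python PYTHON
import requests

resp = requests.post(
    "https://api.cohere.com/v1/chat",  # endpoint assumed for illustration
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "message": "Who was more popular: the Backstreet Boys or *NSYNC?",
        "connectors": [{"id": "web-search"}],
    },
)
print(resp.json()["text"])
```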

2 changes: 1 addition & 1 deletion fern/pages/tutorials/cookbooks.mdx
@@ -26,6 +26,6 @@ Here are some of the ones we think are most exciting!
- [A Data Analyst Agent Built with Cohere and Langchain](/page/data-analyst-agent) - Build a data analyst agent with Python and Cohere's Command R+ mode and Langchain.
- [Creating a QA Bot From Technical Documentation](/page/creating-a-qa-bot) - Create a chatbot that answers user questions based on technical documentation using Cohere embeddings and LlamaIndex.
- [Multilingual Search with Cohere and Langchain](/page/multilingual-search) - Perform searches across a corpus of mixed-language documents with Cohere and Langchain.
-- [Using Redis with Cohere](/page/redis-guide) - Learn how to use Cohere's text vectorizer with Redis to create a semantic search index.
+- [Using Redis with Cohere](/docs/redis-and-cohere#building-a-retrieval-pipeline-with-cohere-and-redis) - Learn how to use Cohere's text vectorizer with Redis to create a semantic search index.
- [Wikipedia Semantic Search with Cohere + Weaviate](/page/wikipedia-search-with-weaviate) - Search 10 million Wikipedia vectors with Cohere's multilingual model and Weaviate's public dataset.
- [Long Form General Strategies](/page/long-form-general-strategies) - Techniques to address lengthy documents exceeding the context window of LLMs.
