@@ -47,7 +47,7 @@ print(response)
npm i -s cohere-ai
```
-```typescript
+```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
@@ -103,7 +103,7 @@ public class ChatPost {
go get github.com/cohere-ai/cohere-go/v2
```
-```go
+```go GO
package main
import (
diff --git a/fern/pages/cohere-api/errors.mdx b/fern/pages/cohere-api/errors.mdx
index d18b8aa5..7c9fbe88 100644
--- a/fern/pages/cohere-api/errors.mdx
+++ b/fern/pages/cohere-api/errors.mdx
@@ -4,12 +4,13 @@ slug: "reference/errors"
hidden: false
description: "This document explains how Cohere uses HTTP response codes to indicate the success or failure of an API request, with codes in the 2xx range indicating success, 4xx range indicating errors due to provided information, and 5xx range indicating server errors. It also provides code examples for error handling in different programming languages."
-image: "../../assets/images/72cf813-meta_docs_image_cohere.jpg"
+image: "../../assets/images/72cf813-meta_docs_image_cohere.jpg"
keywords: "RAG, retrieval, augmented, generation, LLM, connectors, connector, langchain"
createdAt: "Tue Sep 06 2022 17:39:07 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Fri Mar 15 2024 04:29:33 GMT+0000 (Coordinated Universal Time)"
---
+
Cohere uses conventional HTTP response codes to indicate the success or failure of an API request. In general:
- Codes in the **2xx** range indicate success.
@@ -37,15 +38,17 @@ Here are code examples for how error handling might look in our SDKs:
print(e.http_status)
print(e.headers)
```
+
```javascript JAVASCRIPT
(async () => {
- const response = await cohere.generate({model: 'invalid-model'});
+ const response = await cohere.generate({ model: "invalid-model" });
if (response.statusCode !== 200) {
- console.log(response.body.message);
+ console.log(response.body.message);
}
})();
```
+
```go GO
_, err := co.Generate(generateOptions)
if err != nil {
@@ -53,4 +56,5 @@ if err != nil {
return
}
```
+
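The convention in the patched errors.mdx page (2xx success, 4xx caller error, 5xx server error) can be captured in a small helper. This is a generic sketch of the status-code categories, not part of the Cohere SDK:

```python
def classify_status(code: int) -> str:
    """Map an HTTP status code to the coarse categories the docs describe."""
    if 200 <= code < 300:
        return "success"       # the request succeeded
    if 400 <= code < 500:
        return "client error"  # a problem with the information provided
    if 500 <= code < 600:
        return "server error"  # something went wrong on Cohere's end
    return "unknown"

# e.g. a 404 from an invalid model name is a client error
print(classify_status(404))  # client error
```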
diff --git a/fern/pages/cookbooks/agent-api-calls.mdx b/fern/pages/cookbooks/agent-api-calls.mdx
index e51ee61d..5c52585a 100644
--- a/fern/pages/cookbooks/agent-api-calls.mdx
+++ b/fern/pages/cookbooks/agent-api-calls.mdx
@@ -165,7 +165,7 @@ response_1 = agent_executor.invoke(
)
```
-````txt title="Output"
+````txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use the regex_extractor tool to extract the codes from the user query.
@@ -202,7 +202,7 @@ The output of the tool is then used to generate the request.
convert_to_json(response_1['output'])
```
-```python title="Output"
+```python OUTPUT
[{'urn': ['urn:75f2b737-06dd-4399-9206-a6c11b65138e'],
'objref': ['GLCMS004AGTCAMIS', 'GLCHL000CGUCHALE'],
'nmgs': ['0000234GLCMS0100ANORAKCAA']}]
@@ -221,7 +221,7 @@ response_2 = agent_executor.invoke(
)
```
-````txt title="Output"
+````txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use the regex_extractor tool to extract the relevant information from the user request.
@@ -256,7 +256,7 @@ The Agent runs the tool to check if any target string was in the query, then it
convert_to_json(response_2['output'])
```
-```python title="Output"
+```python OUTPUT
[{'taxonomies': ['SPORT']}]
```
@@ -273,7 +273,7 @@ response_3 = agent_executor.invoke(
)
```
-````txt title="Output"
+````txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use the regex_extractor tool to extract the codes from the user query. Then, I will create a JSON for each of the key-value pairs in the dictionary.
@@ -310,7 +310,7 @@ Grounded answer: ```json JSON
convert_to_json(response_3['output'])
```
-```python title="Output"
+```python OUTPUT
[{'objref': ['GLBRL0000GACHALE'], 'nmgs': ['0000234GLCZD0000GUREDTOAA']},
{'taxonomies': ['CARS']}]
```
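The agent-api-calls outputs above come from a `regex_extractor` tool that pulls identifiers (urns, objrefs, nmgs) out of the user query. A minimal sketch of such a tool; the pattern below is inferred from the `urn:` values shown in the outputs and is only illustrative — the cookbook's actual regexes may differ:

```python
import re

# Illustrative pattern inferred from the outputs above (UUID-style urns).
PATTERNS = {
    "urn": r"urn:[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
}

def regex_extractor(query: str) -> dict:
    """Return, for each named pattern, every match found in the query."""
    return {key: re.findall(rx, query) for key, rx in PATTERNS.items()}

print(regex_extractor("status of urn:75f2b737-06dd-4399-9206-a6c11b65138e please"))
```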
diff --git a/fern/pages/cookbooks/agent-short-term-memory.mdx b/fern/pages/cookbooks/agent-short-term-memory.mdx
index 004c32f8..41954526 100644
--- a/fern/pages/cookbooks/agent-short-term-memory.mdx
+++ b/fern/pages/cookbooks/agent-short-term-memory.mdx
@@ -117,7 +117,7 @@ a1=agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use python to read the CSV file and extract the column names.
@@ -147,7 +147,7 @@ a2_no_mem = agent_executor.invoke({
})
```
-````txt title="Output"
+````txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3mPlan: I will ask the user for clarification on what data they would like to visualise.
Action: ```json JSON
@@ -182,7 +182,7 @@ a2_mem_ai = agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will copy and paste the code from the previous conversation and add code to plot the revenue numbers.
@@ -214,7 +214,7 @@ a2_mem_ai_hum = agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will copy and paste the code from the previous conversation into this one, and then use it to plot the revenue numbers.
@@ -247,7 +247,7 @@ a3_mem_ai_hum = agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will copy and paste the previous code and make the changes requested by the user. Then I will execute the code to plot the graph with the changes applied.
@@ -312,7 +312,7 @@ Below, an example of the augmented memory object generated by the model. You can
print(augmented_mem_obj_a2)
```
-```txt title="Output"
+```txt OUTPUT
This is the sequence of tools you used in the previous turn:
START TOOL 0 NAME:
@@ -340,7 +340,7 @@ a3_mem_ai_hum_amo = agent_executor.invoke({
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will copy and paste the previous code, and modify the y axis limits as requested.
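The augmented memory object printed in the agent-short-term-memory outputs above ("This is the sequence of tools you used in the previous turn: START TOOL 0 NAME: ...") follows a simple textual serialization of prior tool calls. A hypothetical sketch of how such an object could be assembled — the field markers beyond the two lines shown in the output are guesses, and the cookbook's actual helper may differ:

```python
def build_augmented_memory(tool_calls):
    """Serialize prior tool calls into a textual memory object.

    tool_calls: list of (name, inputs, output) tuples -- a hypothetical shape;
    only the header and "START TOOL i NAME:" markers appear in the output above.
    """
    lines = ["This is the sequence of tools you used in the previous turn:"]
    for i, (name, inputs, output) in enumerate(tool_calls):
        lines.append(f"START TOOL {i} NAME:")
        lines.append(name)
        lines.append(f"START TOOL {i} INPUT:")   # marker name assumed
        lines.append(inputs)
        lines.append(f"START TOOL {i} OUTPUT:")  # marker name assumed
        lines.append(output)
    return "\n".join(lines)
```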
diff --git a/fern/pages/cookbooks/agentic-multi-stage-rag.mdx b/fern/pages/cookbooks/agentic-multi-stage-rag.mdx
index bdc00166..f9318a75 100644
--- a/fern/pages/cookbooks/agentic-multi-stage-rag.mdx
+++ b/fern/pages/cookbooks/agentic-multi-stage-rag.mdx
@@ -55,7 +55,7 @@ from sklearn.metrics.pairwise import cosine_similarity
print('cohere version:', cohere.__version__)
```
-```txt title="Output"
+```txt OUTPUT
cohere version: 5.5.1
```
@@ -339,7 +339,7 @@ output = simple_rag(question1, db)
print(output)
```
-```txt title="Output"
+```txt OUTPUT
top_matched_document 1 Title: Bicycle helmet requirement\nBody: Curre...
Name: combined, dtype: object
There is currently no state law requiring the use of helmets when riding a bicycle. However, some cities and counties do require helmet use.
@@ -356,7 +356,7 @@ Use the provided documents to answer questions about an employee's specific situ
output = cohere_agent(question1, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will search for 'state level law for wearing helmets' in the documents provided and write an answer based on what I find.
@@ -388,7 +388,7 @@ output = simple_rag(question2, db)
print(output)
```
-```txt title="Output"
+```txt OUTPUT
top_matched_document 1 Title: Bicycle helmet requirement\nBody: Curre...
Name: combined, dtype: object
In the state of Washington, there is no law requiring you to wear a helmet when riding a bike. However, some cities and counties do require helmet use, so it is worth checking your local laws.
@@ -407,7 +407,7 @@ Use the provided documents to answer questions about an employee's specific situ
output = cohere_agent(question2, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will search for 'helmet with a bike' and then write an answer.
@@ -505,7 +505,7 @@ Please follow these steps:
output = cohere_agent(question2, preamble2, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will search for 'Orting' and 'bike helmet' to find the relevant information.
diff --git a/fern/pages/cookbooks/agentic-rag-mixed-data.mdx b/fern/pages/cookbooks/agentic-rag-mixed-data.mdx
index 7d8f3efc..345989ba 100644
--- a/fern/pages/cookbooks/agentic-rag-mixed-data.mdx
+++ b/fern/pages/cookbooks/agentic-rag-mixed-data.mdx
@@ -136,7 +136,7 @@ text_elements = [e for e in categorized_elements if e.type == "text"]
print(len(text_elements))
```
-```txt title="Output"
+```txt OUTPUT
14
24
```
@@ -292,7 +292,7 @@ print(final_answer_docs)
chat_history=[{'role':"USER", 'message':query},{'role':"CHATBOT", 'message':f'The final answer is: {final_answer}.' + final_answer_docs}]
```
-```txt title="Output"
+```txt OUTPUT
The charges for services in 2022 were $5,266 million.
The final answer is from the documents below:
@@ -327,7 +327,7 @@ else:
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
RAG is not needed
Final answer:
The result of dividing the charges for services in 2022 by two is $2,633.
@@ -555,7 +555,7 @@ rag_object=RAG_pipeline(paths=["city_ny_popular_fin_report.pdf"])
This function will be deprecated in a future release and `unstructured` will simply use the DEFAULT_MODEL from `unstructured_inference.model.base` to set default model name
-```txt title="Output"
+```txt OUTPUT
PDFs parsed
14
24
@@ -663,7 +663,7 @@ agent_object=react_agent(rag_retriever=rag_object)
step1_response=agent_object.run_agent("what are the charges for services in 2022 and 2023")
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will search for the charges for services in 2022 and 2023.
@@ -695,7 +695,7 @@ AIMessage(content=step1_response['output'])
agent_object.run_agent("what is the mean of the two values",history=chat_history)
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
@@ -714,7 +714,7 @@ Grounded answer: The mean of the two values is 5518 million.[0m
[1m> Finished chain.[0m
```
-````python title="Output"
+````python OUTPUT
{'input': 'what is the mean of the two values',
'preamble': "\n## Task & Context\nYou help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.\n\n## Style Guide\nUnless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.\n\n## Guidelines\nYou are an expert who answers the user's question. \nYou have access to a vectorsearch tool that will use your query to search through documents and find the relevant answer.\nYou also have access to a python interpreter tool which you can use to run code for mathematical operations.\n",
'chat_history': [HumanMessage(content='what are the charges for services in 2022 and 2023'),
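The agent's python-interpreter step in the agentic-rag-mixed-data outputs above reduces to a one-line mean. Taking the 2022 figure retrieved earlier ($5,266 million) and the reported result ($5,518 million), the implied 2023 figure is $5,770 million; the sketch below just reproduces that arithmetic:

```python
from statistics import mean

charges_2022 = 5266  # $5,266 million, from the retrieval step above
charges_2023 = 5770  # implied by the reported mean of 5518

print(mean([charges_2022, charges_2023]))  # 5518
```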
diff --git a/fern/pages/cookbooks/analysis-of-financial-forms.mdx b/fern/pages/cookbooks/analysis-of-financial-forms.mdx
index ca6a7a89..06b21662 100644
--- a/fern/pages/cookbooks/analysis-of-financial-forms.mdx
+++ b/fern/pages/cookbooks/analysis-of-financial-forms.mdx
@@ -49,7 +49,7 @@ co = cohere.Client(COHERE_API_KEY)
```
-```txt title="Output"
+```txt OUTPUT
Enter your Cohere API key: ··········
```
@@ -75,7 +75,7 @@ edgar_10k = documents[0].page_content
nodes = [TextNode(text=document.page_content, id_=f"doc_{i}") for i, document in enumerate(documents)]
```
-```txt title="Output"
+```txt OUTPUT
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
@@ -110,7 +110,7 @@ pipeline = IngestionPipeline(
nodes = pipeline.run(nodes=nodes)
```
-```txt title="Output"
+```txt OUTPUT
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
@@ -245,7 +245,7 @@ response = co.chat(
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Here are the overall revenue numbers for the years 2021, 2022, and 2023 as bullet points:
- 2021: $5,992 million
- 2022: $8,399 million
@@ -287,7 +287,7 @@ def insert_citations(text: str, citations: list[dict]):
print(insert_citations(response.text, response.citations))
```
-```txt title="Output"
+```txt OUTPUT
Here are the overall revenue numbers for the years 2021, 2022, and 2023 as bullet points:
- 2021: $5,992 million [13]
- 2022: $8,399 million [13]
@@ -391,7 +391,7 @@ full_context_price = get_price(r2)
print(f"RAG is {(full_context_price - rag_price) / full_context_price:.0%} cheaper than full context")
```
-```txt title="Output"
+```txt OUTPUT
RAG is 93% cheaper than full context
```
@@ -399,7 +399,7 @@ RAG is 93% cheaper than full context
%timeit get_response(PROMPT, rag=True)
```
-```txt title="Output"
+```txt OUTPUT
14.9 s ± 1.4 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
@@ -407,6 +407,6 @@ RAG is 93% cheaper than full context
%timeit get_response(full_context_prompt, rag=False)
```
-```txt title="Output"
+```txt OUTPUT
22.7 s ± 7.43 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
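The "RAG is 93% cheaper" figure in the analysis-of-financial-forms outputs above is a plain relative difference, `(full_context_price - rag_price) / full_context_price`. A sketch with hypothetical prices chosen to reproduce the 93% figure:

```python
def relative_savings(full_price: float, rag_price: float) -> str:
    """Fraction of the full-context cost saved by RAG, formatted as a percentage."""
    return f"{(full_price - rag_price) / full_price:.0%}"

# Hypothetical prices; only the ratio matters.
print(f"RAG is {relative_savings(1.00, 0.07)} cheaper than full context")
```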
diff --git a/fern/pages/cookbooks/article-recommender-with-text-embeddings.mdx b/fern/pages/cookbooks/article-recommender-with-text-embeddings.mdx
index 1d84a394..bc31d459 100644
--- a/fern/pages/cookbooks/article-recommender-with-text-embeddings.mdx
+++ b/fern/pages/cookbooks/article-recommender-with-text-embeddings.mdx
@@ -38,7 +38,7 @@ We will implement the following steps:
! pip install cohere
```
-```txt title="Output"
+```txt OUTPUT
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting cohere
Downloading cohere-1.3.10-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
@@ -610,7 +610,7 @@ similarity = get_similarity(reading,embeds)
get_recommendations(READING_IDX,similarity,SHOW_TOP)
```
-```txt title="Output"
+```txt OUTPUT
------ You are reading... ------
[ID 71] Article: camera phones are must-haves four times more mobiles with cameras in them will be sold in europe by the end of 2004 than last year says a report from analysts gartner. globally the number sold will reach 159 million an increase of 104%. the report predicts that nearly 70% of all mobile phones ...
diff --git a/fern/pages/cookbooks/basic-multi-step.mdx b/fern/pages/cookbooks/basic-multi-step.mdx
index 90a97db3..79089dc6 100644
--- a/fern/pages/cookbooks/basic-multi-step.mdx
+++ b/fern/pages/cookbooks/basic-multi-step.mdx
@@ -20,7 +20,7 @@ The recommended way to achieve [multi-step tool use with Cohere](https://docs.co
! pip install --quiet langchain langchain_cohere langchain_experimental
```
-```txt title="Output"
+```txt OUTPUT
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m812.8/812.8 kB[0m [31m6.5 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m194.1/194.1 kB[0m [31m4.7 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m1.9/1.9 MB[0m [31m9.6 MB/s[0m eta [36m0:00:00[0m
@@ -81,7 +81,7 @@ You can easily equip your agent with a vector store!
!pip --quiet install faiss-cpu tiktoken
```
-```txt title="Output"
+```txt OUTPUT
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m27.0/27.0 MB[0m [31m41.4 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m1.8/1.8 MB[0m [31m58.3 MB/s[0m eta [36m0:00:00[0m
[?25h
@@ -218,7 +218,7 @@ response['output']
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will search for tips on writing an essay and fun facts about the Roman Empire.
@@ -407,7 +407,7 @@ response['output']
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
First, I will calculate the result of the random operation between 10 and 20. Then, I will search for fun facts about that number and its prime factors.
@@ -457,7 +457,7 @@ response['output']
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will respond to the user's greeting.
@@ -489,7 +489,7 @@ response = agent_executor.invoke({
response['output']
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will search for the company that was founded as Sound of Music. Then, I will search for the year it went public. Finally, I will search for its stock price in 2000 and 2010.
@@ -547,7 +547,7 @@ response = agent_executor.invoke({
response['output']
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will search for 'Oracle CRM offering' and relay the information I find to the user.
diff --git a/fern/pages/cookbooks/basic-rag.mdx b/fern/pages/cookbooks/basic-rag.mdx
index 589b97b9..8f696e8b 100644
--- a/fern/pages/cookbooks/basic-rag.mdx
+++ b/fern/pages/cookbooks/basic-rag.mdx
@@ -30,7 +30,7 @@ In practice, you would typically do RAG on much longer text, that doesn't fit in
%pip install "cohere<5" --quiet
```
-```txt title="Output"
+```txt OUTPUT
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m52.8/52.8 kB[0m [31m1.6 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m3.1/3.1 MB[0m [31m33.6 MB/s[0m eta [36m0:00:00[0m
[?25h
@@ -47,7 +47,7 @@ co = cohere.Client(API_KEY)
import wikipedia
```
-```txt title="Output"
+```txt OUTPUT
Preparing metadata (setup.py) ... [?25l[?25hdone
Building wheel for wikipedia (setup.py) ... [?25l[?25hdone
```
@@ -58,7 +58,7 @@ text = article.content
print(f"The text has roughly {len(text.split())} words.")
```
-```txt title="Output"
+```txt OUTPUT
The text has roughly 5323 words.
```
@@ -73,7 +73,7 @@ We index the document in a vector database. This requires getting the documents,
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
-```txt title="Output"
+```txt OUTPUT
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m256.9/256.9 kB[0m [31m6.6 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m66.6/66.6 kB[0m [31m8.2 MB/s[0m eta [36m0:00:00[0m
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m138.5/138.5 kB[0m [31m14.5 MB/s[0m eta [36m0:00:00[0m
@@ -93,7 +93,7 @@ chunks = [c.page_content for c in chunks_]
print(f"The text has been broken down in {len(chunks)} chunks.")
```
-```txt title="Output"
+```txt OUTPUT
The text has been broken down in 91 chunks.
```
@@ -113,7 +113,7 @@ embeddings = response.embeddings.float
print(f"We just computed {len(embeddings)} embeddings.")
```
-```txt title="Output"
+```txt OUTPUT
We just computed 91 embeddings.
```
@@ -154,7 +154,7 @@ query_embedding = response.embeddings.float[0]
print("query_embedding: ", query_embedding)
```
-```txt title="Output"
+```txt OUTPUT
query_embedding: [-0.068603516, -0.02947998, -0.06274414, -0.015449524, -0.033294678, 0.0056877136, -0.047210693, 0.04714966, -0.024871826, 0.008148193, 0.0770874, 0.023880005, -0.058685303, -0.052520752, 0.012832642, 0.024398804, 0.0053215027, 0.035491943, 0.02961731, -0.0069847107, 0.01083374, -0.0011358261, -0.002199173, 0.018417358, 0.027389526, -0.002691269, -0.026535034, 0.015197754, 0.024368286, 0.03729248, 0.0057754517, -0.02229309, -0.014694214, 0.019989014, -0.0036315918, -0.013793945, 0.02835083, 0.006011963, 0.011428833, 0.008682251, 0.046142578, -0.040039062, -0.032196045, -0.002653122, -0.012580872, -0.0041618347, 0.03111267, -0.016799927, 0.014801025, -0.00030636787, -0.033050537, 0.033966064, -0.016021729, -0.025009155, -0.007534027, -0.017074585, 0.008415222, -0.10620117, 0.019195557, -0.015686035, -0.0043182373, -0.045440674, 0.05404663, 0.030776978, -0.014129639, -0.01499939, -0.007286072, 0.009933472, 0.06390381, 0.02444458, -0.010345459, 0.041931152, 0.032989502, -0.04522705, 0.056610107, 0.0068893433, -0.008911133, 0.012489319, 0.01675415, 0.020065308, 0.018753052, 0.022659302, -0.051849365, -0.04925537, 0.046325684, -0.005268097, 0.0026874542, -0.036712646, 0.009437561, -0.0037841797, -0.01473999, -0.034179688, -0.0011606216, 0.05026245, 0.0020771027, -0.016021729, -0.0044898987, 0.04168701, -0.015205383, 0.019210815, -0.012374878, -0.031311035, 0.03111267, -0.040100098, -0.016479492, 0.020446777, 0.010192871, 0.0037841797, -0.0023765564, 0.015220642, -0.016571045, -0.006454468, 0.037384033, -0.044555664, -0.008262634, 0.019546509, 0.009460449, 0.014701843, 0.02658081, -0.02078247, 0.015571594, 0.013153076, -0.010375977, 0.047912598, 0.005393982, -0.007911682, -0.019378662, 0.023529053, -0.0033550262, -0.04598999, -0.0052871704, 0.040252686, 0.011375427, 0.01550293, -0.004508972, 0.006515503, 0.003370285, -0.022766113, 0.00062561035, -0.0007596016, -0.0015277863, 0.0149002075, 0.061401367, 8.261204e-05, 0.06359863, -0.01537323, 0.007446289, 
0.018814087, 0.02507019, 0.024215698, 0.006122589, 0.005886078, -0.03829956, 0.029037476, 0.07720947, 0.016921997, 0.022109985, 0.005958557, 0.028793335, 0.019485474, 0.015174866, 0.026153564, 0.032318115, 0.034210205, 0.027145386, -0.019515991, -0.018661499, 0.020477295, 0.008598328, -0.06573486, -0.037109375, 0.04043579, 0.030471802, -0.0010843277, 0.009757996, 0.026947021, 0.037017822, -0.018234253, -0.0115356445, 0.099365234, 0.027816772, -0.019927979, 0.0020961761, 0.013198853, -0.019073486, 2.7656555e-05, 0.041259766, 0.029510498, -0.016204834, 0.028137207, 0.039489746, 0.034698486, -0.03918457, -0.029418945, 0.02041626, 0.0073432922, -0.018569946, -0.009849548, 0.002861023, 0.030319214, -0.012886047, 0.014671326, -0.035827637, 0.007247925, -0.027709961, -0.022079468, 0.0012960434, 0.015426636, -0.01725769, 0.01525116, 0.025360107, -0.0077400208, -0.039916992, 0.029037476, -0.011154175, 0.007736206, -0.041748047, 0.05343628, 0.007286072, 0.0435791, 0.034301758, -0.047210693, 0.03552246, -0.015327454, 0.029922485, -0.018859863, 0.013053894, -0.028060913, 0.07757568, -0.020462036, 0.070739746, -0.010223389, 0.03604126, 0.02758789, -0.023284912, 0.012184143, 0.029144287, 0.023880005, -0.019378662, -0.0051116943, 0.0048675537, 0.01864624, -0.04397583, -0.007598877, 0.0713501, 0.0115737915, 0.002922058, 0.011619568, 0.017364502, 0.031921387, -0.0019664764, -0.008575439, 0.003484726, -0.09466553, 0.03475952, 0.026611328, -0.039520264, -0.0104522705, -0.005443573, -0.008392334, 0.012908936, 0.0043792725, -0.002456665, -0.028396606, -0.02027893, -0.0005569458, 0.027786255, 0.03427124, -0.0062332153, -0.018203735, 0.019241333, 0.07244873, -0.0028057098, 0.01234436, -0.0018787384, -0.027496338, 0.0015287399, -0.004032135, -0.013748169, -0.01878357, 0.0018053055, -0.01159668, 0.028213501, 0.004776001, 0.042388916, 0.0024280548, 0.017471313, -0.038085938, 0.026321411, 0.02973938, 0.06213379, 0.006401062, 0.036102295, -0.028121948, -0.00869751, -0.016693115, 0.029190063, 
0.016784668, -0.008628845, 0.0039634705, -0.0035381317, 0.019500732, 0.025009155, -0.04547119, -0.003572464, 0.05215454, 0.067871094, -0.04257202, -0.02293396, -0.027175903, 0.05340576, 0.019226074, 0.039978027, 0.056121826, -0.028320312, -0.020217896, -0.035003662, 0.03225708, 0.028656006, 0.062347412, 0.12915039, -0.0137786865, 0.0022201538, -0.057434082, -0.04397583, -0.049865723, -0.013160706, -0.03353882, 0.006427765, -0.014823914, -0.008201599, -0.036346436, -0.037353516, -0.010528564, -0.015930176, -0.027572632, 0.0074272156, 0.004547119, -0.024414062, -0.018859863, -0.020095825, 0.029632568, -0.00067043304, -0.044036865, -0.0043411255, -0.005256653, -0.019195557, 0.022262573, -0.00020956993, -0.013877869, -0.011108398, -0.020324707, -0.015808105, -0.025039673, -0.009498596, 0.05090332, 0.0046195984, -0.017150879, 0.04309082, -0.029067993, 0.002670288, -0.00026249886, -0.032409668, -0.053100586, 0.012481689, -0.014633179, 0.0013475418, -0.034332275, 0.038330078, 0.014892578, -0.046936035, 0.021591187, -0.020385742, -0.0052604675, 0.02796936, 0.0014333725, 0.012077332, -0.0118255615, -0.005569458, 0.008491516, 0.009841919, 0.0031318665, -0.003408432, -0.007144928, 0.040374756, -0.0038928986, 0.005279541, -0.008415222, 0.031707764, 0.0140686035, -0.015029907, -0.02810669, -0.0078125, -0.030853271, -0.03201294, 0.021316528, -0.036193848, -0.0423584, 0.0072784424, 0.014801025, 0.0019607544, -0.012367249, -0.009056091, -0.021438599, -0.02645874, 0.038726807, -0.007549286, 0.0049591064, 0.019012451, 0.017791748, -0.009185791, 0.04006958, 0.003107071, -0.0075302124, -0.010375977, -0.009246826, -0.02130127, -0.0056762695, -0.0076789856, 0.010009766, -0.010536194, 0.041107178, 0.0021133423, 0.029891968, 0.01626587, 0.042236328, -0.02784729, -0.032836914, 0.0317688, 0.045715332, 0.000116825104, 0.028030396, 0.007205963, 0.012512207, -0.035583496, -0.048034668, -0.023529053, -0.04953003, 0.0345459, -0.048339844, -0.060272217, -0.004512787, 0.04425049, 0.0076141357, 
0.029510498, 0.007396698, 0.003353119, -0.038726807, 0.07183838, -0.026901245, -0.023529053, -0.038085938, 0.068725586, 0.018096924, -0.013534546, 0.05883789, -0.016113281, 0.017944336, 0.041046143, 0.022918701, 0.036499023, 0.015296936, -0.04916382, 0.0075683594, -0.011390686, 0.009735107, -0.0070152283, 0.003129959, -0.032562256, 0.0003478527, -0.0036640167, -0.006893158, -0.016098022, -0.034332275, 0.037750244, -0.010269165, 0.016494751, -0.02394104, 0.03753662, -0.022644043, -0.0008234978, 0.001001358, -0.048217773, 0.04989624, 0.0078125, 0.0044937134, 0.027038574, 0.04736328, -0.02973938, -0.011726379, 0.01348114, 0.021408081, 0.00844574, -0.03741455, -0.015686035, -0.040893555, 0.001452446, -0.025405884, 0.07348633, 0.038238525, -0.019958496, 0.023071289, -0.016403198, -0.08105469, 0.0071029663, -0.019088745, 5.8174133e-05, -0.005569458, 0.01399231, 0.02255249, 0.011222839, 0.00028824806, 0.0066184998, 0.0017499924, -0.009864807, -0.0115737915, 0.053100586, 0.0065231323, 0.001865387, -0.026428223, 0.03692627, 0.025390625, 0.022613525, 0.018722534, 0.007675171, -0.03439331, 0.041625977, -0.01789856, -0.041046143, 0.0051460266, 0.04144287, 0.048553467, 0.054595947, -0.01108551, -0.033935547, -0.026275635, -0.0118255615, -0.021362305, -0.009841919, -0.00724411, 0.028900146, 0.009887695, -0.023803711, 0.016311646, 0.018798828, -0.03668213, 0.046844482, 0.010696411, -0.014717102, -0.008110046, -0.004589081, -0.0028076172, -0.050811768, -0.017196655, -0.03491211, 0.0074005127, -0.038909912, 0.032440186, -0.034362793, -0.008682251, 0.032928467, -0.04626465, -0.009666443, 0.018951416, 0.031951904, -0.003791809, 0.02015686, -0.05532837, -0.005683899, -0.00054216385, -0.0034332275, 0.008659363, 0.02130127, -0.038879395, -0.0033397675, -0.03866577, -0.0049934387, 0.017944336, 0.001496315, 0.019485474, -0.004348755, 0.00046491623, 0.0007157326, 0.035614014, -0.027694702, 0.03692627, -0.008491516, 0.0524292, -0.016662598, -0.0017795563, -0.021575928, -0.018753052, 
-0.049346924, -0.06652832, 0.04272461, 0.03186035, 0.0011978149, 0.03463745, 0.024002075, 0.02607727, 0.020446777, 0.0256958, 0.026855469, 0.0074005127, -0.067993164, 0.017944336, -0.0039482117, 0.05496216, -0.041412354, 0.014175415, 0.02444458, -0.026412964, 0.057403564, -0.026779175, 0.023254395, 0.03945923, 0.033569336, -0.030258179, -0.039093018, -0.036468506, 0.017105103, 0.009635925, 0.025497437, 0.04156494, -0.02571106, -0.0010414124, -0.005630493, -0.016448975, -0.026733398, 0.001326561, -0.042022705, 0.0012521744, -0.041259766, -0.12182617, -0.03857422, 0.12548828, -0.005947113, -0.020736694, -0.0033855438, 0.03778076, -0.033813477, 0.038970947, 0.003921509, 0.011810303, 0.031982422, -0.032562256, -0.002653122, -0.025009155, -0.03805542, -0.016998291, 0.018173218, 0.0158844, 0.0011739731, 0.048217773, -0.020401001, 0.044708252, -0.017318726, 0.014457703, -0.041809082, 0.010543823, 0.041931152, 0.076293945, -0.054779053, 0.060272217, -0.046936035, 0.02949524, 0.00554657, 0.041534424, -0.013046265, -0.056152344, 0.010406494, 0.02973938, -0.023727417, -0.022476196, -0.024734497, -0.013168335, 0.060424805, 0.011787415, 0.018997192, -0.043426514, -0.00077724457, -0.010154724, 0.017150879, -0.01171875, -0.022476196, 0.0034255981, -0.0026454926, 0.004837036, -0.0043296814, 0.02619934, -0.021560669, -0.039733887, -0.022415161, -0.06817627, -0.023223877, -0.018585205, -0.015319824, 0.012588501, 0.0064353943, -0.013748169, 0.043304443, 0.002626419, -0.029373169, -0.016784668, -0.026184082, 0.05847168, 0.034179688, 0.03842163, -0.05493164, -0.017486572, 0.016540527, 0.03164673, 0.089904785, 0.013534546, -0.07684326, -0.024108887, 0.07434082, 0.030395508, 0.007091522, 0.07373047, 0.012527466, -0.010856628, -0.01828003, -0.045196533, 0.00065279007, -0.0637207, 0.010726929, 0.023880005, -0.0030708313, -0.012298584, 0.027236938, -0.04928589, 0.023071289, 0.008674622, -0.023529053, -0.015838623, -0.010543823, 0.012168884, 0.014854431, -0.05834961, -0.06088257, 
-0.012313843, 0.035461426, 0.02027893, 0.019348145, -0.014602661, -0.02104187, -0.0309906, 0.001405716, -0.019973755, -0.00157547, -0.003944397, 0.0009326935, -0.02078247, -0.015731812, -0.044433594, 0.03390503, 0.057159424, 0.018585205, -0.023895264, -0.0057029724, 0.0049552917, 0.013412476, 0.022399902, 0.010154724, 0.0519104, 0.06591797, 0.018341064, 0.012161255, -0.05810547, -0.043304443, -0.031173706, 0.0023860931, -0.003944397, 0.11425781, -0.031036377, 0.019989014, -0.038635254, -0.025939941, 0.035064697, 0.041168213, 0.03161621, -0.069885254, -0.04537964, 0.028945923, -0.023162842, 0.019226074, -0.028442383, 0.015594482, -0.019256592, -0.0046463013, 0.034240723, 0.009124756, 0.05718994, 0.031219482, 0.02154541, 0.009590149, 0.00076818466, 0.04849243, -0.029129028, -0.03375244, -0.023391724, -0.028381348, -0.029708862, -0.0132369995, 0.010353088, 0.020263672, -0.030807495, 0.01007843, -0.03704834, 0.023376465, -0.03665161, 0.03741455, 0.015144348, 0.057281494, 0.03137207, 0.048431396, 0.021194458, 0.008110046, -0.03540039, -0.015312195, 0.022384644, 0.0065956116, 0.008056641, 0.0018348694, -0.009246826, 0.030380249, 0.0003862381, 0.0051841736, 0.04486084, 0.017807007, 0.0026130676, 0.07977295, 0.05419922, 0.062194824, 0.02633667, 0.024841309, -0.041625977, -0.005897522, 0.04031372, -0.055908203, 0.0026226044, -0.05340576, -0.05496216, 0.011474609, -0.006954193, -0.013122559, 0.019714355, -0.07159424, 0.031173706, 0.0034255981, -0.0034103394, 0.0440979, 0.011779785, -0.007827759, -0.03173828, -0.020950317, -0.030166626, -0.035308838, 0.030792236, 0.04525757, -0.028701782, -0.011100769, -0.02331543, -0.0357666, -0.025680542, 0.0011911392, 0.01940918, 0.05706787, 0.028381348, 0.007133484, -0.07733154, -0.007686615, 0.03869629, 0.0066833496, 0.008842468, 0.03439331, -0.014282227, 0.0357666, -0.004737854, -0.039794922, -0.0070381165, 0.02670288, 0.0107421875, 0.016189575, -0.06555176, -0.0138549805, 0.0008363724, -0.016693115, 0.006904602, -0.020263672, 
-0.030426025, 0.008453369, -0.046173096, -0.01802063, -0.013595581, -0.0044288635, -0.0039978027, -0.0044898987, 0.0007619858, 0.003921509, 0.0053977966, 0.020385742, -0.012329102, -0.023803711, -0.0057525635, 0.038330078, -0.014549255, -0.06298828, -0.047607422, 0.039245605, -0.06781006, -0.035217285, -0.009056091, 0.019927979, -0.003932953, -0.020309448, -0.017044067, 0.018127441, -8.624792e-05, -0.043182373, 0.009590149, 0.035308838, 0.031951904, 0.0011615753, -0.042022705, 0.079956055, 0.026687622, 0.013542175, -0.0074157715, -0.00983429, -0.0022563934, 0.07373047, 0.059387207, 0.03488159, 0.0071372986, -0.06427002, -0.0546875, -0.02482605, 0.11071777, -0.021072388, 0.01626587, -0.049713135, 0.061553955, -0.016860962, 0.051971436, -0.012962341, -0.0011711121, -0.014198303, -0.0061149597, -0.005836487, 0.00022387505, -0.027618408, 0.019836426, 0.009933472, 0.02368164, -0.020309448, -0.0049591064, -0.008628845, -0.03253174, -0.017684937, 0.02468872, -0.0023498535, 0.01448822, 0.061920166, 0.031707764, -0.0026416779, -0.040985107, -0.06335449, -0.036071777, 0.05404663, -0.0044136047, -0.0146102905, -0.0033416748, 0.028671265, -0.012771606, -0.0016565323, -0.0038909912, -0.02407837, -0.009857178, 0.0014467239, -0.008720398, -0.006011963, 0.032073975, -0.033325195, 0.014862061, -0.017227173, -0.018753052, -0.0060424805, 0.022567749, -0.017654419, -0.017562866, -0.07244873, -0.0881958, 0.050476074, 0.02609253, -0.032409668, 0.07458496, 0.009399414, 0.009117126, -0.031051636, -0.03451538, -0.004219055, -0.05718994, 0.020080566, -0.025421143, -0.010948181, 0.06341553, -0.009231567, -0.021697998, -0.009719849, 0.012802124, -0.020370483, 0.0034389496, 0.018859863, -0.025680542, 0.0013141632, 0.068603516, -0.021026611, 0.021881104, -0.0395813, -0.0019073486, 0.0056037903, -0.032348633]
```
@@ -180,7 +180,7 @@ for t in top_chunks_after_retrieval:
print("== " + t)
```
-```txt title="Output"
+```txt OUTPUT
similarity scores: [0.6953257882233425, 0.3713410510180273, 0.46501499776898964, 0.5448546916785195, 0.4014738351361969, 0.3231420292334584, 0.3179003053384008, 0.42799691553367775, 0.18882594531435784, 0.36868801306504106, 0.3404040737300553, 0.3852837621219358, 0.2600249419491577, 0.3723244353775111, 0.3631492691137214, 0.47574774051439606, 0.40415422750911745, 0.4149923346201023, 0.5014741934381444, 0.3549433331883204, 0.32072714802512714, 0.14770872479410424, 0.585277816615252, 0.6999636953772764, 0.7722295084104617, 0.4895347049465806, 0.5170096485954725, 0.7137817366881455, 0.5224900699612323, 0.5914632581598285, 0.2657897083381463, 0.6462342489537262, 0.6317222315431096, 0.5982303530756702, 0.5138265091630297, 0.41385121172723643, 0.4293941094100836, 0.4173182546482015, 0.42621236706314475, 0.4428474375355954, 0.35058541576139896, 0.3578709652019502, 0.3930157841938308, 0.3564608202848675, 0.23016661533167404, 0.4933441863421645, 0.41037089239250985, 0.39993051898770193, 0.3119997063424595, 0.2677143729521374, 0.3700866951454496, 0.46727994925061545, 0.4393343280374425, 0.42111290117172434, 0.4485349189824285, 0.4710573736688592, 0.24169956903740436, 0.3840442910806355, 0.14284631817675886, 0.5381588054138154, 0.431113882725076, 0.5189547209048608, 0.3950667224233914, 0.32429768756510174, 0.4370358125161736, 0.18727062244331039, 0.5206375682478743, 0.5175737635701252, 0.5326043981628349, 0.45586923626994363, 0.21667338125532032, 0.16459878595959285, 0.22236726481673777, 0.5187259906958807, 0.2884444442338396, 0.286407544555338, 0.2313840178160818, 0.2057731158935257, 0.5973876998341746, 0.42904243401792086, 0.4081217538000544, 0.5330523063972133, 0.45080561486977405, 0.414703452285757, 0.2569028899107211, 0.5087916806929323, 0.14159076456040554, 0.46505779053352697, 0.556364222182839, 0.35464351181035236, 0.40174477023626]
Here are the indices of the top 10 chunks after retrieval: [24 27 23 0 31 32 33 78 29 22]
Here are the top 10 chunks after retrieval:
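The similarity scores and top-10 indices in this output come from comparing the query embedding against each chunk embedding. A rough stdlib-only sketch of that ranking step (toy 3-dimensional vectors standing in for the 1024-dimensional embeddings returned by `co.embed`):

```python
import math

def cosine_similarity(a, b):
    # dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# toy query and chunk embeddings (real ones are 1024-dim floats)
query_emb = [0.1, 0.3, 0.5]
chunk_embs = [
    [0.1, 0.3, 0.5],  # identical to the query
    [0.5, 0.1, 0.0],
    [0.2, 0.4, 0.4],
]

scores = [cosine_similarity(query_emb, c) for c in chunk_embs]
top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:2]
print("similarity scores:", scores)
print("indices of the top chunks:", top_k)
```

The cookbook does the same thing with numpy over the full embedding matrix; the logic is identical.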
@@ -216,7 +216,7 @@ for t in top_chunks_after_rerank:
print("== " + t)
```
-```txt title="Output"
+```txt OUTPUT
Here are the top 3 chunks after rerank:
== Dune: Part Two is a 2024 American epic science fiction film directed and produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021) adapts the 1965 novel Dune by Frank Herbert and follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier Bardem reprise their roles from the first
== stunt coordinator. Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that "there's a logical place to stop the [first] movie before the book is over".In December 2020,
@@ -254,7 +254,7 @@ print("Final answer:")
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Final answer:
Here are the key crew members involved in 'Dune: Part Two':
@@ -290,7 +290,7 @@ for cite in response.citations:
print(cite)
```
-```txt title="Output"
+```txt OUTPUT
Citations that support the final answer:
{'start': 63, 'end': 79, 'text': 'Denis Villeneuve', 'document_ids': ['doc_0']}
{'start': 81, 'end': 102, 'text': 'director and producer', 'document_ids': ['doc_0']}
@@ -365,7 +365,7 @@ print(insert_citations_in_order(response.text, response.citations))
```
-```markdown title="Output"
+```markdown OUTPUT
Here are the key crew members involved in 'Dune: Part Two':
- **Denis Villeneuve**[1]: **director and producer**[1]
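The `insert_citations_in_order` helper itself is not shown in this hunk. A hypothetical reimplementation, assuming each citation dict carries `start`/`end` character offsets and `document_ids` as in the citation output above, might look like:

```python
def insert_citations_in_order(text, citations):
    """Insert numbered [n] markers after each cited span, numbering
    documents in order of first use. Assumes 'start'/'end' are
    character offsets into `text` (as in the chat citation output)."""
    doc_numbers = {}  # document id -> citation number
    offset = 0        # how much the text has grown so far
    result = text
    for cite in sorted(citations, key=lambda c: c["start"]):
        for doc_id in cite["document_ids"]:
            doc_numbers.setdefault(doc_id, len(doc_numbers) + 1)
        marker = "".join(f"[{doc_numbers[d]}]" for d in cite["document_ids"])
        end = cite["end"] + offset
        result = result[:end] + marker + result[end:]
        offset += len(marker)
    return result

text = "Denis Villeneuve is the director and producer."
citations = [
    {"start": 0, "end": 16, "text": "Denis Villeneuve", "document_ids": ["doc_0"]},
    {"start": 24, "end": 45, "text": "director and producer", "document_ids": ["doc_0"]},
]
annotated = insert_citations_in_order(text, citations)
print(annotated)
```

This is a sketch of the idea, not the cookbook's exact helper.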
diff --git a/fern/pages/cookbooks/calendar-agent.mdx b/fern/pages/cookbooks/calendar-agent.mdx
index a111f52c..6ece7378 100644
--- a/fern/pages/cookbooks/calendar-agent.mdx
+++ b/fern/pages/cookbooks/calendar-agent.mdx
@@ -121,7 +121,7 @@ while res.tool_calls:
print(res.text) # print the final answer
```
-```txt title="Output"
+```txt OUTPUT
I will check the user's calendar for today after 3pm and book an hour-long appointment in the first available slot.
Listing events: [{"start": "14:00", "end": "15:00"}, {"start": "15:00", "end": "16:00"}, {"start": "17:00", "end": "18:00"}]
The user has events scheduled from 2pm to 4pm and from 5pm to 6pm. I will book an hour-long appointment from 4pm to 5pm.
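The transcript above shows the agent reasoning over the busy slots to find a free hour. A toy, model-free sketch of that slot-finding logic (the helper names and the zero-padded `HH:MM` string comparison are assumptions, not the cookbook's actual tools, and the sketch does not verify the gap is a full hour):

```python
def list_calendar_events():
    # stands in for the cookbook's calendar tool
    return [{"start": "14:00", "end": "15:00"},
            {"start": "15:00", "end": "16:00"},
            {"start": "17:00", "end": "18:00"}]

def first_free_hour(events, after="15:00"):
    # walk the busy slots in order; zero-padded HH:MM strings compare correctly
    cursor = after
    for start, end in sorted((e["start"], e["end"]) for e in events):
        if start > cursor:
            break  # found a gap starting at `cursor`
        cursor = max(cursor, end)
    return cursor

slot = first_free_hour(list_calendar_events())
print(f"Booking an hour-long appointment starting at {slot}")
```

In the real cookbook the model performs this reasoning itself and calls a booking tool with the chosen slot.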
diff --git a/fern/pages/cookbooks/chunking-strategies.mdx b/fern/pages/cookbooks/chunking-strategies.mdx
index 62dbfdc7..19afe722 100644
--- a/fern/pages/cookbooks/chunking-strategies.mdx
+++ b/fern/pages/cookbooks/chunking-strategies.mdx
@@ -50,7 +50,7 @@ co_api_key = getpass("Enter Cohere API key: ")
co = cohere.Client(api_key=co_api_key)
```
-```txt title="Output"
+```txt OUTPUT
Enter Cohere API key: ··········
```
@@ -165,7 +165,7 @@ text = '\n\n'.join([div.get_text() for div in target_divs])
print(text[:500])
```
-```txt title="Output"
+```txt OUTPUT
Length of the script: 385
Example of processed text:
@@ -237,7 +237,7 @@ response = response
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Number of documents: 5
An unknown speaker mentions Jonathan Nolan in a conversation about the creators of Westworld. They mention that Jonathan Nolan and Lisa Joy Nolan are friends of theirs, and that they have invited them to visit the lab.
```
@@ -248,7 +248,7 @@ A notable feature of [`co.chat`](https://docs.cohere.com/reference/chat) is its
print(insert_citations(response.text, response.citations))
```
-```txt title="Output"
+```txt OUTPUT
An unknown speaker [0] mentions Jonathan Nolan in a conversation about the creators of Westworld. [0] They mention that Jonathan Nolan and Lisa Joy Nolan [0] are friends [0] of theirs, and that they have invited them to visit the lab. [0]
```
@@ -258,7 +258,7 @@ Indeed, by printing the cited chunk, we can validate that the text was divided s
print(source_nodes[0])
```
-```python title="Output"
+```python OUTPUT
{'text': "Yeah. The creators of Westworld, Jonathan Nolan, Lisa Joy Nolan, are friends -- are all friends of mine, actually. And I invited them to come see the lab and, like, well, come see it, hopefully soon. It's pretty well -- especially the sort of subsystem test stands where you've just got like one leg on a test stand just doing repetitive exercises and one arm on a test stand pretty well.\n\nYeah.\n\nUnknown speaker\n\nWe're not entering Westworld anytime soon."}
```
@@ -288,7 +288,7 @@ response = response
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Number of documents: 5
Elon Musk mentions Jonathan Nolan. Musk is the CEO and Product Architect of the lab that resembles the set of Westworld, a show created by Jonathan Nolan and Lisa Joy Nolan.
```
@@ -299,7 +299,7 @@ Again, we can print the text along with the citations.
print(insert_citations(response.text, response.citations))
```
-```txt title="Output"
+```txt OUTPUT
Elon Musk [0] mentions Jonathan Nolan. Musk is the CEO and Product Architect [0] of the lab [0] that resembles the set of Westworld [0], a show created by Jonathan Nolan [0] and Lisa Joy Nolan. [0]
```
@@ -309,7 +309,7 @@ And investigate the chunks which were used as context to answer the query.
source_nodes[0]
```
-```python title="Output"
+```python OUTPUT
{'text': "Yeah, not the best reference.\n\nElon Musk -- Chief Executive Officer and Product Architect\n\nYeah. The creators of Westworld, Jonathan Nolan, Lisa Joy Nolan, are friends -- are all friends of mine, actually. And I invited them to come see the lab and, like, well, come see it, hopefully soon. It's pretty well -- especially the sort of subsystem test stands where you've just got like one leg on a test stand just doing repetitive exercises and one arm on a test stand pretty well.\n\nYeah."}
```
@@ -347,7 +347,7 @@ text_custom = '\n'.join(text_custom)
print(text_custom[:500])
```
-```txt title="Output"
+```txt OUTPUT
HTML text
[Martin Viecha
, Good afternoon, everyone, and welcome to Tesla's fourth-quarter 2023 Q&A webcast. My name is Martin Viecha, VP of investor relations, and I'm joined today by Elon Musk, Vaibhav Taneja, and a number of other executives. Our Q4 results were announced at about 3 p.m. Central Time in the update that we published at the same link as this webcast.
, During this call, we will discuss our business outlook and make forward-looking statements. These comments are based on our predictions and expectations as of today. Actual events or results could differ materially due to a number of risks and uncertainties, including those mentioned in our most recent filings with the SEC. [Operator instructions] But before we jump into Q&A, Elon has some opening remarks.
]
-------------------
@@ -390,7 +390,7 @@ response = response
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
WARNING:langchain_text_splitters.base:Created a chunk of size 5946, which is longer than the specified 1000
WARNING:langchain_text_splitters.base:Created a chunk of size 4092, which is longer than the specified 1000
WARNING:langchain_text_splitters.base:Created a chunk of size 1782, which is longer than the specified 1000
@@ -413,7 +413,7 @@ Below we validate the answer using citations.
print(insert_citations(response.text, response.citations))
```
-```txt title="Output"
+```txt OUTPUT
Elon Musk [0] mentions Jonathan Nolan. [0] Musk is friends [0] with the creators of Westworld [0], Jonathan Nolan [0] and Lisa Joy Nolan. [0]
```
@@ -421,7 +421,7 @@ Elon Musk [0] mentions Jonathan Nolan. [0] Musk is friends [0] with the creators
source_nodes[0]
```
-```python title="Output"
+```python OUTPUT
{'text': "Elon Musk -- Chief Executive Officer and Product Architect\nYeah. The creators of Westworld, Jonathan Nolan, Lisa Joy Nolan, are friends -- are all friends of mine, actually. And I invited them to come see the lab and, like, well, come see it, hopefully soon. It's pretty well -- especially the sort of subsystem test stands where you've just got like one leg on a test stand just doing repetitive exercises and one arm on a test stand pretty well.\nYeah.\n### Unknown speaker\nWe're not entering Westworld anytime soon.\n### Elon Musk -- Chief Executive Officer and Product Architect\nRight, right. Yeah. I take -- take safety very very seriously.\n### Martin Viecha\nThank you. The next question from Norman is: How many Cybertruck orders are in the queue? And when do you anticipate to be able to fulfill existing orders?"}
```
diff --git a/fern/pages/cookbooks/creating-a-qa-bot.mdx b/fern/pages/cookbooks/creating-a-qa-bot.mdx
index 64cc0dba..b6d0b94d 100644
--- a/fern/pages/cookbooks/creating-a-qa-bot.mdx
+++ b/fern/pages/cookbooks/creating-a-qa-bot.mdx
@@ -62,7 +62,7 @@ map_id2index = {sample["id"]: index for index, sample in enumerate(data["train"]
```
-```txt title="Output"
+```txt OUTPUT
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
diff --git a/fern/pages/cookbooks/csv-agent-native-api.mdx b/fern/pages/cookbooks/csv-agent-native-api.mdx
index 9e65c045..506df6a6 100644
--- a/fern/pages/cookbooks/csv-agent-native-api.mdx
+++ b/fern/pages/cookbooks/csv-agent-native-api.mdx
@@ -65,7 +65,7 @@ print('langchain_core version:', langchain_core.__version__)
print('langchain_experimental version:', langchain_experimental.__version__)
```
-```txt title="Output"
+```txt OUTPUT
cohere version: 5.5.1
langchain version: 0.2.0
langchain_core version: 0.2.0
@@ -369,7 +369,7 @@ def cohere_agent(
output = cohere_agent("can you use python to answer 1 + 1", None, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will use Python to answer this question.
@@ -428,7 +428,7 @@ for qsn,val in question_dict.items():
print('-'*50)
```
-```txt title="Output"
+```txt OUTPUT
question:q1
running 0th step.
@@ -492,7 +492,7 @@ ratio = x/y
print(f"Their ratio is: {ratio}")
```
-```txt title="Output"
+```txt OUTPUT
The largest stockholders equity value is: 134047000000.0
The smallest revenue value is: 53809000000.0
Their ratio is: 2.4911631883142227
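For reference, the arithmetic the agent's Python tool performs here can be reproduced directly. The two lists below are stand-ins for the real dataframe columns (only the first value in each matches the transcript; the rest are filler):

```python
# stand-ins for the stockholders-equity and revenue columns
stockholders_equity = [134047000000.0, 90488000000.0, 62146000000.0]
revenue = [53809000000.0, 81462000000.0, 96773000000.0]

x = max(stockholders_equity)  # largest stockholders equity value
y = min(revenue)              # smallest revenue value
ratio = x / y

print(f"The largest stockholders equity value is: {x}")
print(f"The smallest revenue value is: {y}")
print(f"Their ratio is: {ratio}")
```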
@@ -513,7 +513,7 @@ Here is a preview of the `balance_sheet.csv` dataframe:
print(preamble)
```
-```txt title="Output"
+```txt OUTPUT
You are an expert who answers the user's question in complete sentences. You are working with two pandas dataframe in Python. Ensure your output is a string.
Here is a preview of the `income_statement.csv` dataframe:
@@ -538,7 +538,7 @@ for qsn,val in question_dict.items():
output = cohere_agent(question, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
question:q1
running 0th step.
@@ -569,7 +569,7 @@ question1 = "what is the highest value of cost of goods and service?"
output = cohere_agent(question1, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will use Python to find the highest value of 'Cost of Goods and Service' in the 'income_statement.csv' file.
@@ -592,7 +592,7 @@ question1 = "what is the highest value of cost of goods and service?"
output = cohere_agent(question1, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will use Python to find the highest value of cost of goods and service.
@@ -671,7 +671,7 @@ question1 = "what is the highest value of cost of goods and service?"
output = cohere_agent(question1, preamble, tools, verbose=True)
```
-```txt title="Output"
+```txt OUTPUT
running 0th step.
I will first view the data and then write and execute Python code to find the highest value of cost of goods and service.
diff --git a/fern/pages/cookbooks/csv-agent.mdx b/fern/pages/cookbooks/csv-agent.mdx
index 7af8ade8..d90c4949 100644
--- a/fern/pages/cookbooks/csv-agent.mdx
+++ b/fern/pages/cookbooks/csv-agent.mdx
@@ -145,7 +145,7 @@ for qsn,val in question_dict.items():
print('-'*50)
```
-```txt title="Output"
+```txt OUTPUT
question:q1
@@ -239,7 +239,7 @@ for qsn,val in question_dict.items():
print('-'*50)
```
-```txt title="Output"
+```txt OUTPUT
question:q1
@@ -359,7 +359,7 @@ ratio = x/y
print(f"Their ratio is: {ratio}")
```
-```txt title="Output"
+```txt OUTPUT
The largest stockholders equity value is: 134047000000.0
The smallest revenue value is: 53809000000.0
Their ratio is: 2.4911631883142227
@@ -376,7 +376,7 @@ for qsn,val in question_dict.items():
})
```
-```txt title="Output"
+```txt OUTPUT
question:q1
diff --git a/fern/pages/cookbooks/data-analyst-agent.mdx b/fern/pages/cookbooks/data-analyst-agent.mdx
index 244e9040..f83c9df2 100644
--- a/fern/pages/cookbooks/data-analyst-agent.mdx
+++ b/fern/pages/cookbooks/data-analyst-agent.mdx
@@ -109,7 +109,7 @@ agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
> Entering new AgentExecutor chain...
First, I will search for the three tech companies with the highest market cap in the US in 2024. Then, I will search for the number of full-time employees at each of these companies, and plot the data using Python.
@@ -168,7 +168,7 @@ agent_executor.invoke({
})
```
-````txt title="Output"
+````txt OUTPUT
> Entering new AgentExecutor chain...
Plan: I will respond to the user's greeting.
Action: ```json JSON
diff --git a/fern/pages/cookbooks/document-parsing-for-enterprises.mdx b/fern/pages/cookbooks/document-parsing-for-enterprises.mdx
index b5edce8f..afe46d4c 100644
--- a/fern/pages/cookbooks/document-parsing-for-enterprises.mdx
+++ b/fern/pages/cookbooks/document-parsing-for-enterprises.mdx
@@ -938,7 +938,7 @@ label_ocr_pytesseract = "".join([pytesseract.image_to_string(page) for page in p
print(label_ocr_pytesseract[:200])
```
-```txt title="Output"
+```txt OUTPUT
HIGHLIGHTS OF PRESCRIBING INFORMATION
These highlights do not include all the information needed to use
@@ -1058,7 +1058,7 @@ index.add_items(document_embeddings, list(range(len(document_embeddings))))
print("Count:", index.element_count)
```
-```txt title="Output"
+```txt OUTPUT
Count: 115
```
@@ -1141,7 +1141,7 @@ for src in ["gcp", "aws", "unstructured-io", "llamaparse-text", "llamaparse-mark
print("\n")
```
-```txt title="Output"
+```txt OUTPUT
Question 1: What are the most common adverse reactions of Iwilfin?
Question 2: What is the recommended dosage of Iwilfin on body surface area between 0.5 m2 and 0.75 m2?
Question 3: I need a succinct summary of the compound name, indication, route of administration, and mechanism of action of Iwilfin.
diff --git a/fern/pages/cookbooks/embed-jobs-serverless-pinecone.mdx b/fern/pages/cookbooks/embed-jobs-serverless-pinecone.mdx
index 3814c588..e312708d 100644
--- a/fern/pages/cookbooks/embed-jobs-serverless-pinecone.mdx
+++ b/fern/pages/cookbooks/embed-jobs-serverless-pinecone.mdx
@@ -23,7 +23,7 @@ pc = Pinecone(
)
```
-```txt title="Output"
+```txt OUTPUT
/usr/local/lib/python3.10/dist-packages/pinecone/data/index.py:1: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
```
@@ -43,7 +43,7 @@ ds=co.create_dataset(
print(ds.await_validation())
```
-```txt title="Output"
+```txt OUTPUT
uploading file, starting validation...
sample-file-2gwgxq was uploaded
...
@@ -73,7 +73,7 @@ job = co.create_embed_job(dataset_id=ds.id,
job.wait() # poll the server until the job is completed
```
-```txt title="Output"
+```txt OUTPUT
...
...
```
@@ -82,7 +82,7 @@ job.wait() # poll the server until the job is completed
print(job)
```
-```bash title="Output"
+```bash OUTPUT
cohere.EmbedJob {
job_id: 6d691fbe-e026-436a-826a-16e70b293e51
status: complete
@@ -150,7 +150,7 @@ for i in range(0, len(data_array), batch_size):
print(idx.describe_index_stats())
```
-```txt title="Output"
+```txt OUTPUT
{'dimension': 1024,
'index_fullness': 0.0,
'namespaces': {'': {'vector_count': 3664}},
@@ -174,7 +174,7 @@ print(np.array(xq).shape)
res = idx.query(xq, top_k=20, include_metadata=True)
```
-```txt title="Output"
+```txt OUTPUT
(1, 1024)
```
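The `(1, 1024)` printed above is the shape of the query-embedding batch passed to Pinecone: one query, 1024 dimensions. A stdlib-only sketch of that shape check, with a faked single 1024-dimensional query vector in place of a real `co.embed` result:

```python
# co.embed returns one embedding per input text; fake a batch of one
xq = [[0.0] * 1024]

def shape(nested):
    # shape of a rectangular nested list, like numpy's .shape
    dims = []
    while isinstance(nested, list):
        dims.append(len(nested))
        nested = nested[0] if nested else None
    return tuple(dims)

print(shape(xq))  # prints (1, 1024)
```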
@@ -183,7 +183,7 @@ for match in res['matches']:
print(f"{match['score']:.2f}: {match['metadata']['text']}")
```
-```txt title="Output"
+```txt OUTPUT
0.48: On October 22, 2012, Microsoft announced the release of new features including co-authoring, performance improvements and touch support.
0.45: On May 2, 2019, at F8, the company announced its new vision with the tagline "the future is private". A redesign of the website and mobile app was introduced, dubbed as "FB5". The event also featured plans for improving groups, a dating platform, end-to-end encryption on its platforms, and allowing users on Messenger to communicate directly with WhatsApp and Instagram users.
0.42: On July 13, 2009, Microsoft announced at its Worldwide Partners Conference 2009 in New Orleans that Microsoft Office 2010 reached its "Technical Preview" development milestone and features of Office Web Apps were demonstrated to the public for the first time. Additionally, Microsoft announced that Office Web Apps would be made available to consumers online and free of charge, while Microsoft Software Assurance customers will have the option of running them on premises. Office 2010 beta testers were not given access to Office Web Apps at this date, and it was announced that it would be available for testers during August 2009. However, in August 2009, a Microsoft spokesperson stated that there had been a delay in the release of Office Web Apps Technical Preview and it would not be available by the end of August.
@@ -221,7 +221,7 @@ for response in rerank_response:
print(f"{response.relevance_score:.2f}: {response.document['text']}")
```
-```txt title="Output"
+```txt OUTPUT
0.99: Microsoft Office, or simply Office, is the former name of a family of client software, server software, and services developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, at COMDEX in Las Vegas. Initially a marketing term for an office suite (bundled set of productivity applications), the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer with shared features such as a common spell checker, Object Linking and Embedding data integration and Visual Basic for Applications scripting language. Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand.
0.93: On January 21, 2015, during the "Windows 10: The Next Chapter" press event, Microsoft unveiled Office for Windows 10, Windows Runtime ports of the Android and iOS versions of the Office Mobile suite. Optimized for smartphones and tablets, they are universal apps that can run on both Windows and Windows for phones, and share similar underlying code. A simplified version of Outlook was also added to the suite. They will be bundled with Windows 10 mobile devices, and available from the Windows Store for the PC version of Windows 10. Although the preview versions were free for most editing, the release versions will require an Office 365 subscription on larger tablets (screen size larger than 10.1 inches) and desktops for editing, as with large Android tablets. Smaller tablets and phones will have most editing features for free.
0.87: In October 2022, Microsoft announced that it will phase out the Microsoft Office brand in favor of "Microsoft 365" by January 2023. The name will continue to be used for legacy product offerings.
@@ -247,7 +247,7 @@ for match in res['matches']:
print(f"{match['score']:.2f}: {match['metadata']['text']}")
```
-```txt title="Output"
+```txt OUTPUT
(1, 1024)
0.66: YouTube began as a venture capital–funded technology startup. Between November 2005 and April 2006, the company raised money from various investors, with Sequoia Capital, $11.5 million, and Artis Capital Management, $8 million, being the largest two. YouTube's early headquarters were situated above a pizzeria and a Japanese restaurant in San Mateo, California. In February 2005, the company activated codice_1. The first video was uploaded April 23, 2005. Titled "Me at the zoo", it shows co-founder Jawed Karim at the San Diego Zoo and can still be viewed on the site. In May, the company launched a public beta and by November, a Nike ad featuring Ronaldinho became the first video to reach one million total views. The site launched officially on December 15, 2005, by which time the site was receiving 8 million views a day. Clips at the time were limited to 100 megabytes, as little as 30 seconds of footage.
0.58: Karim said the inspiration for YouTube first came from the Super Bowl XXXVIII halftime show controversy when Janet Jackson's breast was briefly exposed by Justin Timberlake during the halftime show. Karim could not easily find video clips of the incident and the 2004 Indian Ocean Tsunami online, which led to the idea of a video-sharing site. Hurley and Chen said that the original idea for YouTube was a video version of an online dating service, and had been influenced by the website Hot or Not. They created posts on Craigslist asking attractive women to upload videos of themselves to YouTube in exchange for a $100 reward. Difficulty in finding enough dating videos led to a change of plans, with the site's founders deciding to accept uploads of any video.
@@ -284,7 +284,7 @@ for response in rerank_response:
print(f"{response.relevance_score:.2f}: {response.document['text']}")
```
-```txt title="Output"
+```txt OUTPUT
0.95: YouTube began as a venture capital–funded technology startup. Between November 2005 and April 2006, the company raised money from various investors, with Sequoia Capital, $11.5 million, and Artis Capital Management, $8 million, being the largest two. YouTube's early headquarters were situated above a pizzeria and a Japanese restaurant in San Mateo, California. In February 2005, the company activated codice_1. The first video was uploaded April 23, 2005. Titled "Me at the zoo", it shows co-founder Jawed Karim at the San Diego Zoo and can still be viewed on the site. In May, the company launched a public beta and by November, a Nike ad featuring Ronaldinho became the first video to reach one million total views. The site launched officially on December 15, 2005, by which time the site was receiving 8 million views a day. Clips at the time were limited to 100 megabytes, as little as 30 seconds of footage.
0.92: Karim said the inspiration for YouTube first came from the Super Bowl XXXVIII halftime show controversy when Janet Jackson's breast was briefly exposed by Justin Timberlake during the halftime show. Karim could not easily find video clips of the incident and the 2004 Indian Ocean Tsunami online, which led to the idea of a video-sharing site. Hurley and Chen said that the original idea for YouTube was a video version of an online dating service, and had been influenced by the website Hot or Not. They created posts on Craigslist asking attractive women to upload videos of themselves to YouTube in exchange for a $100 reward. Difficulty in finding enough dating videos led to a change of plans, with the site's founders deciding to accept uploads of any video.
0.91: YouTube was not the first video-sharing site on the Internet; Vimeo was launched in November 2004, though that site remained a side project of its developers from CollegeHumor at the time and did not grow much, either. The week of YouTube's launch, NBC-Universal's "Saturday Night Live" ran a skit "Lazy Sunday" by The Lonely Island. Besides helping to bolster ratings and long-term viewership for "Saturday Night Live", "Lazy Sunday"'s status as an early viral video helped establish YouTube as an important website. Unofficial uploads of the skit to YouTube drew in more than five million collective views by February 2006 before they were removed when NBCUniversal requested it two months later based on copyright concerns. Despite eventually being taken down, these duplicate uploads of the skit helped popularize YouTube's reach and led to the upload of more third-party content. The site grew rapidly; in July 2006, the company announced that more than 65,000 new videos were being uploaded every day and that the site was receiving 100 million video views per day.
diff --git a/fern/pages/cookbooks/embed-jobs.mdx b/fern/pages/cookbooks/embed-jobs.mdx
index 7c2e93cb..ca67234f 100644
--- a/fern/pages/cookbooks/embed-jobs.mdx
+++ b/fern/pages/cookbooks/embed-jobs.mdx
@@ -29,7 +29,7 @@ ds=co.create_dataset(
)
```
-```txt title="Output"
+```txt OUTPUT
uploading file, starting validation...
sample-file-hca4x0 was uploaded
...
@@ -39,7 +39,7 @@ sample-file-hca4x0 was uploaded
print(ds.await_validation())
```
-```txt title="Output"
+```txt OUTPUT
cohere.Dataset {
id: sample-file-hca4x0
name: sample_file
@@ -65,7 +65,7 @@ job = co.create_embed_job(
job.wait() # poll the server until the job is completed
```
-```txt title="Output"
+```txt OUTPUT
...
...
```
@@ -74,7 +74,7 @@ job.wait() # poll the server until the job is completed
print(job)
```
-```txt title="Output"
+```txt OUTPUT
cohere.EmbedJob {
job_id: 792bbc1a-561b-48c2-8a97-0c80c1914ea8
status: complete
@@ -154,7 +154,7 @@ for idx, r in enumerate(final_result):
print("\n")
```
-```txt title="Output"
+```txt OUTPUT
Document Rank: 1, Document Index: 0
Document: YouTube began as a venture capital–funded technology startup. Between November 2005 and April 2006, the company raised money from various investors, with Sequoia Capital, $11.5 million, and Artis Capital Management, $8 million, being the largest two. YouTube's early headquarters were situated above a pizzeria and a Japanese restaurant in San Mateo, California. In February 2005, the company activated codice_1. The first video was uploaded April 23, 2005. Titled "Me at the zoo", it shows co-founder Jawed Karim at the San Diego Zoo and can still be viewed on the site. In May, the company launched a public beta and by November, a Nike ad featuring Ronaldinho became the first video to reach one million total views. The site launched officially on December 15, 2005, by which time the site was receiving 8 million views a day. Clips at the time were limited to 100 megabytes, as little as 30 seconds of footage.
Relevance Score: 0.94815
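The ranked display in this hunk can be produced from any sorted list of rerank results. A mock sketch (indices, scores, and texts are illustrative placeholders, not a real `co.rerank` call):

```python
# mock rerank results, already sorted by relevance score (descending)
final_result = [
    {"index": 0, "relevance_score": 0.94815,
     "document": "YouTube began as a venture capital-funded technology startup."},
    {"index": 2, "relevance_score": 0.91626,
     "document": "YouTube was not the first video-sharing site on the Internet."},
]

for rank, r in enumerate(final_result, start=1):
    print(f"Document Rank: {rank}, Document Index: {r['index']}")
    print(f"Document: {r['document']}")
    print(f"Relevance Score: {r['relevance_score']:.5f}")
    print()
```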
diff --git a/fern/pages/cookbooks/hello-world-meet-ai.mdx b/fern/pages/cookbooks/hello-world-meet-ai.mdx
index bf96ebd2..d06a5725 100644
--- a/fern/pages/cookbooks/hello-world-meet-ai.mdx
+++ b/fern/pages/cookbooks/hello-world-meet-ai.mdx
@@ -69,7 +69,7 @@ int main() {
}
```
2. **Python**:
-```python PYTHON
+```python
print("Hello World")
```
3. **Java**:
diff --git a/fern/pages/cookbooks/long-form-general-strategies.mdx b/fern/pages/cookbooks/long-form-general-strategies.mdx
index 31165ae1..198c1284 100644
--- a/fern/pages/cookbooks/long-form-general-strategies.mdx
+++ b/fern/pages/cookbooks/long-form-general-strategies.mdx
@@ -68,7 +68,7 @@ from getpass import getpass
from IPython.display import HTML, display
```
-```txt title="Output"
+```txt OUTPUT
[nltk_data] Downloading package punkt to
[nltk_data] /home/anna_cohere_com/nltk_data...
[nltk_data] Package punkt is already up-to-date!
@@ -134,7 +134,7 @@ long_text = long_text.replace('\n', ' ')
print("Document length - #tokens:", len(co.tokenize(text=long_text, model=co_model).tokens))
```
-```txt title="Output"
+```txt OUTPUT
PDF saved successfully to 'example.pdf'
Document length - #tokens: 134184
```
@@ -287,7 +287,7 @@ print()
print('Example passage:', np.random.choice(np.asarray(passages), size=1, replace=False))
```
-```txt title="Output"
+```txt OUTPUT
Example sentence: ['4.']
Example passage: ['T echnical robustness and safety means that AI systems are developed and used in a way that allows robustness in case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, a nd minimise unintended harm. Privacy and data governance means that AI systems are developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity. Transpar ency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and l imitations of that AI system and affected persons about their rights. Diversity, non - discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural dive rsity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. Social and environmental well - being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long - term impacts on the individual, society and democracy. ']
@@ -368,7 +368,7 @@ prompt = prompt_template.format(query=query, document=short_text)
print(generate_response(message=prompt, max_tokens=300))
```
-```txt title="Output"
+```txt OUTPUT
The report discusses the restrictions on the use of biometric identification by law enforcement in publicly accessible spaces. According to the document, real-time biometric identification is prohibited unless in exceptional cases where its use is strictly necessary and proportionate to achieving a substantial public interest. The use of post-remote biometric identification systems is also mentioned, noting the requirements for authorization and limitations on its use.
The report also highlights the classification of certain AI systems as high-risk, including biometric identification systems, emotion recognition systems, and biometric categorisation systems, with the exception of systems used for biometric verification. High-risk AI systems are subject to specific requirements and obligations.
@@ -457,7 +457,7 @@ prompt = prompt_template.format(document=short_text)
print(generate_response(message=prompt, max_tokens=600))
```
-```txt title="Output"
+```txt OUTPUT
The document outlines the requirements and obligations for developing and deploying AI systems in the European Union. It aims to establish a regulatory framework to foster innovation while ensuring the protection of fundamental rights and public interests. The regulation applies to providers and deployers of AI systems, including those established outside the EU. High-risk AI systems are subject to specific requirements, such as risk management, data governance, and transparency. Providers must ensure compliance and keep records, and deployers must use AI systems responsibly. The regulation also establishes an AI Office, advisory bodies, and a database for high-risk AI systems. Additionally, it addresses issues like testing, codes of conduct, and cooperation with third countries. Fines and penalties are proposed for non-compliance.
```
diff --git a/fern/pages/cookbooks/migrating-prompts.mdx b/fern/pages/cookbooks/migrating-prompts.mdx
index 2ebcb4cd..46513f41 100644
--- a/fern/pages/cookbooks/migrating-prompts.mdx
+++ b/fern/pages/cookbooks/migrating-prompts.mdx
@@ -36,7 +36,7 @@ import getpass
CO_API_KEY = getpass.getpass('cohere API key:')
```
-```txt title="Output"
+```txt OUTPUT
cohere API key:··········
```
@@ -102,7 +102,7 @@ Return everything in a JSON object with the following structure:
print(meta_prompt)
```
-```txt title="Output"
+```txt OUTPUT
Below is a task for an LLM delimited with ## Original Task. Your task is to split that task into two parts: (1) the context; and (2) the instructions.
The context should be split into several separate parts and returned as a JSON object where each part has a name describing its contents and the value is the contents itself.
Make sure to include all of the context contained in the original task description and do not change its meaning.
@@ -143,7 +143,7 @@ upgraded_prompt = co.chat(
print(upgraded_prompt.text)
```
-````txt title="Output"
+````txt OUTPUT
Here is the task delved into a JSON object as requested:
```json JSON
{
@@ -198,7 +198,7 @@ new_prompt, docs = get_prompt_and_docs(upgraded_prompt.text)
new_prompt, docs
```
-```txt title="Output"
+```txt OUTPUT
('Using the provided information, write a concise, first-person social media biography of no more than 100 words.',
[{'title': 'Work Experience',
'snippet': 'Over 15 years of AI and machine learning engineering experience. Proficient in Python, C++, and Java, with expertise in developing algorithms for natural language processing, computer vision, and recommendation systems.'},
@@ -224,7 +224,7 @@ response = co.chat(
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
I'm a senior software engineer with a Ph.D. in Statistics and over 15 years of AI and machine learning engineering experience. My current focus at GlobalSolTech's AI R&D department is developing algorithms for natural language processing, computer vision, and recommendation systems. In my free time, I enjoy hiking, freediving, and restoring vintage watches, and I'm a married father of four with three grandchildren.
```
@@ -263,7 +263,7 @@ def insert_citations(text: str, citations: list[dict], add_one: bool=False):
print(insert_citations(response.text, response.citations, True))
```
-```txt title="Output"
+```txt OUTPUT
I'm a senior software engineer [3, 4] with a Ph.D. in Statistics [2] and over 15 years of AI and machine learning engineering experience. [1] My current focus at GlobalSolTech's AI R&D department [4] is developing algorithms for natural language processing, computer vision, and recommendation systems. [1] In my free time, I enjoy hiking, freediving, and restoring vintage watches [3], and I'm a married father of four with three grandchildren. [3]
```
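The bracketed markers above come from the notebook's `insert_citations` helper. A minimal sketch of the same idea, assuming each citation is a dict carrying `start`/`end` character offsets and `document_ids` of the form `doc_N` (the notebook's actual citation objects may differ):

```python
def insert_citations(text: str, citations: list[dict], add_one: bool = False):
    """Insert bracketed document indices after each cited span.

    Assumes each citation is a dict with 'start', 'end', and
    'document_ids' (e.g. ['doc_2']); add_one shifts indices so the
    first document is numbered 1 instead of 0.
    """
    offset = 1 if add_one else 0
    # Walk right-to-left so earlier character offsets stay valid
    # as markers are spliced in.
    for cit in sorted(citations, key=lambda c: c['end'], reverse=True):
        ids = sorted(int(d.split('_')[-1]) + offset for d in cit['document_ids'])
        marker = " [" + ", ".join(str(i) for i in ids) + "]"
        text = text[:cit['end']] + marker + text[cit['end']:]
    return text

demo = insert_citations(
    "I enjoy hiking.",
    [{'start': 8, 'end': 14, 'document_ids': ['doc_2']}],
    add_one=True,
)
print(demo)  # -> I enjoy hiking [3].
```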
@@ -282,7 +282,7 @@ tokens = co.tokenize(text=apple, model='command-r')
len(tokens.tokens)
```
-```txt title="Output"
+```txt OUTPUT
29697
```
@@ -313,7 +313,7 @@ response = co.chat(
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
The complaint alleges that Apple has violated antitrust laws by engaging in a pattern of anticompetitive conduct to maintain its monopoly power over the U.S. markets for smartphones and performance smartphones. Apple is accused of using its control over app distribution and access to its operating system to impede competition and innovation. Specifically, the company is said to have restricted developers' ability to create certain apps and limited the functionality of others, making it harder for consumers to switch away from iPhones to rival smartphones. This conduct is alleged to have harmed consumers and developers by reducing choice, increasing prices, and stifling innovation. The plaintiffs seek injunctive relief and potential monetary awards to remedy these illegal practices.
```
@@ -335,7 +335,7 @@ response = co.chat(
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Yes, the DOJ alleges that Apple could allow iPhone users to send encrypted messages to Android users while still using iMessage on their iPhones but chooses not to do so. According to the DOJ, this would instantly improve the privacy and security of iPhones and other smartphones.
```
@@ -374,7 +374,7 @@ chunks = chunk_doc(apple)
print(chunks[18])
```
-```python title="Output"
+```python OUTPUT
{'title': '18', 'snippet': '\nProtecting competition and the innovation that competition inevitably ushers in\nfor consumers, developers, publishers, content creators, and device manufacturers is why\nPlaintiffs bring this lawsuit under Section 2 of the Sherman Act to challenge Apple’s\nmaintenance of its monopoly over smartphone markets, which affect hundreds of millions of\nAmericans every day. Plaintiffs bring this case to rid smartphone markets of Apple’s\nmonopolization and exclusionary conduct and to ensure that the next generation of innovators\ncan upend the technological world as we know it with new and transformative technologies.\n\n\nII.\n\nDefendant Apple\n\n'}
```
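Chunks like the one above (a numbered `title` plus a `snippet`) can be produced by grouping paragraphs up to a size budget. This is a hedged sketch of a `chunk_doc`-style splitter; the notebook's own helper, its boundaries, and its size limit may differ:

```python
def chunk_doc(text: str, max_words: int = 800) -> list[dict]:
    """Split a long document into numbered retrieval chunks.

    Groups paragraphs (split on blank lines) until max_words is
    reached, emitting {'title': str(index), 'snippet': chunk} dicts.
    """
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Flush the running chunk when adding this paragraph would
        # exceed the budget.
        if current and count + words > max_words:
            chunks.append({'title': str(len(chunks)), 'snippet': "\n\n".join(current)})
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append({'title': str(len(chunks)), 'snippet': "\n\n".join(current)})
    return chunks

demo = chunk_doc("first paragraph here\n\nsecond paragraph text", max_words=3)
print([c['title'] for c in demo])  # -> ['0', '1']
```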
@@ -392,7 +392,7 @@ response = co.chat(
print(response.text)
```
-```txt title="Output"
+```txt OUTPUT
Yes, according to the DOJ, Apple could encrypt text messages sent from iPhones to Android phones. The DOJ claims that Apple degrades the security and privacy of its users by impeding cross-platform encryption and preventing developers from fixing the broken cross-platform messaging experience. Apple's conduct makes it harder to switch from iPhone to Android, as messages sent from iPhones to Android phones are unencrypted.
```
@@ -402,7 +402,7 @@ The responses seem similar, but we should add citations and check the citation t
print(insert_citations(response.text, response.citations))
```
-```txt title="Output"
+```txt OUTPUT
Yes, according to the DOJ, Apple could encrypt text messages sent from iPhones to Android phones. [144] The DOJ claims that Apple degrades the security and privacy [144] of its users by impeding cross-platform encryption [144] and preventing developers from fixing the broken cross-platform messaging experience. [93] Apple's conduct makes it harder to switch from iPhone to Android [144], as messages sent from iPhones to Android phones are unencrypted. [144]
```
@@ -412,7 +412,7 @@ The most important passage seems to be paragraph 144. Paragraph 93 is also cited
print(chunks[144]['snippet'])
```
-```txt title="Output"
+```txt OUTPUT
Apple is also willing to make the iPhone less secure and less private if that helps
maintain its monopoly power. For example, text messages sent from iPhones to Android phones
are unencrypted as a result of Apple’s conduct. If Apple wanted to, Apple could allow iPhone
@@ -424,7 +424,7 @@ which would instantly improve the privacy and security of iPhone and other smart
print(chunks[93]['snippet'])
```
-```txt title="Output"
+```txt OUTPUT
Recently, Apple blocked a third-party developer from fixing the broken cross-
platform messaging experience in Apple Messages and providing end-to-end encryption for
messages between Apple Messages and Android users. By rejecting solutions that would allow
diff --git a/fern/pages/cookbooks/multilingual-search.mdx b/fern/pages/cookbooks/multilingual-search.mdx
index a6a62ecc..5e6285a4 100644
--- a/fern/pages/cookbooks/multilingual-search.mdx
+++ b/fern/pages/cookbooks/multilingual-search.mdx
@@ -55,7 +55,7 @@ import os
dotenv.load_dotenv(".env") # Upload an '.env' file containing an environment variable named 'COHERE_API_KEY' using your Cohere API Key
```
-```txt title="Output"
+```txt OUTPUT
True
```
@@ -73,7 +73,7 @@ texts = [item['text'].decode('utf-8') for item in tfds.as_numpy(dataset)]
print(f"Number of documents: {len(texts)}")
```
-```txt title="Output"
+```txt OUTPUT
Downloading and preparing dataset 350.79 KiB (download: 350.79 KiB, generated: 636.90 KiB, total: 987.69 KiB) to /root/tensorflow_datasets/trec/1.0.0...
@@ -119,7 +119,7 @@ for item in random.sample(texts, 5):
print(item)
```
-```txt title="Output"
+```txt OUTPUT
What is the starting salary for beginning lawyers ?
Where did Bill Gates go to college ?
What task does the Bouvier breed of dog perform ?
@@ -162,7 +162,7 @@ for idx,query in enumerate(queries):
print("-"*20,"\n")
```
-```txt title="Output"
+```txt OUTPUT
Query language: English
Query: How to get in touch with Bill Gates
Most similar existing question: What is Bill Gates of Microsoft E-mail address ?
@@ -191,7 +191,7 @@ Most similar existing question: What is Bill Gates of Microsoft E-mail address ?
!wget 'https://docs.google.com/uc?export=download&id=1f1INWOfJrHTFmbyF_0be5b4u_moz3a4F' -O steve-jobs-commencement.txt
```
-```txt title="Output"
+```txt OUTPUT
--2023-06-08 06:11:19-- https://docs.google.com/uc?export=download&id=1f1INWOfJrHTFmbyF_0be5b4u_moz3a4F
Resolving docs.google.com (docs.google.com)... 74.125.200.101, 74.125.200.138, 74.125.200.102, ...
Connecting to docs.google.com (docs.google.com)|74.125.200.101|:443... connected.
@@ -275,7 +275,7 @@ for question in questions:
print(f"{idx+1}: {source_wrapped}")
```
-```txt title="Output"
+```txt OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Question: What did the author liken The Whole Earth Catalog to?
@@ -401,7 +401,7 @@ for question in questions_fr:
print(f"Answer: {result}")
```
-```txt title="Output"
+```txt OUTPUT
--------------------
Question: À quoi se compare The Whole Earth Catalog ?
diff --git a/fern/pages/cookbooks/pdf-extractor.mdx b/fern/pages/cookbooks/pdf-extractor.mdx
index 25193576..8ecc7920 100644
--- a/fern/pages/cookbooks/pdf-extractor.mdx
+++ b/fern/pages/cookbooks/pdf-extractor.mdx
@@ -49,7 +49,7 @@ from unstructured.partition.pdf import partition_pdf
print('cohere version:', cohere.__version__)
```
-```txt title="Output"
+```txt OUTPUT
cohere version: 5.5.1
```
@@ -255,7 +255,7 @@ pdf_extractor('simple_invoice.pdf')
```
-```txt title="Output"
+```txt OUTPUT
running step 0
I will summarise the text and then use the convert_to_json tool to format the summary.
diff --git a/fern/pages/cookbooks/rag-evaluation-deep-dive.mdx b/fern/pages/cookbooks/rag-evaluation-deep-dive.mdx
index 42de0df6..bb3addae 100644
--- a/fern/pages/cookbooks/rag-evaluation-deep-dive.mdx
+++ b/fern/pages/cookbooks/rag-evaluation-deep-dive.mdx
@@ -161,7 +161,7 @@ print(f'Golden docs: {golden_docs}')
print(f'Retrieved docs: {retrieved_docs}')
```
-```txt title="Output"
+```txt OUTPUT
Query: How has Apple's total net sales changed over time?
Golden docs: ['2022 Q3 AAPL.pdf', '2023 Q1 AAPL.pdf', '2023 Q2 AAPL.pdf', '2023 Q3 AAPL.pdf']
Retrieved docs: ['2022 Q3 AAPL.pdf', '2023 Q1 MSFT.pdf', '2023 Q1 AAPL.pdf']
@@ -176,7 +176,7 @@ evaluate_retrieval.run_evals(retrieved_docs,golden_docs)
```
-```txt title="Output"
+```txt OUTPUT
precision: 0.67
recall: 0.5
map: 0.83
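The precision, recall, and MAP figures above follow from comparing the retrieved list against the golden set. A minimal sketch of the arithmetic (the cookbook's `run_evals` helper is not shown here and may be implemented differently):

```python
def retrieval_metrics(retrieved, golden):
    """Precision, recall, and average precision (MAP for one query),
    treating `golden` as the set of relevant documents."""
    golden_set = set(golden)
    hits = [doc for doc in retrieved if doc in golden_set]
    precision = len(hits) / len(retrieved)
    recall = len(hits) / len(golden_set)
    # Average precision: mean of precision@k taken at each relevant hit.
    ap, num_hits = 0.0, 0
    for k, doc in enumerate(retrieved, start=1):
        if doc in golden_set:
            num_hits += 1
            ap += num_hits / k
    ap = ap / num_hits if num_hits else 0.0
    return round(precision, 2), round(recall, 2), round(ap, 2)

golden = ['2022 Q3 AAPL.pdf', '2023 Q1 AAPL.pdf', '2023 Q2 AAPL.pdf', '2023 Q3 AAPL.pdf']
retrieved = ['2022 Q3 AAPL.pdf', '2023 Q1 MSFT.pdf', '2023 Q1 AAPL.pdf']
print(retrieval_metrics(retrieved, golden))  # -> (0.67, 0.5, 0.83)
```

Two of the three retrieved files are relevant (precision 0.67), they cover half of the four golden files (recall 0.5), and the relevant hits sit near the top of the ranking (MAP 0.83).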
@@ -238,7 +238,7 @@ claims = extract_claims(query, response, model, client)
print(f"List of claims extracted from the model's response:\n\n{claims}")
```
-```txt title="Output"
+```txt OUTPUT
List of claims extracted from the model's response:
- Apple's total net sales experienced a decline over the last year.
@@ -295,7 +295,7 @@ assessed_claims_faithfulness = assess_claims(query=query,
print(f"Assessment of the claims extracted from the model's response:\n\n{assessed_claims_faithfulness}")
```
-```txt title="Output"
+```txt OUTPUT
Assessment of the claims extracted from the model's response:
- Apple's total net sales experienced a decline over the last year. SUPPORTED=1
@@ -323,7 +323,7 @@ score_faithfulness = get_final_score(assessed_claims_faithfulness)
print(f'Faithfulness: {score_faithfulness}')
```
-```txt title="Output"
+```txt OUTPUT
Faithfulness: 1.0
```
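The final faithfulness number is the share of claims flagged `SUPPORTED=1` in the assessment text. A sketch of a `get_final_score`-style helper under that assumption (the notebook's own implementation may parse the flags differently):

```python
import re

def get_final_score(assessed_claims: str) -> float:
    """Fraction of claims flagged SUPPORTED=1, rounded to two
    decimals; returns 0.0 when no flags are present."""
    flags = [int(m) for m in re.findall(r"SUPPORTED=([01])", assessed_claims)]
    return round(sum(flags) / len(flags), 2) if flags else 0.0

# Illustrative assessment text: one supported claim out of three.
assessment = """- Claim one. SUPPORTED=1
- Claim two. SUPPORTED=0
- Claim three. SUPPORTED=0"""
print(get_final_score(assessment))  # -> 0.33
```

The same scoring step backs the correctness (0.5) and coverage (0.33) numbers later in the walkthrough, just with different claim sets.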
@@ -351,7 +351,7 @@ score_faithfulness_modified_claims = get_final_score(assessed_modified_claims)
print(f'Faithfulness: {score_faithfulness_modified_claims}')
```
-```txt title="Output"
+```txt OUTPUT
Assessment of the modified claims:
- Apple's total net sales experienced a decline over the last year. SUPPORTED=1
@@ -387,7 +387,7 @@ assessed_claims_correctness = assess_claims(query=query,
print(f"Assess the claims extracted from the model's response against the golden answer:\n\n{assessed_claims_correctness}")
```
-```txt title="Output"
+```txt OUTPUT
Assess the claims extracted from the model's response against the golden answer:
- Apple's total net sales experienced a decline over the last year. SUPPORTED=1
@@ -406,7 +406,7 @@ score_correctness = get_final_score(assessed_claims_correctness)
print(f'Correctness: {score_correctness}')
```
-```txt title="Output"
+```txt OUTPUT
Correctness: 0.5
```
@@ -423,7 +423,7 @@ gold_claims = extract_claims(query, golden_answer, model, client)
print(f"List of claims extracted from the gold answer:\n\n{gold_claims}")
```
-```txt title="Output"
+```txt OUTPUT
List of claims extracted from the gold answer:
- For the quarterly period ended June 25, 2022, the total net sales were $82,959 million.
@@ -448,7 +448,7 @@ assessed_claims_coverage = assess_claims(query=query,
print(f"Assess which of the gold claims is in the model's response:\n\n{assessed_claims_coverage}")
```
-```txt title="Output"
+```txt OUTPUT
Assess which of the gold claims is in the model's response:
- For the quarterly period ended June 25, 2022, the total net sales were $82,959 million. SUPPORTED=0
@@ -465,7 +465,7 @@ score_coverage = get_final_score(assessed_claims_coverage)
print(f'Coverage: {score_coverage}')
```
-```txt title="Output"
+```txt OUTPUT
Coverage: 0.33
```
diff --git a/fern/pages/cookbooks/rerank-demo.mdx b/fern/pages/cookbooks/rerank-demo.mdx
index 56c361b2..a03302c7 100644
--- a/fern/pages/cookbooks/rerank-demo.mdx
+++ b/fern/pages/cookbooks/rerank-demo.mdx
@@ -22,7 +22,7 @@ We will demonstrate the rerank endpoint in this notebook.
!pip install "cohere<5"
```
-```txt title="Output"
+```txt OUTPUT
[33mDEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621[0m[33m
[0mRequirement already satisfied: cohere<5 in /opt/homebrew/lib/python3.9/site-packages (4.45)
Requirement already satisfied: aiohttp<4.0,>=3.0 in /opt/homebrew/lib/python3.9/site-packages (from cohere<5) (3.8.1)
@@ -86,7 +86,7 @@ for idx, r in enumerate(results):
print("\n")
```
-```txt title="Output"
+```txt OUTPUT
Document Rank: 1, Document Index: 3
Document: Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.
Relevance Score: 1.00
@@ -112,7 +112,7 @@ We use BM25 lexical search to retrieve the top-100 passages matching the query a
!pip install -U rank_bm25
```
-```txt title="Output"
+```txt OUTPUT
[33mDEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621[0m[33m
[0mCollecting rank_bm25
Downloading rank_bm25-0.2.2-py3-none-any.whl.metadata (3.2 kB)
@@ -134,7 +134,7 @@ import string
from tqdm.autonotebook import tqdm
```
-```txt title="Output"
+```txt OUTPUT
/var/folders/ww/ht8qwj2s7s799qnktblg6qhm0000gp/T/ipykernel_31832/1066443236.py:7: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from tqdm.autonotebook import tqdm
```
@@ -143,7 +143,7 @@ from tqdm.autonotebook import tqdm
!wget http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz
```
-```txt title="Output"
+```txt OUTPUT
--2024-04-08 14:28:00-- http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz
Resolving sbert.net (sbert.net)... 172.64.80.1, 2606:4700:130:436c:6f75:6466:6c61:7265
Connecting to sbert.net (sbert.net)|172.64.80.1|:80... connected.
@@ -177,7 +177,7 @@ with gzip.open(wikipedia_filepath, 'rt', encoding='utf8') as fIn:
print("Passages:", len(passages))
```
-```txt title="Output"
+```txt OUTPUT
Passages: 509663
```
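Before BM25 indexing, each passage is normalized: lowercased, stripped of punctuation, and filtered for stopwords. A hedged sketch of that preprocessing step, with an illustrative stopword subset rather than the notebook's full list:

```python
import string

# Illustrative subset only; the notebook's stopword list is larger.
STOPWORDS = {"a", "an", "the", "of", "in", "is", "what"}

def bm25_tokenizer(text: str) -> list[str]:
    """Lowercase, strip punctuation, and drop stopwords so BM25
    scores content-bearing terms only."""
    tokens = []
    for token in text.lower().split():
        token = token.strip(string.punctuation)
        if token and token not in STOPWORDS:
            tokens.append(token)
    return tokens

print(bm25_tokenizer("What is the capital of the United States?"))
# -> ['capital', 'united', 'states']
```

Tokenized passages like these are what `rank_bm25`'s `BM25Okapi` indexes; the reranker then re-scores the top BM25 hits against the raw query text.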
@@ -239,7 +239,7 @@ def search(query, top_k=3, num_candidates=100):
search(query = "What is the capital of the United States?")
```
-```txt title="Output"
+```txt OUTPUT
Input question: What is the capital of the United States?
Top-3 lexical search (BM25) hits
16.264 Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment.
@@ -256,7 +256,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query = "Number countries Europe")
```
-```txt title="Output"
+```txt OUTPUT
Input question: Number countries Europe
Top-3 lexical search (BM25) hits
16.963 ECoHR' has a number of judges. The number of judges is seven normally but at the case of dealing a great issue, the number will be 21 and the judges are equally from member countries of the Council of Europe. At present, there are forty seven member countries of the Council of Europe. Each country may have one judge in the ECoHR. But, judges work independently for the ECoHR, and not for their country.
@@ -273,7 +273,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query = "Elon Musk year birth")
```
-```txt title="Output"
+```txt OUTPUT
Input question: Elon Musk year birth
Top-3 lexical search (BM25) hits
22.568 Tesla, Inc. is a company based in Palo Alto, California which makes electric cars. It was started in 2003 by Martin Eberhard, Dylan Stott, and Elon Musk (who also co-founded PayPal and SpaceX and is the CEO of SpaceX). Eberhard no longer works there. Today, Elon Musk is the Chief Executive Officer (CEO). It started selling its first car, the Roadster in 2008.
@@ -290,7 +290,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query = "Which US president was killed?")
```
-```txt title="Output"
+```txt OUTPUT
Input question: Which US president was killed?
Top-3 lexical search (BM25) hits
11.966 He came into office when the previous president, Cyprien Ntaryamira, was killed in a plane crash. It was an assassination in which the Rwandan president Juvénal Habyarimana was also killed. Ntibantunganya left office when he was deposed by Pierre Buyoya in a military coup of 1996.
@@ -307,7 +307,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query="When is Chinese New Year")
```
-```txt title="Output"
+```txt OUTPUT
Input question: When is Chinese New Year
Top-3 lexical search (BM25) hits
18.606 Today in China the Gregorian calendar is used for most activities. At the same time, the Chinese calendar is still used for traditional Chinese holidays like Chinese New Year or Lunar New Year.
@@ -324,7 +324,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query="How many people live in Paris")
```
-```txt title="Output"
+```txt OUTPUT
Input question: How many people live in Paris
Top-3 lexical search (BM25) hits
16.277 Live à Paris (English: "Live in Paris") is a live album by Canadian singer Céline Dion.
@@ -341,7 +341,7 @@ Top-3 hits by rank-API (100 BM25 hits re-ranked)
search(query="Who is the director of The Matrix?")
```
-```txt title="Output"
+```txt OUTPUT
Input question: Who is the director of The Matrix?
Top-3 lexical search (BM25) hits
16.253 An inverse matrix is a matrix that, when multiplied by another matrix, equals the identity matrix. For example:
diff --git a/fern/pages/cookbooks/sql-agent.mdx b/fern/pages/cookbooks/sql-agent.mdx
index eb230e2e..097a9475 100644
--- a/fern/pages/cookbooks/sql-agent.mdx
+++ b/fern/pages/cookbooks/sql-agent.mdx
@@ -70,7 +70,7 @@ print('**List of pre-defined Langchain Tools**')
print([tool.name for tool in tools])
```
-```txt title="Output"
+```txt OUTPUT
**List of pre-defined Langchain Tools**
['sql_db_query', 'sql_db_schema', 'sql_db_list_tables', 'sql_db_query_checker']
```
@@ -101,7 +101,7 @@ output=agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use the sql_db_list_tables tool to find out which tables are available.
@@ -118,7 +118,7 @@ Grounded answer: The following tables are available: Album, Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will use the sql_db_schema tool to find the first row of the Playlist and Genre tables.
@@ -195,7 +195,7 @@ Here is the first row of the Playlist table:
print(output['output'])
```
-```txt title="Output"
+```txt OUTPUT
Here is the first row of the Genre table:
| GenreId | Name |
@@ -217,7 +217,7 @@ output=agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will search for the number of invoices per country and then write an answer.
@@ -271,7 +271,7 @@ Grounded answer: The countries with the most invoices are the USA (91<
print(output['output'])
```
-```txt title="Output"
+```txt OUTPUT
The countries with the most invoices are the USA (91), Canada (56), and France (35).
```
@@ -283,7 +283,7 @@ output=agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will run a SQL query to find the customer who has spent the most money.
@@ -360,7 +360,7 @@ Grounded answer: The best customer is Helena Holý, who has spen
print(output['output'])
```
-```txt title="Output"
+```txt OUTPUT
The best customer is Helena Holý, who has spent a total of 49.62.
```
@@ -378,7 +378,7 @@ print('Table Schemas')
print(context['table_info'])
```
-```sql title="Output"
+```sql OUTPUT
**Context to pass to LLM on tables**
Table Names
Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track
@@ -627,7 +627,7 @@ output=agent_executor.invoke({
})
```
-```txt title="Output"
+```txt OUTPUT
[1m> Entering new AgentExecutor chain...[0m
[32;1m[1;3m
I will write a SQL query to find the customer who has spent the most money.
@@ -644,7 +644,7 @@ Grounded answer: The customer who has spent the most money is Helena H
print(output['output'])
```
-```txt title="Output"
+```txt OUTPUT
The customer who has spent the most money is Helena Holý.
```
diff --git a/fern/pages/cookbooks/summarization-evals.mdx b/fern/pages/cookbooks/summarization-evals.mdx
index b9c6c63b..cb9a2a6b 100644
--- a/fern/pages/cookbooks/summarization-evals.mdx
+++ b/fern/pages/cookbooks/summarization-evals.mdx
@@ -41,7 +41,7 @@ qmsum = load_dataset("MocktaiLEngineer/qmsum-processed", split="validation")
transcripts = [x for x in qmsum["meeting_transcript"] if x is not None]
```
-```txt title="Output"
+```txt OUTPUT
Generating train split: 0%| | 0/1095 [00:00, ? examples/s]
@@ -145,7 +145,7 @@ for obj_name, obj_text in instruction_objectives.items():
print(json.dumps(instructions[:2], indent=4))
```
-```python title="Output"
+```python OUTPUT
[
{
"instruction": "Summarize the meeting based on the transcript. In paragraph form, output your response. Use at least 10 words and at most 50 words in total.",
@@ -189,7 +189,7 @@ data["transcript_token_len"] = [len(x) for x in co.batch_tokenize(data["transcri
print(data["prompt"][0])
```
-```txt title="Output"
+```txt OUTPUT
## meeting transcript
PhD F: As opposed to the rest of us
PhD D: Well comment OK I I remind that me my first objective eh in the project is to to study difference parameters to to find a a good solution to detect eh the overlapping zone in eh speech recorded But eh tsk comment ehhh comment In that way comment I I I begin to to study and to analyze the ehn the recorded speech eh the different session to to find and to locate and to mark eh the the different overlapping zone And eh so eh I was eh I am transcribing the the first session and I I have found eh eh one thousand acoustic events eh besides the overlapping zones eh I I I mean the eh breaths eh aspiration eh eh talk eh eh clap eh comment I do not know what is the different names eh you use to to name the the pause n speech
@@ -561,7 +561,7 @@ avg_scores = data[["format_score", "length_score", "completeness_score", "correc
print(avg_scores)
```
-```txt title="Output"
+```txt OUTPUT
format_score 1.000000
length_score 0.833333
completeness_score 0.800000
diff --git a/fern/pages/cookbooks/wikipedia-search-with-weaviate.mdx b/fern/pages/cookbooks/wikipedia-search-with-weaviate.mdx
index 289f8090..7b32e0b2 100644
--- a/fern/pages/cookbooks/wikipedia-search-with-weaviate.mdx
+++ b/fern/pages/cookbooks/wikipedia-search-with-weaviate.mdx
@@ -27,7 +27,7 @@ client = weaviate.Client(
client.is_ready() #check if True
```
-```txt title="Output"
+```txt OUTPUT
True
```
@@ -104,7 +104,7 @@ query_result = semantic_serch("time travel plot twist")
print_result(query_result)
```
-```txt title="Output"
+```txt OUTPUT
[95mThe Adam Project (3000) -147.31755[0m
[4mhttps://en.wikipedia.org/wiki?curid=65867428[0m
Due to a safety feature preventing him from flying because of his injuries, Adam must bring along the younger Adam and use his DNA to enter his jet. They both are soon attacked by Maya Sorian, the leader of the dystopian world, and her assistant Christos, but are saved by Laura, who had faked her death and stayed off-grid in an unknown location. After surviving the attack and comparing notes, Laura and the Adams realize that after the invention of time travel by Louis Reed and his subsequent death, Sorian had monopolized the discovery. During her visit to 2018, Laura learned Sorian frequently came and advised her past self in order to secure her future wealth and power. To protect her secret, Sorian ordered Laura's death. Although Laura survived the assassination attempt, destruction of her time jet left her stranded in the past. The sudden arrival of Sorian's goons interrupts the reunion, and Laura fights off the attack long enough for the two Adams to escape to 2018.
@@ -136,7 +136,7 @@ query_result = semantic_serch("time travel plot twist", results_lang='ja')
print_result(query_result)
```
-```txt title="Output"
+```txt OUTPUT
[95m時空の旅人 (500) -144.16002[0m
[4mhttps://ja.wikipedia.org/wiki?curid=523464[0m
バスは1868年の攘夷戦争で娘と夫を亡くした老婆の営む茶店に降り立つ。一時は老婆は一行を盗人と間違えて襲い掛かるも、ホクベンを亡き夫だと思い込んだことで一転して歓迎する。しかしそこへジロを追うタイムマシンがあらわれ、やむなく一行はバスに乗って走り去る。追い縋る老婆を見捨てられずバスを飛び降りたホクベンだが、直後にタイムマシンに攫われてしまった。
diff --git a/fern/pages/cookbooks/wikipedia-semantic-search.mdx b/fern/pages/cookbooks/wikipedia-semantic-search.mdx
index 8e7914e0..9a6d5b2b 100644
--- a/fern/pages/cookbooks/wikipedia-semantic-search.mdx
+++ b/fern/pages/cookbooks/wikipedia-semantic-search.mdx
@@ -35,7 +35,7 @@ for doc in docs_stream:
doc_embeddings = torch.tensor(doc_embeddings)
```
-```txt title="Output"
+```txt OUTPUT
Downloading: 0%| | 0.00/1.29k [00:00, ?B/s]
@@ -48,7 +48,7 @@ Now, `doc_embeddings` holds the embeddings of the first 1,000 documents in the d
doc_embeddings.shape
```
-```txt title="Output"
+```txt OUTPUT
torch.Size([1000, 768])
```
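Given the `[1000, 768]` embedding matrix above, retrieval is a similarity score between the query embedding and every row, followed by a top-k selection (`torch.topk` in the notebook). A dependency-free sketch of the same ranking logic:

```python
def top_k_by_dot(query_emb, doc_embs, k=3):
    """Rank documents by dot-product similarity to the query
    embedding and return the indices of the k best matches."""
    scores = [sum(q * d for q, d in zip(query_emb, doc)) for doc in doc_embs]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]

# Toy 2-dimensional embeddings; real Cohere embeddings are 768-d.
docs = [[0.1, 0.9], [0.8, 0.2], [0.7, 0.7]]
print(top_k_by_dot([1.0, 0.1], docs, k=2))  # -> [1, 2]
```

With normalized embeddings the dot product equals cosine similarity, which is why a single matrix multiply plus `topk` suffices for the search.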
@@ -73,7 +73,7 @@ for doc_id in top_k.indices[0].tolist():
```
-```txt title="Output"
+```txt OUTPUT
Query: Who founded Wikipedia
Wikipedia
Larry Sanger and Jimmy Wales are the ones who started Wikipedia. Wales is credited with defining the goals of the project. Sanger created the strategy of using a wiki to reach Wales' goal. On January 10, 2001, Larry Sanger proposed on the Nupedia mailing list to create a wiki as a "feeder" project for Nupedia. Wikipedia was launched on January 15, 2001. It was launched as an English-language edition at www.wikipedia.com, and announced by Sanger on the Nupedia mailing list. Wikipedia's policy of "neutral point-of-view" was enforced in its initial months, and was similar to Nupedia's earlier "nonbiased" policy. Otherwise, there weren't very many rules initially, and Wikipedia operated independently of Nupedia.
diff --git a/fern/pages/deployment-options/cohere-works-everywhere.mdx b/fern/pages/deployment-options/cohere-works-everywhere.mdx
index 1fb04981..104b48f5 100644
--- a/fern/pages/deployment-options/cohere-works-everywhere.mdx
+++ b/fern/pages/deployment-options/cohere-works-everywhere.mdx
@@ -6,6 +6,7 @@ hidden: false
createdAt: "Thu Jun 06 2024 10:53:49 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Tue Jun 18 2024 16:38:28 GMT+0000 (Coordinated Universal Time)"
---
+
To maximize convenience in building on and switching between Cohere-supported environments, we have developed SDKs that seamlessly support whichever backend you choose. This allows you to start developing your project with one backend while maintaining the flexibility to switch, should the need arise.
Note that the code snippets presented in this document should be more than enough to get you started, but if you end up switching from one environment to another there will be some small changes you need to make to how you import and initialize the SDK.
@@ -14,56 +15,57 @@ Note that the code snippets presented in this document should be more than enoug
The table below summarizes the environments in which Cohere models can be deployed. You'll notice it contains many links; the links in the "sdk" column take you to Github pages with more information on Cohere's language-specific SDKs, while all the others take you to relevant sections in this document.
-| sdk | [Cohere platform](/reference/about) | [Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-cohere.html) | Sagemaker | Azure | OCI | Cohere Toolkit |
-| ------------------------------------------------------------ | ---------------------------------------------------------- | -------------------------------------------------------------------------------------------- | ------------------------------- | --------------------------- | -------------------------- | ------------------------------ |
-| [Typescript](https://github.com/cohere-ai/cohere-typescript) | [✅ docs](#typescript/platform) | [✅ docs](#typescript/bedrock) | [✅ docs](#typescript/sagemaker) | [✅ docs](#typescript/azure) | [🟠 soon](#typescript/oci) | [🟠 soon](#typescript/toolkit) |
-| [Python](https://github.com/cohere-ai/cohere-python) | [✅ docs](#python/platform) | [✅ docs](#python/bedrock) | [✅ docs](#python/sagemaker) | [✅ docs](#python/azure) | [🟠 soon](#python/oci) | [🟠 soon](#python/toolkit) |
-| [Go](https://github.com/cohere-ai/cohere-go) | [✅ docs](#go/platform) | [🟠 soon](#go/bedrock) | [🟠 soon](#go/sagemaker) | [✅ docs](#go/azure) | [🟠 soon](#go/oci) | [🟠 soon](#go/toolkit) |
-| [Java](https://github.com/cohere-ai/cohere-java) | [✅ docs](#java/platform) | [🟠 soon](#java/bedrock) | [🟠 soon](#java/sagemaker) | [✅ docs](#java/azure) | [🟠 soon](#java/oci) | [🟠 soon](#java/toolkit) |
+| sdk | [Cohere platform](/reference/about) | [Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-cohere.html) | Sagemaker | Azure | OCI | Cohere Toolkit |
+| ------------------------------------------------------------ | ----------------------------------- | -------------------------------------------------------------------------------------------- | -------------------------------- | ---------------------------- | -------------------------- | ------------------------------ |
+| [Typescript](https://github.com/cohere-ai/cohere-typescript) | [✅ docs](#typescript/platform) | [✅ docs](#typescript/bedrock) | [✅ docs](#typescript/sagemaker) | [✅ docs](#typescript/azure) | [🟠 soon](#typescript/oci) | [🟠 soon](#typescript/toolkit) |
+| [Python](https://github.com/cohere-ai/cohere-python) | [✅ docs](#python/platform) | [✅ docs](#python/bedrock) | [✅ docs](#python/sagemaker) | [✅ docs](#python/azure) | [🟠 soon](#python/oci) | [🟠 soon](#python/toolkit) |
+| [Go](https://github.com/cohere-ai/cohere-go) | [✅ docs](#go/platform) | [🟠 soon](#go/bedrock) | [🟠 soon](#go/sagemaker) | [✅ docs](#go/azure) | [🟠 soon](#go/oci) | [🟠 soon](#go/toolkit) |
+| [Java](https://github.com/cohere-ai/cohere-java) | [✅ docs](#java/platform) | [🟠 soon](#java/bedrock) | [🟠 soon](#java/sagemaker) | [✅ docs](#java/azure) | [🟠 soon](#java/oci) | [🟠 soon](#java/toolkit) |
## Feature support
The most complete set of features is found on the Cohere platform, while each of the cloud platforms supports a subset of these features. Please consult the platform-specific documentation for more information about the parameters it supports.
-| Feature | Cohere Platform | Bedrock | Sagemaker | Azure | OCI | Cohere Toolkit |
-| --------------- | --------------- | ----------- | ----------- | ----------- | ----------- | -------------- |
-| chat_stream | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| generate_stream | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| generate | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| embed | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| rerank | ✅ | ⬜️ | ✅ | ⬜️ | ⬜️ | ✅ |
-| classify | ✅ | ⬜️ | ⬜️ | ⬜️ | ⬜️ | ✅ |
-| summarize | ✅ | ⬜️ | ⬜️ | ⬜️ | ⬜️ | ✅ |
-| tokenize | ✅ | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) |
-| detokenize | ✅ | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) |
-| check_api_key | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Feature | Cohere Platform | Bedrock | Sagemaker | Azure | OCI | Cohere Toolkit |
+| --------------- | --------------- | ------------ | ------------ | ------------ | ------------ | -------------- |
+| chat_stream | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| generate_stream | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| generate | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| embed | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| rerank | ✅ | ⬜️ | ✅ | ⬜️ | ⬜️ | ✅ |
+| classify | ✅ | ⬜️ | ⬜️ | ⬜️ | ⬜️ | ✅ |
+| summarize | ✅ | ⬜️ | ⬜️ | ⬜️ | ⬜️ | ✅ |
+| tokenize | ✅ | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) |
+| detokenize | ✅ | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) | ✅ (offline) |
+| check_api_key | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Snippets
### Typescript
-#### Cohere Platform
+#### Cohere Platform
-```typescript TS
-const { CohereClient } = require('cohere-ai');
+```typescript TYPESCRIPT
+const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: 'Your API key',
+ token: "Your API key",
});
(async () => {
const response = await cohere.chat({
chatHistory: [
- { role: 'USER', message: 'Who discovered gravity?' },
+ { role: "USER", message: "Who discovered gravity?" },
{
- role: 'CHATBOT',
- message: 'The man who is widely credited with discovering gravity is Sir Isaac Newton',
+ role: "CHATBOT",
+ message:
+ "The man who is widely credited with discovering gravity is Sir Isaac Newton",
},
],
- message: 'What year was he born?',
+ message: "What year was he born?",
// perform web search before answering the question. You can also use your own custom connector.
- connectors: [{ id: 'web-search' }],
+ connectors: [{ id: "web-search" }],
});
console.log(response);
@@ -72,8 +74,8 @@ const cohere = new CohereClient({
#### Bedrock
-```typescript TS
-const { BedrockClient } = require('cohere-ai');
+```typescript TYPESCRIPT
+const { BedrockClient } = require("cohere-ai");
const cohere = new BedrockClient({
awsRegion: "us-east-1",
@@ -86,13 +88,14 @@ const cohere = new BedrockClient({
const response = await cohere.chat({
model: "cohere.command-r-plus-v1:0",
chatHistory: [
- { role: 'USER', message: 'Who discovered gravity?' },
+ { role: "USER", message: "Who discovered gravity?" },
{
- role: 'CHATBOT',
- message: 'The man who is widely credited with discovering gravity is Sir Isaac Newton',
+ role: "CHATBOT",
+ message:
+ "The man who is widely credited with discovering gravity is Sir Isaac Newton",
},
],
- message: 'What year was he born?',
+ message: "What year was he born?",
});
console.log(response);
@@ -101,8 +104,8 @@ const cohere = new BedrockClient({
#### Sagemaker
-```typescript TS
-const { SagemakerClient } = require('cohere-ai');
+```typescript TYPESCRIPT
+const { SagemakerClient } = require("cohere-ai");
const cohere = new SagemakerClient({
awsRegion: "us-east-1",
@@ -115,13 +118,14 @@ const cohere = new SagemakerClient({
const response = await cohere.chat({
model: "my-endpoint-name",
chatHistory: [
- { role: 'USER', message: 'Who discovered gravity?' },
+ { role: "USER", message: "Who discovered gravity?" },
{
- role: 'CHATBOT',
- message: 'The man who is widely credited with discovering gravity is Sir Isaac Newton',
+ role: "CHATBOT",
+ message:
+ "The man who is widely credited with discovering gravity is Sir Isaac Newton",
},
],
- message: 'What year was he born?',
+ message: "What year was he born?",
});
console.log(response);
@@ -130,33 +134,35 @@ const cohere = new SagemakerClient({
#### Azure
-```typescript TS
-const { CohereClient } = require('cohere-ai');
+```typescript TYPESCRIPT
+const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
token: "",
- environment: "https://Cohere-command-r-plus-phulf-serverless.eastus2.inference.ai.azure.com/v1",
+ environment:
+ "https://Cohere-command-r-plus-phulf-serverless.eastus2.inference.ai.azure.com/v1",
});
(async () => {
const response = await cohere.chat({
chatHistory: [
- { role: 'USER', message: 'Who discovered gravity?' },
+ { role: "USER", message: "Who discovered gravity?" },
{
- role: 'CHATBOT',
- message: 'The man who is widely credited with discovering gravity is Sir Isaac Newton',
+ role: "CHATBOT",
+ message:
+ "The man who is widely credited with discovering gravity is Sir Isaac Newton",
},
],
- message: 'What year was he born?',
+ message: "What year was he born?",
});
console.log(response);
})();
```
-### Python
+### Python
-#### Cohere Platform
+#### Cohere Platform
```python PYTHON
import cohere
@@ -257,9 +263,9 @@ response = co.chat(
print(response)
```
-### Go
+### Go
-#### Cohere Platform
+#### Cohere Platform
```go GO
package main
@@ -440,7 +446,7 @@ func main() {
### Java
-#### Cohere Platform
+#### Cohere Platform
```java JAVA
import com.cohere.api.Cohere;
@@ -495,4 +501,3 @@ public class ChatPost {
}
}
```
-
diff --git a/fern/pages/deployment-options/getting-started-with-coral-toolkit.mdx b/fern/pages/deployment-options/getting-started-with-coral-toolkit.mdx
index 12698ab0..ab4c6c91 100644
--- a/fern/pages/deployment-options/getting-started-with-coral-toolkit.mdx
+++ b/fern/pages/deployment-options/getting-started-with-coral-toolkit.mdx
@@ -6,52 +6,61 @@ hidden: true
createdAt: "Mon Mar 18 2024 17:43:10 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Tue Mar 19 2024 15:31:41 GMT+0000 (Coordinated Universal Time)"
---
-With Cohere's decision to open source some of our Github repositories, it has now become possible to launch your own Coral web application in under a minute. This will make it much easier to leverage the power of large language models for a variety of use cases.
+
+With Cohere's decision to open-source some of our GitHub repositories, it has now become possible to launch your own Coral web application in under a minute. This will make it much easier to leverage the power of large language models for a variety of use cases.
## Setting up
-To begin, make sure you have the with the SDK installed (the examples below are in Python, Typescript, and Go):
+To begin, make sure you have the SDK installed (the examples below are in Python, Typescript, and Go):
+
```python PYTHON
pip install cohere
```
+
```typescript TYPESCRIPT
npm i -s cohere-ai
```
+
```go GO
go get github.com/cohere-ai/cohere-go/v2
```
+
Import dependencies and set up the Cohere client.
+
```python PYTHON
import cohere
co = cohere.Client('Your API key')
```
+
```typescript TYPESCRIPT
import { CohereClient } from "cohere-ai";
const cohere = new CohereClient({
- token: "YOUR_API_KEY",
+ token: "YOUR_API_KEY",
});
(async () => {
- const prediction = await cohere.generate({
- prompt: "hello",
- maxTokens: 10,
- });
-
- console.log("Received prediction", prediction);
+ const prediction = await cohere.generate({
+ prompt: "hello",
+ maxTokens: 10,
+ });
+
+ console.log("Received prediction", prediction);
})();
```
+
```go GO
import cohereclient "github.com/cohere-ai/cohere-go/v2/client"
client := cohereclient.NewClient(cohereclient.WithToken(""))
```
+
(All the rest of the examples on this page will be in Python, but you can find more detailed instructions for getting set up by checking out the Github repositories for [Python](https://github.com/cohere-ai/cohere-python), [Typescript](https://github.com/cohere-ai/cohere-typescript), and [Go](https://github.com/cohere-ai/cohere-go).)
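To make the parallel with the Typescript snippet above explicit, here is a sketch of the same generate request expressed as Python keyword arguments; the client call itself is commented out so the sketch stays self-contained, and note that the Python SDK uses snake_case (`max_tokens`) where Typescript uses `maxTokens`.

```python PYTHON
# Sketch only: mirrors the Typescript generate example above.
# The network call is commented out; fill in your API key to run it.
generate_params = {
    "prompt": "hello",  # same prompt as the Typescript example
    "max_tokens": 10,   # Python equivalent of maxTokens
}

# import cohere
# co = cohere.Client("Your API key")
# prediction = co.generate(**generate_params)
# print("Received prediction", prediction)
print(generate_params)
```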
diff --git a/fern/pages/fine-tuning/classify-fine-tuning/classify-starting-the-training.mdx b/fern/pages/fine-tuning/classify-fine-tuning/classify-starting-the-training.mdx
index 3dfe34cf..df21f4ea 100644
--- a/fern/pages/fine-tuning/classify-fine-tuning/classify-starting-the-training.mdx
+++ b/fern/pages/fine-tuning/classify-fine-tuning/classify-starting-the-training.mdx
@@ -4,12 +4,13 @@ slug: "docs/classify-starting-the-training"
hidden: false
description: "This document provides a guide on how to start training a fine-tuning model for Classification using both the Web UI and the Python SDK, including steps for creating a model, uploading data, previewing data, starting training, and calling the fine-tuned model."
-image: "../../../assets/images/3fe7824-cohere_meta_image.jpg"
+image: "../../../assets/images/3fe7824-cohere_meta_image.jpg"
keywords: "classification models, fine-tuning language models, fine-tuning"
createdAt: "Fri Nov 10 2023 18:14:01 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu Jun 13 2024 13:10:55 GMT+0000 (Coordinated Universal Time)"
---
+
In this section, we will walk through how you can start training a fine-tuning model for Classification with both the [Web UI](/docs/fine-tuning-with-the-web-ui) and the Python SDK.
## Web UI
@@ -20,22 +21,20 @@ Creating a fine-tuned model for Classification with the Web UI consists of a few
Go to the [fine-tuning page](http://dashboard.cohere.com/fine-tuning) and click on 'Create a Classify model'.
-
-
+
### Upload Your Data
-Upload your custom dataset data by going to 'Training data' and clicking on the upload file button. Your data should be in `csv` or `.jsonl` format with exactly two columns—the first column consisting of the examples, and the second consisting of the labels.
-
-
+Upload your custom dataset data by going to 'Training data' and clicking on the upload file button. Your data should be in `csv` or `.jsonl` format with exactly two columns—the first column consisting of the examples, and the second consisting of the labels.
+
You also have the option of uploading a validation dataset. This will not be used during training, but will be used for evaluating the model’s performance post-training. To upload a validation set, go to 'Upload validation set (optional)' and repeat the same steps you just went through with the training dataset. If you don’t upload a validation dataset, the platform will automatically set aside part of the training dataset to use for validation.
At this point in time, if there are labels in the training set with fewer than five unique examples, those labels will be removed.
-
+
Once done, click 'Next'.
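For reference, here is a minimal sketch of what a two-column `.jsonl` training file could look like, generated with Python's standard library. The `text` and `label` field names and the example reviews are assumptions for illustration; check the dataset documentation for the exact schema.

```python PYTHON
import json

# Hypothetical single-label rows: the first field holds the example
# text and the second holds its label, matching the two-column
# format described above.
rows = [
    {"text": "The pizza was cold and bland.", "label": "negative"},
    {"text": "Fast delivery and great service!", "label": "positive"},
]

# Write one JSON object per line, which is what .jsonl expects.
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```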
@@ -44,15 +43,13 @@ Once done, click 'Next'.
The preview window will show a few samples of your custom training dataset, and your validation dataset (if you uploaded it).
-Toggle between the 'Training' and 'Validation' tabs to see a sample of your respective datasets.
-
-
+Toggle between the 'Training' and 'Validation' tabs to see a sample of your respective datasets.
+
At the bottom of this page, the distribution of labels in each respective dataset is shown.
-
-
+
If you are happy with how the samples look, click 'Continue'.
@@ -64,13 +61,11 @@ Now, everything is set for training to begin! Click 'Start training' to proceed.
Once your model completes training, you can call it by selecting 'Try in Playground' from the [model list view](https://dashboard.cohere.com/fine-tuning?tab=models):
-
-
+
or by selecting 'Try in Playground' from the model overview page:
-
-
+
Or, you can go to the [classify playground](https://dashboard.cohere.com/playground/classify) and select your model from the drop down menu:
@@ -102,7 +97,7 @@ single_label_dataset = co.datasets.create(name="single-label-dataset",
type="single-label-finetune-input",
parse_info=ParseInfo(delimiter=",")) # parse_info is optional
print(single_label_dataset.await_validation())
-
+
# start the fine-tune job using this dataset
finetune = co.finetuning.create_finetuned_model(
request=FinetunedModel(
@@ -126,9 +121,9 @@ print(f"fine-tune ID: {finetune.id}, fine-tune status: {finetune.status}"
multi_label_dataset = co.create_dataset(name="multi-label-dataset",
data=open("path/to/train.jsonl", "rb"),
dataset_type="multi-label-finetune-input")
-
+
print(multi_label_dataset.await_validation())
-
+
# start the fine-tune job using this dataset
finetune = co.finetuning.create_finetuned_model(
request=FinetunedModel(
@@ -147,7 +142,7 @@ print(f"fine-tune ID: {finetune.id}, fine-tune status: {finetune.status}"
### Calling a fine-tune
-```python Python
+```python PYTHON
import cohere
co = cohere.Client('Your API key')
diff --git a/fern/pages/fine-tuning/rerank-fine-tuning/rerank-starting-the-training.mdx b/fern/pages/fine-tuning/rerank-fine-tuning/rerank-starting-the-training.mdx
index f7f9f07d..afaf569d 100644
--- a/fern/pages/fine-tuning/rerank-fine-tuning/rerank-starting-the-training.mdx
+++ b/fern/pages/fine-tuning/rerank-fine-tuning/rerank-starting-the-training.mdx
@@ -4,13 +4,14 @@ slug: "docs/rerank-starting-the-training"
hidden: false
description: "This document provides a guide on how to start training a fine-tuning model for Rerank using both the Web UI and the Python SDK, including steps for creating the model, uploading data, previewing the data, starting training, and calling the fine-tuned model."
-image: "../../../assets/images/062ae18-cohere_meta_image.jpg"
+image: "../../../assets/images/062ae18-cohere_meta_image.jpg"
keywords: "fine-tuning, fine-tuning language models"
createdAt: "Mon Nov 13 2023 19:52:04 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Tue May 07 2024 21:37:02 GMT+0000 (Coordinated Universal Time)"
---
-In this section, we will walk through how you can start training a fine-tuning model for Rerank on both the Web UI and the Python SDK.
+
+In this section, we will walk through how you can start training a fine-tuning model for Rerank on both the Web UI and the Python SDK.
## Web UI
@@ -20,30 +21,27 @@ Creating a fine-tuned model for Rerank via the Web UI consists of a few simple s
Go to the [fine-tuning page](http://dashboard.cohere.com/fine-tuning) and click on 'Create a Rerank model'.
-
-
+
### Upload Your Data
-Upload your custom dataset data by going to 'Training data' and clicking on the upload file button. Your data should be in `jsonl` format with three fields: `query`, `relevant_passages`, and `hard_negatives`.
+Upload your custom dataset data by going to 'Training data' and clicking on the upload file button. Your data should be in `jsonl` format with three fields: `query`, `relevant_passages`, and `hard_negatives`.
- `query`: this field contains the question or target
- `relevant_passages`: this field contains a list of documents or passages with information that answers the `query`. For every query there must be at least one relevant passage.
- `hard_negatives`: this represents examples that appear to be relevant to the query but ultimately are not, because they don’t contain the answer. They differ from easy negatives, which are totally unrelated to the query. Hard negatives are optional, but providing them leads to improvements in overall performance. We believe ~five hard negatives lead to meaningful improvement, so include that many, if possible.
-
-
+
You also have the option of uploading a validation dataset. This will not be used during training, but will be used for evaluating the model’s performance during training. To do so, go to 'Upload validation set (optional)' and repeat the same steps you just completed with the training dataset. If you don’t upload a validation dataset, the platform will automatically set aside part of the training dataset to use for validation.
At this point in time, the platform will error if you upload a query in which a passage is listed as both a relevant passage and a hard negative.
-
+
-
-In addition, if your `hard_negatives` are empty strings or duplicated in a given row, we will remove those from the training set as well.
+In addition, if your `hard_negatives` are empty strings or duplicated in a given row, we will remove those from the training set as well.
Once done, click 'Next'.
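As a sketch, a single row of the `jsonl` training file described above could be produced with Python's standard library; the three field names come from this page, while the example texts are invented for illustration.

```python PYTHON
import json

# One training row with the three fields described above. Per the
# docs, relevant_passages must contain at least one passage, and
# hard_negatives are optional but ~five of them tend to help.
row = {
    "query": "Who discovered gravity?",
    "relevant_passages": [
        "Sir Isaac Newton is widely credited with discovering gravity."
    ],
    "hard_negatives": [
        "Newton's Principia was published in 1687.",
        "Gravity is a 2013 science fiction film.",
    ],
}

with open("rerank_train.jsonl", "w") as f:
    f.write(json.dumps(row) + "\n")
```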
@@ -51,13 +49,11 @@ Once done, click 'Next'.
The preview window will show a few samples of your custom training dataset, and your validation dataset (if you uploaded it).
-
+
+Toggle between the 'Training' and 'Validation' tabs to see a sample of your respective datasets.
-Toggle between the 'Training' and 'Validation' tabs to see a sample of your respective datasets.
-
-
-
+
At the top of this page, we will show some dataset statistics, such as the average number of relevant passages per query and the average number of hard negatives per query. We will also display a total of three queries from your dataset so you can check for formatting.
@@ -67,8 +63,7 @@ If you are happy with how the samples look, click 'Continue'.
Now, everything is set for training to begin. Click 'Start training' to proceed.
-
-
+
### Calling the Fine-tuned Model
@@ -92,7 +87,7 @@ rerank_dataset = co.datasets.create(name="rerank-dataset",
data=open("path/to/train.jsonl", "rb"),
type="reranker-finetune-input")
print(co.wait(rerank_dataset))
-
+
# start the fine-tune job using this dataset
finetune = co.finetuning.create_finetuned_model(
request=FinetunedModel(
@@ -114,12 +109,12 @@ print(f"fine-tune ID: {finetune.id}, fine-tune status: {finetune.status}")
Please see our API docs for the full documentation on passing the request. For `base_model`, we currently have two parameters for rerank:
-- `base_type` - For rerank, this should always be "BASE_TYPE_RERANK"
+- `base_type` - For rerank, this should always be "BASE_TYPE_RERANK"
- `name` (str): the baseline rerank model you would like to train. We currently have two model options: english and multilingual. By default we will always train on the most recent version of the rerank models.
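Putting the two parameters together, the `base_model` portion of the request could be sketched like this. It is shown as a plain dictionary rather than the SDK's typed objects, so treat the exact shape as an assumption and check the API reference before use.

```python PYTHON
# Sketch of the two base_model parameters described above.
# "english" is one of the two documented options; swap in
# "multilingual" for the multilingual reranker.
base_model = {
    "base_type": "BASE_TYPE_RERANK",  # always this value for rerank
    "name": "english",                # or "multilingual"
}
print(base_model)
```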
### Calling a fine-tune
-```python Python
+```python PYTHON
import cohere
co = cohere.Client('Your API key')
diff --git a/fern/pages/integrations/integrations/redis-and-cohere.mdx b/fern/pages/integrations/integrations/redis-and-cohere.mdx
index fb1d4188..716a9a74 100644
--- a/fern/pages/integrations/integrations/redis-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/redis-and-cohere.mdx
@@ -11,7 +11,11 @@ createdAt: "Mon Feb 26 2024 22:22:44 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 17:06:49 GMT+0000 (Coordinated Universal Time)"
---
-
+
[RedisVL](https://www.redisvl.com/) provides a powerful, dedicated Python client library for using Redis as a Vector Database. This guide walks through how to integrate [Cohere embeddings](/docs/embeddings) with Redis using a dataset of Wikipedia articles to set up a pipeline for semantic search. It will cover:
@@ -144,7 +148,7 @@ index.create(overwrite=True)
Note that we are using `SearchIndex.from_yaml` because we are choosing to import the schema from a YAML file; we could also use `SearchIndex.from_dict`.
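For comparison, `SearchIndex.from_dict` would take the same schema inline. The sketch below shows a plausible schema dictionary; the field names, dimensions, and attributes here are assumptions based on this walkthrough rather than the exact schema used above, and the RedisVL calls are commented out so the snippet stays self-contained.

```python PYTHON
# Hypothetical inline schema mirroring a YAML schema file; check the
# exact keys against the RedisVL schema documentation.
schema = {
    "index": {"name": "semantic_search_demo", "prefix": "wiki"},
    "fields": [
        {"name": "text", "type": "text"},
        {
            "name": "embedding",
            "type": "vector",
            "attrs": {
                "dims": 1024,              # must match your embedding model
                "distance_metric": "cosine",
                "algorithm": "hnsw",
            },
        },
    ],
}

# from redisvl.index import SearchIndex
# index = SearchIndex.from_dict(schema)
# index.create(overwrite=True)
print(schema["index"]["name"])
```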
-```curl CURL
+```bash CURL
!rvl index listall
```
@@ -157,7 +161,7 @@ The above code checks to see if an index has been created. If it has, you should
Look inside the index to make sure it matches the schema you want
-```curl CURL
+```bash CURL
!rvl index info -i semantic_search_demo
```
@@ -189,7 +193,7 @@ Index Fields:
You can also visit: [http://localhost:8001/redis-stack/browser](http://localhost:8001/redis-stack/browser). The Redis GUI will show you the index in realtime.
-
+
## Loading your Documents and Embedding them into Redis:
diff --git a/fern/pages/llm-university/intro-deployment/cohere-google-sheets-apps-script.mdx b/fern/pages/llm-university/intro-deployment/cohere-google-sheets-apps-script.mdx
index cd1094b2..c41ae92a 100644
--- a/fern/pages/llm-university/intro-deployment/cohere-google-sheets-apps-script.mdx
+++ b/fern/pages/llm-university/intro-deployment/cohere-google-sheets-apps-script.mdx
@@ -6,6 +6,7 @@ hidden: false
createdAt: "Mon Jun 05 2023 07:11:52 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu Apr 04 2024 03:48:00 GMT+0000 (Coordinated Universal Time)"
---
+
_The source code used for this example is available in our [GitHub repository](https://github.com/cohere-ai/notebooks/tree/main/notebooks/llmu/examples/deploy_google_apps_script)_.
This tutorial demonstrates how to use Cohere and [Google Apps Script](https://developers.google.com/apps-script/guides/sheets), a scripting language based on JavaScript, to build two custom functions that add large language model (LLM) capabilities to Google Sheets documents. We'll be classifying the sentiment and generating summaries of movie reviews.
@@ -30,15 +31,15 @@ The first step is to create a set of examples needed by the Classify endpoint to
The Classify endpoint requires a minimum of two training examples per class, though more training examples are generally better. You can find the examples we'll use for running movie reviews sentiment analysis in the `Classifier Examples` worksheet. It contains five examples for each class – `positive` and `negative`.
![](../../../assets/images/e1871e9-classification-training-examples.png)
+
### Step 2: Create a Custom Apps Script Function for Text Classification
-Now we are going to create the script for the first task: sentiment analysis. In Google Sheets, go to `Extensions` and then go to `Apps Script`, which will take us to the Apps Script environment.
+Now we are going to create the script for the first task: sentiment analysis. In Google Sheets, go to `Extensions` and then go to `Apps Script`, which will take us to the Apps Script environment.
Next, we’ll use the Classify endpoint to build our text classification function, starting with the code below. It is a basic function definition taking in two parameters. The first one (`s_args`) is the range of cells for the training examples we defined in the previous section. The second one (`s_input`) is the new text we want to label or classify.
-```javascript
-function classify(s_args, s_input) {
-}
+```javascript classify_summarize.js
+function classify(s_args, s_input) {}
```
Let’s now add the logic between the curly braces. First, we extract the training examples from `s_args` and package them into something that the Classify endpoint can ingest.
@@ -49,40 +50,47 @@ Next, we parse the response and extract the classification and confidence level.
Finally, we output the parsed response back into the cell where we call the function.
-```javascript
+```javascript classify_summarize.js
function classify(s_args, s_input) {
-
- // we start by extracting the training data from the specified range in s_args and packaging it into a prompt
- let inputs = [s_input];
- let examples = [];
- for (let i = 0; i< s_args.length; i++) {
- //this is assuming label is always the 2nd parameter
- examples.push({"text" : s_args[i][0] , "label" : s_args[i][1]})
- }
-
- const payload_for_classification = {
- examples,
- inputs,
- "model" : "embed-english-v2.0",
- };
-
- const options = {
- 'headers' : { 'Authorization' : 'Bearer ' + TOKEN },
- 'method' : 'post',
- 'contentType': 'application/json',
- 'payload' : JSON.stringify(payload_for_classification)
- };
-
- const response = UrlFetchApp.fetch('https://api.cohere.ai/v1/classify', options);
- const responseContent = response.getContentText();
- const responseContentText = JSON.parse(response.getContentText());
- const classificationContent = responseContentText.classifications[0];
- const ourPrediction = classificationContent.prediction;
- const confidence = (classificationContent.confidence * 100).toFixed(2) + "%";
- let displayString = "Prediction: " + ourPrediction.toUpperCase() + " review\n" + "Confidence Level: " + confidence + "\n";
- return displayString;
-
+ // we start by extracting the training data from the specified range in s_args and packaging it into a prompt
+ let inputs = [s_input];
+ let examples = [];
+ for (let i = 0; i < s_args.length; i++) {
+ //this is assuming label is always the 2nd parameter
+ examples.push({ text: s_args[i][0], label: s_args[i][1] });
}
+
+ const payload_for_classification = {
+ examples,
+ inputs,
+ model: "embed-english-v2.0",
+ };
+
+ const options = {
+ headers: { Authorization: "Bearer " + TOKEN },
+ method: "post",
+ contentType: "application/json",
+ payload: JSON.stringify(payload_for_classification),
+ };
+
+ const response = UrlFetchApp.fetch(
+ "https://api.cohere.ai/v1/classify",
+ options
+ );
+ const responseContent = response.getContentText();
+ const responseContentText = JSON.parse(response.getContentText());
+ const classificationContent = responseContentText.classifications[0];
+ const ourPrediction = classificationContent.prediction;
+ const confidence = (classificationContent.confidence * 100).toFixed(2) + "%";
+ let displayString =
+ "Prediction: " +
+ ourPrediction.toUpperCase() +
+ " review\n" +
+ "Confidence Level: " +
+ confidence +
+ "\n";
+ return displayString;
+}
```
### Step 3: Create a Custom Apps Script Function for Text Summarization
@@ -97,30 +105,31 @@ We create a simple prompt asking the endpoint to provide a summary in about ten
Finally, we parse the response and output the summary.
-```javascript
+```javascript classify_summarize.js
function summarize(cell_value) {
-
- let raw = JSON.stringify({
- "message": "Summarize this in about 10 words: " + cell_value,
- "temperature": 0.2,
- "model": "command-r",
- });
- let requestOptions = {
- 'method': 'post',
- 'muteHttpExceptions': true,
- 'contentType': 'application/json',
- 'headers': {
- 'Authorization': 'Bearer ' + TOKEN
- },
- 'payload': raw,
- redirect: 'follow'
- };
-
- let response = UrlFetchApp.fetch("https://api.cohere.ai/v1/chat", requestOptions)
- let responseContentTxt = JSON.parse(response.getContentText());
- let summarizedTxt = "SUMMARY: " + responseContentTxt.text + "\n";
- return summarizedTxt;
-
+ let raw = JSON.stringify({
+ message: "Summarize this in about 10 words: " + cell_value,
+ temperature: 0.2,
+ model: "command-r",
+ });
+ let requestOptions = {
+ method: "post",
+ muteHttpExceptions: true,
+ contentType: "application/json",
+ headers: {
+ Authorization: "Bearer " + TOKEN,
+ },
+ payload: raw,
+ redirect: "follow",
+ };
+
+ let response = UrlFetchApp.fetch(
+ "https://api.cohere.ai/v1/chat",
+ requestOptions
+ );
+ let responseContentTxt = JSON.parse(response.getContentText());
+ let summarizedTxt = "SUMMARY: " + responseContentTxt.text + "\n";
+ return summarizedTxt;
}
```
@@ -134,8 +143,7 @@ Now, we get to the part we’ve been waiting for – classifying and summarizing
For this, we’ll move back to the Google Sheets view and go to the `Demo` worksheet. It has three columns, as shown below.
-
-
+
The first column is the input text: the movie review whose sentiment we want to classify and which we want to summarize. Here, we have a review of The Batman.
@@ -143,7 +151,7 @@ In the second column, we call our text classification function as follows.
`=classify('Classifier Examples'!$A$3:$B$12,A3)`
-Recall that this function takes in two parameters. The first one is the list of training examples. Our training examples are provided in the `Classifier Examples` tab between cells A3 and B12.
+Recall that this function takes in two parameters. The first one is the list of training examples. Our training examples are provided in the `Classifier Examples` tab between cells A3 and B12.
The second parameter is `A3`, the review of The Batman. When we hit enter, it predicts the class, as shown in the screenshot above.
diff --git a/fern/pages/llm-university/intro-deployment/deploying-with-chrome-extension.mdx b/fern/pages/llm-university/intro-deployment/deploying-with-chrome-extension.mdx
index 81a51853..e49bbbd2 100644
--- a/fern/pages/llm-university/intro-deployment/deploying-with-chrome-extension.mdx
+++ b/fern/pages/llm-university/intro-deployment/deploying-with-chrome-extension.mdx
@@ -32,7 +32,7 @@ The first step is to create a folder for this extension, which we’ll call `con
First, we’ll create a function for text display, which will display the generated summary of a web page at the top of the page. This function, `display(text)`, creates the elements and styling for the header and the summary. Finally, it inserts the header immediately before the HTML body.
-```javascript
+```javascript condense/scripts/content.js
// Display the text at the top of the page
function display(text) {
// Create a purple header
@@ -61,7 +61,7 @@ Next, we’ll create a function, `summarize(text)`, to perform the text summariz
This function then takes the endpoint response, `response.text`, and sends it to the `display(text)` function we created in the previous section.
-```javascript
+```javascript condense/scripts/content.js
// Fetch the summary for the given text and display it
function summarize(text) {
// Use the user's stored API key
@@ -105,7 +105,7 @@ function summarize(text) {
Now we create a function that feeds the text needed by the `summarize(text)` function. Here, `getVisibleText()` returns only the visible text from the page. It uses jQuery selectors (requires jQuery v3.7.0 to be added to the `scripts` folder) to try to find the page's main body of content, often in a `content` or `main` element. It also defines a fallback to using the whole body content.
-```javascript
+```javascript condense/scripts/content.js
// Returns true if the given element isn't visible on the page
function isHidden(el) {
var style = window.getComputedStyle(el);
@@ -156,7 +156,7 @@ function getVisibleText() {
The following code block stitches together the three steps we created to capture visible text from a web page, summarize it, and display it at the top.
-```javascript
+```javascript condense/scripts/content.js
// This code block runs when pages are loaded.
chrome.storage.sync.get("apiKey", (key) => {
if (key.apiKey === undefined) {
@@ -184,42 +184,42 @@ The manifest file is required by a Chrome extension, describing how the extensio
So let’s create a `manifest.json` file at the root of our folder. The contents of the file are shown below. It contains the metadata, permissions, description of the UI for adding the API key, and the scripts to run.
-```javascript
+```json manifest.json
// This file describes how the extension is configured.
{
- // Metadata
- "name": "co:ndense",
- "version": "1.0",
- "description": "Summarizes web pages",
- "manifest_version": 3,
-
- // - activeTab gives us access to the user's currently active tab
- // - scripting gives us permission to run our code
- // - storage allows us to store the user's API key
- "permissions": ["activeTab", "scripting", "storage"],
-
- // This describes the UI for user options
- "options_ui": {
- // This page contains the UI
- "page": "options/options.html",
- // The options will open as a pop-up, not a new tab
- "open_in_tab": false
- },
-
- "content_scripts": [{
- // scripts/content.js contains the extension's main logic.
- // It relies on jQuery so we need to configure the path to that too.
- "js": ["scripts/content.js", "scripts/jquery-3.7.0.min.js"],
- // This extension will run on all webpages by default.
- "matches": [
- ""
- ],
- // Don't run the extension on these specific sites
- "exclude_globs": [
- // The header breaks cursor alignment in Google Docs
- "*://docs.google.com/*"
- ]
- }]
+ // Metadata
+ "name": "co:ndense",
+ "version": "1.0",
+ "description": "Summarizes web pages",
+ "manifest_version": 3,
+
+ // - activeTab gives us access to the user's currently active tab
+ // - scripting gives us permission to run our code
+ // - storage allows us to store the user's API key
+ "permissions": ["activeTab", "scripting", "storage"],
+
+ // This describes the UI for user options
+ "options_ui": {
+ // This page contains the UI
+ "page": "options/options.html",
+ // The options will open as a pop-up, not a new tab
+ "open_in_tab": false
+ },
+
+ "content_scripts": [
+ {
+ // scripts/content.js contains the extension's main logic.
+ // It relies on jQuery so we need to configure the path to that too.
+ "js": ["scripts/content.js", "scripts/jquery-3.7.0.min.js"],
+ // This extension will run on all webpages by default.
+ "matches": [""],
+ // Don't run the extension on these specific sites
+ "exclude_globs": [
+ // The header breaks cursor alignment in Google Docs
+ "*://docs.google.com/*"
+ ]
+ }
+ ]
}
```
diff --git a/fern/pages/text-embeddings/embed-jobs-api.mdx b/fern/pages/text-embeddings/embed-jobs-api.mdx
index 316a9419..38f255e1 100644
--- a/fern/pages/text-embeddings/embed-jobs-api.mdx
+++ b/fern/pages/text-embeddings/embed-jobs-api.mdx
@@ -4,16 +4,16 @@ slug: "docs/embed-jobs-api"
hidden: false
description: "This document provides a guide on using the Embed Jobs API to embed a large amount of text data asynchronously, with a focus on creating datasets and running embed jobs. It also explains how to set up the Cohere client, upload datasets, kick off embed jobs, and save or view the results."
-image: "../../assets/images/86af0e8-cohere_meta_image.jpg"
+image: "../../assets/images/86af0e8-cohere_meta_image.jpg"
keywords: "datasets embedding, embedding models, vector embeddings"
createdAt: "Sat Jan 13 2024 17:53:12 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon Jun 17 2024 10:20:02 GMT+0000 (Coordinated Universal Time)"
---
-
- You can find the API reference for the api [here](/reference/create-embed-job)
-
- The Embed Jobs API is only compatible with our embed v3.0 models
+
+
+ You can find the API reference for this endpoint [here](/reference/create-embed-job)
+ The Embed Jobs API is only compatible with our embed v3.0 models
In this guide, we show you how to use the embed jobs endpoint to asynchronously embed a large volume of text. This guide uses a simple dataset of Wikipedia pages and their associated metadata to illustrate the endpoint’s functionality. To see an end-to-end example of retrieval, check out this [notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/Embed_Jobs_Semantic_Search.ipynb).
@@ -25,6 +25,7 @@ The Embed Jobs API was designed for users who want to leverage the power of retr
The Embed Jobs API works in conjunction with the Embed API; in production use-cases, Embed Jobs is used to stage large periodic updates to your corpus and Embed handles real-time queries and smaller real-time updates.
![](../../assets/images/0826a69-image.png)
+
### Constructing a Dataset for Embed Jobs
To create a dataset for Embed Jobs, you will need to specify the `embedding_types`, and set `dataset_type` to `embed-input`. The schema of the file looks like: `text:string`.
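As a rough sketch of what that schema means in practice (the filename and example rows here are placeholder assumptions, not from the guide), a valid `embed-input` file is simply JSON Lines with one string field named `text` per row:

```python
import json

# Build a JSONL file matching the embed-input schema: one JSON object
# per line, each with a single string field named "text".
rows = [
    {"text": "Wikipedia is a free online encyclopedia."},
    {"text": "Embed Jobs processes large batches of text asynchronously."},
]
with open("embed_input.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# The file would then be uploaded as a dataset with
# dataset_type="embed-input" before kicking off an embed job.
```

The upload step itself is shown with the SDK later in this guide; the sketch above only illustrates the expected file format.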
@@ -112,7 +113,7 @@ If your dataset hits a validation error, please refer to the dataset validation
Your dataset is now ready to be embedded. Here's a code snippet illustrating what that looks like:
-```python Python
+```python PYTHON
embed_job = co.embed_jobs.create(
dataset_id=input_dataset.id,
input_type='search_document',
@@ -130,14 +131,14 @@ Since we’d like to search over these embeddings and we can think of them as co
The output of embed jobs is a dataset object which you can download or pipe directly to a database of your choice:
-```python Python
+```python PYTHON
output_dataset=co.datasets.get(id=embed_job.output.id)
co.utils.save(output_dataset, filepath='/content/embed_job_output.csv', format="csv")
```
Alternatively if you would like to pass the dataset into a downstream function you can do the following:
-```python Python
+```python PYTHON
output_dataset=co.datasets.get(id=embed_job.output.id)
results=[]
for record in output_dataset:
diff --git a/fern/pages/text-generation/connectors/connector-authentication.mdx b/fern/pages/text-generation/connectors/connector-authentication.mdx
index 7fd3c3dc..da982c0a 100644
--- a/fern/pages/text-generation/connectors/connector-authentication.mdx
+++ b/fern/pages/text-generation/connectors/connector-authentication.mdx
@@ -4,20 +4,21 @@ slug: "docs/connector-authentication"
hidden: false
description: "The document outlines three methods for authentication and authorization in Cohere: Service to Service Authentication, OAuth 2.0, and Pass-Through. It provides detailed instructions on how to set up each method for protecting connectors from unauthorized use."
-image: "../../../assets/images/a8cf803-cohere_meta_image.jpg"
+image: "../../../assets/images/a8cf803-cohere_meta_image.jpg"
keywords: "Cohere connectors, retrieval augmented generation"
createdAt: "Fri Dec 01 2023 17:20:54 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 30 2024 15:53:23 GMT+0000 (Coordinated Universal Time)"
---
+
Cohere supports three methods for authentication and authorization to protect your connectors from unauthorized use. They are:
1. Service to Service Authentication
2. OAuth 2.0
3. Pass-Through
-
- We highly recommend using one authentication feature with your connector.
+
+ We highly recommend using one authentication feature with your connector.
The Chat API sends the request to your connector with the related auth token in the `Authorization` header. Your connector should therefore expect the header to contain this auth token, and it'll capture it, verify it, and use it in the appropriate manner to access the underlying data store.
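As an illustration of that server-side check (the `is_authorized` helper is an assumption modeled on the quickstart connectors' `CONNECTOR_API_KEY` convention, not a prescribed API), verifying the bearer token can be as simple as:

```python
import hmac
import os

def is_authorized(headers: dict) -> bool:
    """Check the request's Authorization header against the configured token."""
    expected = os.environ.get("CONNECTOR_API_KEY", "")
    auth = headers.get("Authorization", "")
    if not expected or not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(token, expected)
```

A connector would call this on every incoming `/search` request and return a 401 when it fails.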
@@ -30,8 +31,7 @@ Authorization: Bearer
The first method is **service to service authentication**, through which Cohere sends requests to your connector with a single token that you generate and configure. We recommend service to service authentication when your connector is searching data that everyone in your organization has access to, such as company policies, public messaging channels, FAQs, etc.
-
-
+
To enable service level authentication, you will need to generate a token, configure your connector to verify it, and provide it to Cohere, either during registration or as part of an update.
@@ -39,14 +39,14 @@ First, start by generating a secure token. Here’s a snippet of what generating
```python PYTHON
# Generate a token
-import secrets
+import secrets
secrets.token_urlsafe(32)
```
After generating the token, you will have to configure your connector to check it. The quick start connectors should expose an environment variable for you to use. For example, the Google Drive connector exposes `CONNECTOR_API_KEY` for this purpose. After setting this environment variable, you should verify that requests without the appropriate `Authorization` header are being denied:
-```curl CURL
+```bash CURL
curl --request POST
--url 'https://connector-example.com/search'
--header 'Content-Type: application/json'
@@ -66,27 +66,30 @@ const response = await fetch('{base_connector_url}/search'{
const data = await response.json();
console.log(data);
-```
+
+````
You should also verify that requests with the correct header are successful:
-```curl CURL
-curl --request POST
+```bash CURL
+curl --request POST
--url 'https://connector-example.com/search'
- --header 'Content-Type: application/json'
- --header 'Authorization: Bearer {Connector API key}'
- --data '{
- "query": "How do I expense a meal?"
+ --header 'Content-Type: application/json'
+ --header 'Authorization: Bearer {Connector API key}'
+ --data '{
+ "query": "How do I expense a meal?"
}'
-```
+````
+
```python PYTHON
import requests
r = requests.post('{base_connector_url}/search',
data={'query': 'How do I expense a meal?'},
headers={"Authorization":"Bearer {Connector API key}"})
```
+
```typescript TYPESCRIPT
const response = await fetch('{base_connector_url}/search'{
method: 'POST',
@@ -96,12 +99,13 @@ const data = await response.json();
console.log(data);
```
+
Finally, you will have to provide Cohere with the token. You can do this during registration:
-```curl CURL
+```bash CURL
curl --request POST
--url 'https://api.cohere.ai/v1/connectors'
--header 'Authorization: Bearer {Cohere API key}'
@@ -151,7 +155,7 @@ const cohere = new CohereClient({
Or if you have already registered the connector, by performing an update:
-```curl CURL
+```bash CURL
curl --request PATCH
--url 'https://api.cohere.ai/v1/connectors/{id}'
--header 'Authorization: Bearer {Cohere API key}'
@@ -199,13 +203,12 @@ Cohere supports the "Authorization Code Grant" type of OAuth 2.0 outlined in [se
-
To enable **OAuth 2.0** for your connector, you will have to modify your connector to forward bearer tokens, configure OAuth 2.0 credentials in your data source, and provide them to Cohere, alongside other configuration details. Cohere then handles directing each user through the OAuth flow, during which process users will grant Cohere permission to query data on their behalf. Cohere also handles refreshing tokens when they expire so your connector doesn’t have to.
First, you will have to modify your connector to forward the `Authorization` header from the incoming request on to the request it makes to the data source. A few quickstart connectors (Google Drive and Slack) do this out of the box without any configuration, so you may wish to look at those to copy this functionality. If you have access to an API key for the service, you should be able to test your connector with the following request (depending on the underlying data source; most handle personal API keys and OAuth access tokens similarly):
-```curl CURL
+```bash CURL
curl --request POST
--url https://connector-example.com/search
--header 'Content-Type: application/json'
@@ -228,85 +231,93 @@ const response = await fetch('http://connector-example.com/search'{
const data = await response.json();
console.log(data);
+
```
Next, you will need to configure OAuth 2.0 credentials in your data source. This looks different depending on the data source but when complete you should have a `client_id`, a `client_secret`, and optionally the desired `scope`s that define what Cohere can query on behalf of the user. You will also have to provide the following redirect URI as a part of the configuration:
```
+
https://api.cohere.com/v1/connectors/oauth/token
+
```
After you’ve obtained your credentials, you will also have to find the data source’s OAuth 2.0 “Authorize” and “Token” URLs. For example, Google’s OAuth 2.0 URLs look like this:
```
+
https://accounts.google.com/o/oauth2/auth
https://oauth2.googleapis.com/token
-```
+
+````
You will have to provide all of this information to Cohere during registration:
-```curl CURL
-curl --request POST
- --url 'https://api.cohere.ai/v1/connectors'
- --header 'Authorization: Bearer {Cohere API key}'
- --header 'Content-Type: application/json'
- --data '{
- "name":"test-connector",
- "description":"A test connector",
- "url":"https://connector-example.com/search",
- "oauth": {
- "client_id": "xxx-yyy.apps.googleusercontent.com",
- "client_secret": "zzz-vvv",
- "authorize_url": "https://accounts.google.com/o/oauth2/auth",
- "token_url": "https://oauth2.googleapis.com/token",
- "scope": "https://www.googleapis.com/auth/drive.readonly"
- }
+```bash CURL
+curl --request POST
+ --url 'https://api.cohere.ai/v1/connectors'
+ --header 'Authorization: Bearer {Cohere API key}'
+ --header 'Content-Type: application/json'
+ --data '{
+ "name":"test-connector",
+ "description":"A test connector",
+ "url":"https://connector-example.com/search",
+ "oauth": {
+ "client_id": "xxx-yyy.apps.googleusercontent.com",
+ "client_secret": "zzz-vvv",
+ "authorize_url": "https://accounts.google.com/o/oauth2/auth",
+ "token_url": "https://oauth2.googleapis.com/token",
+ "scope": "https://www.googleapis.com/auth/drive.readonly"
+ }
}'
-```
+````
+
```python PYTHON
-import cohere
+import cohere
co = cohere.Client('Your API key')
created_connector = co.create_connector(
name="test-connector",
url="http://connector-example.com/search",
- oauth={
- "client_id": "xxx-yyy.apps.googleusercontent.com",
- "client_secret": "zzz-vvv",
- "authorize_url": "https://accounts.google.com/o/oauth2/auth",
- "token_url": "https://oauth2.googleapis.com/token",
- "scope": "https://www.googleapis.com/auth/drive.readonly"
+ oauth={
+ "client_id": "xxx-yyy.apps.googleusercontent.com",
+ "client_secret": "zzz-vvv",
+ "authorize_url": "https://accounts.google.com/o/oauth2/auth",
+ "token_url": "https://oauth2.googleapis.com/token",
+ "scope": "https://www.googleapis.com/auth/drive.readonly"
},
)
```
+
```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: "<>",
+ token: "<>",
});
(async () => {
const connector = await cohere.connectors.create({
name: "test-connector",
url: "http://connector-example.com/search",
description: "A test connector",
- oauth: {
- clientId: "xxx-yyy.apps.googleusercontent.com",
- clientSecret: "zzz-vvv",
- authorizeUrl: "https://accounts.google.com/o/oauth2/auth",
- tokenUrl: "https://oauth2.googleapis.com/token",
- scope: "https://www.googleapis.com/auth/drive.readonly"
- }
+ oauth: {
+ clientId: "xxx-yyy.apps.googleusercontent.com",
+ clientSecret: "zzz-vvv",
+ authorizeUrl: "https://accounts.google.com/o/oauth2/auth",
+ tokenUrl: "https://oauth2.googleapis.com/token",
+ scope: "https://www.googleapis.com/auth/drive.readonly",
+ },
});
console.log(connector);
})();
```
+
Or if you have already registered the connector, by performing an update:
-```curl CURL
+```bash CURL
curl --request PATCH
--url 'https://api.cohere.ai/v1/connectors/{id}'
--header 'Authorization: Bearer {Cohere API key}'
@@ -361,7 +372,6 @@ The last option available for auth allows you to specify an access token per con
-
To use pass through authentication/authorization specify the access token in the chat request like so:
@@ -373,7 +383,7 @@ response = co.chat(
connectors=[{"id": "web-search", "user_access_token": "{Personal/Service API key}" }]
)
```
-```curl CURL
+```bash CURL
curl --location 'https://production.api.cohere.ai/v1/chat' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {Your API key}' \
@@ -386,4 +396,3 @@ curl --location 'https://production.api.cohere.ai/v1/chat' \
In this example, Cohere will call your `internal-docs` connector with an `Authorization` header of `Bearer {Personal/Service API key}`.
-
diff --git a/fern/pages/text-generation/connectors/creating-and-deploying-a-connector.mdx b/fern/pages/text-generation/connectors/creating-and-deploying-a-connector.mdx
index 5a56d60c..303be282 100644
--- a/fern/pages/text-generation/connectors/creating-and-deploying-a-connector.mdx
+++ b/fern/pages/text-generation/connectors/creating-and-deploying-a-connector.mdx
@@ -2,14 +2,15 @@
title: "Creating and Deploying a Connector"
slug: "docs/creating-and-deploying-a-connector"
-hidden: false
+hidden: false
description: "This document provides a guide for implementing a connector, which involves setting up the connector and data source, configuring the connection between the connector and the Chat API, deploying the connector, and registering it with Cohere to generate grounded generations. It also includes recommendations for structuring the response from the connector to improve final generations."
-image: "../../../assets/images/cf31483-cohere_meta_image.jpg"
+image: "../../../assets/images/cf31483-cohere_meta_image.jpg"
keywords: "Cohere connectors, retrieval augmented generation"
createdAt: "Fri Dec 01 2023 17:20:28 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon May 06 2024 19:20:26 GMT+0000 (Coordinated Universal Time)"
---
+
This section will provide a rough guide for implementing a connector. There are two major parts of this process:
1. Set up the connector and underlying data source, which requires:
@@ -24,12 +25,11 @@ We highly recommend using one of the authentication and authorization methods ou
Configure the connection between the connector and the data source:
-
-
+
The connector acts as a proxy between Chat API and the data source. The first step is to configure it to be able to connect to and search the underlying data source. Linking the connector with the data source typically requires setting up a service account in the data source, configuring the service account’s permissions, and copying appropriate credentials into the connector’s environment.
-We have provided helpful example skeleton code in this [github repository](https://github.com/cohere-ai/quick-start-connectors). Each of the quickstart connectors contains a detailed description of the necessary configuration in their README file. The skeleton code is a small example of how to connect to the data source. You will have to edit the code to make it applicable for your situation. You may, for example, customize the exact search query to better suit your needs.
+We have provided helpful example skeleton code in this [GitHub repository](https://github.com/cohere-ai/quick-start-connectors). Each of the quickstart connectors contains a detailed description of the necessary configuration in its README file. The skeleton code is a small example of how to connect to the data source. You will have to edit the code to make it applicable to your situation. You may, for example, customize the exact search query to better suit your needs.
The quickstart connector for Google Drive, for example, requires you to create an “application” in the Google Cloud Console and copy variables into the connector’s environment, like so:
@@ -39,12 +39,12 @@ GDRIVE_SERVICE_ACCOUNT_INFO=xxxx
#### Configure the connection between the connector and the Chat API
-Once the connection between the connector and the data source is configured you will need to ensure the response and response structure of the service is compatible with the Chat API. The connector must also respond within 30 seconds for the results to be used in the final generation.
+Once the connection between the connector and the data source is configured you will need to ensure the response and response structure of the service is compatible with the Chat API. The connector must also respond within 30 seconds for the results to be used in the final generation.
-The request from the Chat API to the connector is a POST request that accepts a single `query` parameter in the body. The request URL must end with `/search` and contain no query parameters, like so:
+The request from the Chat API to the connector is a POST request that accepts a single `query` parameter in the body. The request URL must end with `/search` and contain no query parameters, like so:
-```curl CURL
+```bash CURL
curl --request POST
--url 'https://connector-example.com/search'
--header 'Content-Type: application/json'
@@ -64,42 +64,44 @@ const response = await fetch('{base_connector_url}/search'{
const data = await response.json();
console.log(data);
-```
+
+````
-The response from the connector should be a JSON object with a list of documents in the result field, like so:
+The response from the connector should be a JSON object with a list of documents in the result field, like so:
```json Example Response JSON
-{
- "results": [
- {
- "id": "expense_doc_1",
- "title": "Expense Policies for external travel",
- "text": "You can expense any work-related...",
- "url": "https://internal-site.com/expensing"
- "created_at": "2023-11-25T20:09:31Z",
- },
- {
- "id": "expense_doc_2",
- "title": "Expense Policies for internal travel",
- "text": "You can expense any work-related...",
- "created_at": "2023-11-25T20:09:31Z",
- },...
- ]
+{
+ "results": [
+ {
+ "id": "expense_doc_1",
+ "title": "Expense Policies for external travel",
+ "text": "You can expense any work-related...",
+ "url": "https://internal-site.com/expensing",
+ "created_at": "2023-11-25T20:09:31Z",
+ },
+ {
+ "id": "expense_doc_2",
+ "title": "Expense Policies for internal travel",
+ "text": "You can expense any work-related...",
+ "created_at": "2023-11-25T20:09:31Z",
+ },...
+ ]
}
-```
+````
-The structure of the object in the results field is fully flexible, meaning any fields are allowed. However, we strongly recommend the following to improve the final generations:
+The structure of the object in the results field is fully flexible, meaning any fields are allowed. However, we strongly recommend the following to improve the final generations:
- Keep documents to 300 words or less, or add a `text` field, which is the field that gets truncated when `prompt_truncation=true` is specified in the request.
- Add a `timestamp` field to support temporal user queries.
- Add an `id` field to allow identification of the relevant document.
- Add a `title` field to allow the citations returned in the reply to be better-formatted.
- Use `excludes` to exclude fields, such as the `id`, from the prompt.
-- Add a `url` field to allow the client to link to the document.
-
+- Add a `url` field to allow the client to link to the document.
+
More details can be found in the [documentation for chat](/reference/chat).
+
#### Deploy the Connector
@@ -117,59 +119,63 @@ After you’ve deployed the connector and verified it can respond to requests, i
import cohere
# initialize the Cohere Client with an API Key
+
co = cohere.Client('YOUR_API_KEY')
created_connector = co.create_connector(
- name="Example connector",
- url="https://connector-example.com/search",
- )
-```
-```curl CURL
-curl --request POST
- --url '
- --header 'Authorization: Bearer {Cohere API key}'
- --header 'Content-Type: application/json'
- --data '{
- "name":"Example Connector",
- "description":"Description of your connector and underlying datasource",
- "url":"
+name="Example connector",
+url="https://connector-example.com/search",
+)
+
+````
+```bash CURL
+curl --request POST
+ --url '
+ --header 'Authorization: Bearer {Cohere API key}'
+ --header 'Content-Type: application/json'
+ --data '{
+ "name":"Example Connector",
+ "description":"Description of your connector and underlying datasource",
+ "url":"
}'
-```
+````
+
```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: "<>",
+ token: "<>",
});
(async () => {
const connector = await cohere.connectors.create({
name: "test-connector",
url: "https://example.com/search",
- description: "A test connector"
+ description: "A test connector",
});
console.log(connector);
})();
```
+
```json Example Response JSON
-{
- "connector": {
- "id": "example-connector-kh3g8q",
- "organization_id": "00000000-1111-2222-3333-444444444444",
- "name": "Example Connector",
- "url": "https://connector-example.com/search",
- "created_at": "2023-09-21T15:45:39.097677Z",
- "updated_at": "2023-09-21T16:12:28.999055Z"
- }
+{
+ "connector": {
+ "id": "example-connector-kh3g8q",
+ "organization_id": "00000000-1111-2222-3333-444444444444",
+ "name": "Example Connector",
+ "url": "https://connector-example.com/search",
+ "created_at": "2023-09-21T15:45:39.097677Z",
+ "updated_at": "2023-09-21T16:12:28.999055Z"
+ }
}
```
-Make note of the `id`, as you will need it when you want to use it later to ground model-generated responses.
+Make note of the `id`, as you will need it when you want to use it later to ground model-generated responses.
During registration, the API will attempt to query the connector to verify that it works as expected. If this step fails, ensure that the connector can respond to requests from outside your network, like so:
-```curl CURL
+```bash CURL
curl --request POST
--url 'https://connector-example.com/search'
--header 'Content-Type: application/json'
@@ -189,7 +195,8 @@ const response = await fetch('https://connector.example.com/search'{
const data = await response.json();
console.log(data);
-```
+
+````
#### Use your Connector with the Chat API
@@ -198,14 +205,15 @@ In order to produce grounded generations, include your connector id in the `conn
```python PYTHON
-import cohere
-co = cohere.Client('Your API key')
-response = co.chat(
- message="What is the chemical formula for glucose?",
- connectors=[{"id": "example_connector_id"}] # this is from the create step
+import cohere
+co = cohere.Client('Your API key')
+response = co.chat(
+ message="What is the chemical formula for glucose?",
+ connectors=[{"id": "example_connector_id"}] # this is from the create step
)
-```
-```curl CURL
+````
+
+```bash CURL
curl --request POST \
--url \
--header 'Content-Type: application/json' \
@@ -216,19 +224,21 @@ curl --request POST \
"connectors": [{"id": "example_connector_id"}]
}
```
+
```typescript TYPESCRIPT
import { CohereClient } from "cohere-ai";
const cohere = new CohereClient({
- token: "YOUR_API_KEY",
+ token: "YOUR_API_KEY",
});
(async () => {
- const response = await cohere.chat({
- message:"What is the chemical formula for glucose?",
- connectors:[{"id": "web-search"}],
- });
- console.log("Received response", response);
+ const response = await cohere.chat({
+ message: "What is the chemical formula for glucose?",
+ connectors: [{ id: "web-search" }],
+ });
+ console.log("Received response", response);
})();
```
+
```go GO
import (
cohere "github.com/cohere-ai/cohere-go/v2"
@@ -239,53 +249,54 @@ response, err := client.Chat(
context.TODO(),
&cohere.ChatRequest{
Message: "What is the chemical formula for glucose?",
- Connectors:[]*cohereclient.ChatConnector{{Id: "web-search"}},
+ Connectors:[]*cohereclient.ChatConnector{{Id: "web-search"}},
)
```
+
And here’s an example response:
```json Example Response JSON
-{
- "text": "To expense a meal, you will need to follow the guidelines set by the IRS. Firstly, the meal must not be lavish or extravagant. The taxpayer or their employee must be present when food or beverages are consumed. The food or beverages must be provided to a business associate. Most meals are deductible at 50% of the cost. Entertainment expenses are not deductible.",
- "citations": [
- {
- "start": 47,
- "end": 73,
- "text": "Expense Policy.",
- "document_ids": [
- "internal-docs_0"
- ]
- },
- ],
- "documents": [
- {
- "id": "internal-docs_0",
- "snippet": "Writing off these expenses is a smart move, but how do you go about it?\n\nThe good news is that business meals are 50 percent deductible. This means that every time you take out a client for dinner and drinks, you get to write off half of the bill.\n\nHowever, you can not go around deducting meals indiscriminately. The IRS has guidelines and tests that help you correctly deduct your meals. These include:\n\nConsidering the business context, the meal must not be lavish or extravagant.\n\nThe taxpayer or an employee of the taxpayer is present at the meal.\n\nThe expense must be an ordinary and necessary expense, under the Internal Revenue Code (IRC) Section 162(a), which is incurred or paid to carry on a trade or business.\n\nFood and beverages must be purchased separately from entertainment in the case that the entertainment activity provides food and beverages.",
- "title": "Expense Policies",
- "url": "https://internal-site.com/expensing"
- },
- ],
- "search_results": [
- {
- "search_query": {
- "text": "expense a business meal",
- ...
- },
- "document_ids": [
- "internal-docs_0",
- ...
- ],
- "connector": {
- "id": "internal-docs"
- }
- }
- ],
- "search_queries": [
- {
- "text": "expense a business meal",
- }
- ],
+{
+ "text": "To expense a meal, you will need to follow the guidelines set by the IRS. Firstly, the meal must not be lavish or extravagant. The taxpayer or their employee must be present when food or beverages are consumed. The food or beverages must be provided to a business associate. Most meals are deductible at 50% of the cost. Entertainment expenses are not deductible.",
+ "citations": [
+ {
+ "start": 47,
+ "end": 73,
+ "text": "Expense Policy.",
+ "document_ids": [
+ "internal-docs_0"
+ ]
+ },
+ ],
+ "documents": [
+ {
+ "id": "internal-docs_0",
+ "snippet": "Writing off these expenses is a smart move, but how do you go about it?\n\nThe good news is that business meals are 50 percent deductible. This means that every time you take out a client for dinner and drinks, you get to write off half of the bill.\n\nHowever, you can not go around deducting meals indiscriminately. The IRS has guidelines and tests that help you correctly deduct your meals. These include:\n\nConsidering the business context, the meal must not be lavish or extravagant.\n\nThe taxpayer or an employee of the taxpayer is present at the meal.\n\nThe expense must be an ordinary and necessary expense, under the Internal Revenue Code (IRC) Section 162(a), which is incurred or paid to carry on a trade or business.\n\nFood and beverages must be purchased separately from entertainment in the case that the entertainment activity provides food and beverages.",
+ "title": "Expense Policies",
+ "url": "https://internal-site.com/expensing"
+ },
+ ],
+ "search_results": [
+ {
+ "search_query": {
+ "text": "expense a business meal",
+ ...
+ },
+ "document_ids": [
+ "internal-docs_0",
+ ...
+ ],
+ "connector": {
+ "id": "internal-docs"
+ }
+ }
+ ],
+ "search_queries": [
+ {
+ "text": "expense a business meal",
+ }
+ ],
}
```
diff --git a/fern/pages/text-generation/connectors/managing-your-connector.mdx b/fern/pages/text-generation/connectors/managing-your-connector.mdx
index 44cbac94..a7e8c589 100644
--- a/fern/pages/text-generation/connectors/managing-your-connector.mdx
+++ b/fern/pages/text-generation/connectors/managing-your-connector.mdx
@@ -2,14 +2,15 @@
title: "Managing your Connector"
slug: "docs/managing-your-connector"
-hidden: false
+hidden: false
description: "This document provides information on managing connectors, including listing, authorizing OAuth 2.0 connectors, updating connector settings, and debugging connector issues."
-image: "../../../assets/images/330f978-cohere_meta_image.jpg"
+image: "../../../assets/images/330f978-cohere_meta_image.jpg"
keywords: "Cohere connectors, generative AI, retrieval augmented generation"
createdAt: "Fri Dec 01 2023 17:20:38 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 30 2024 15:52:26 GMT+0000 (Coordinated Universal Time)"
---
+
Once your connector is deployed and registered, there are a couple of features that will help you to manage it.
### Listing your Connectors
@@ -17,43 +18,47 @@ Once your connector is deployed and registered, there are a couple of features t
You can see all the connectors registered under your organization through the [Cohere dashboard](https://dashboard.cohere.com/connectors). Alternatively, you can make a GET request like the one below:
-```curl CURL
+```bash CURL
curl --request GET
--url 'https://api.cohere.ai/v1/connectors'
--header 'Authorization: Bearer {Cohere API key}'
```
+
```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: "<>",
+ token: "<>",
});
(async () => {
const connectors = await cohere.connectors.list();
console.log(connectors);
})();
```
+
```python PYTHON
import cohere
# initialize the Cohere Client with an API Key
co = cohere.Client('YOUR_API_KEY')
connectors = co.list_connectors()
```
+
### Authorizing an OAuth 2.0 Connector
-If your connector is set up using OAuth 2.0, a user in your organization can authorize the connector through the dashboard by clicking on “connect your account”. Alternatively, you can make a request to the `/oauth/authorize` endpoint in your application. This will provide a redirect URL that the user can follow to authorize the OAuth application.
+If your connector is set up using OAuth 2.0, a user in your organization can authorize the connector through the dashboard by clicking on “connect your account”. Alternatively, you can make a request to the `/oauth/authorize` endpoint in your application. This will provide a redirect URL that the user can follow to authorize the OAuth application.
-```curl CURL
+```bash CURL
curl --request POST
--url 'https://api.cohere.ai/v1/connectors/{connector-id}/oauth/authorize'
--header 'Authorization: Bearer {Cohere API key for user wishing to authorize}'
```
+
```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: "<>",
+ token: "<>",
});
(async () => {
const connector = await cohere.connectors.oAuthAuthorize("connector-id", {
@@ -62,14 +67,15 @@ const cohere = new CohereClient({
console.log(connector);
})();
```
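
If you're not using an SDK, the same flow can be driven with a plain HTTP request. Below is a minimal sketch of assembling that request; the endpoint path and header are taken from the cURL example above, while the helper name and how you send the request are illustrative assumptions:

```python PYTHON
def oauth_authorize_request(connector_id: str, api_key: str):
    """Build the URL and headers for the POST that starts the OAuth flow.

    Hypothetical helper: the path and Authorization header mirror the
    cURL example above.
    """
    url = f"https://api.cohere.ai/v1/connectors/{connector_id}/oauth/authorize"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

url, headers = oauth_authorize_request("my-connector", "YOUR_API_KEY")
# POST this with any HTTP client; the response provides a redirect URL
# for the user to follow and authorize the OAuth application.
```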
+
### Updating a Connector
-You can enable and disable a connector [through the dashboard](https://dashboard.cohere.com/connectors). Additionally, you can update the connector name, URL, auth settings, and handle similar sorts of tasks through the API, as follows:
+You can enable and disable a connector [through the dashboard](https://dashboard.cohere.com/connectors). Additionally, you can update the connector name, URL, auth settings, and handle similar sorts of tasks through the API, as follows:
-```curl CURL
+```bash CURL
curl --request PATCH
--url 'https://api.cohere.ai/v1/connectors/{id}'
--header 'Authorization: Bearer {Cohere API key}'
@@ -86,69 +92,76 @@ curl --request PATCH
  "active": true
}'
```
+
```python PYTHON
import cohere
# initialize the Cohere Client with an API Key
co = cohere.Client('YOUR_API_KEY')
connectors = co.update_connector(connector_id, name="new name", url="new_url")
```
+
```typescript TYPESCRIPT
const { CohereClient } = require("cohere-ai");
const cohere = new CohereClient({
- token: "<>",
+ token: "<>",
});
(async () => {
const connector = await cohere.connectors.update(connector.id, {
name: "test-connector-renamed",
- description: "A test connector renamed"
+ description: "A test connector renamed",
});
console.log(connector);
})();
```
+
### Debugging a Connector
-To debug issues with a registered connector, you can follow the steps in this section.
+To debug issues with a registered connector, you can follow the steps in this section.
Step 1: Make a streaming request to the connector using the Chat API and check the search results for the error. Here's an example request:
+
```python PYTHON
-import cohere
-co = cohere.Client('Your API key')
-response = co.chat(
- message="What is the chemical formula for glucose?",
+import cohere
+co = cohere.Client('Your API key')
+response = co.chat(
+ message="What is the chemical formula for glucose?",
    stream=True,
- connectors=[{"id": "example_connector_id"}] # this is from the create step
+ connectors=[{"id": "example_connector_id"}] # this is from the create step
)
```
-```curl CURL
+
+```bash CURL
curl --request POST \
  --url 'https://api.cohere.ai/v1/chat' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {Your API key}' \
--data '
{
- "stream" : true,
+ "stream" : true,
"message": "What is the chemical formula for glucose?",
"connectors": [{"id": "example_connector_id"}]
}'
```
+
```typescript TYPESCRIPT
import { CohereClient } from "cohere-ai";
const cohere = new CohereClient({
- token: "YOUR_API_KEY",
+ token: "YOUR_API_KEY",
});
(async () => {
- const response = await cohere.chat({
- message:"What is the chemical formula for glucose?",
- stream:True,
- connectors:[{"id": "web-search"}],
- });
- console.log("Received response", response);
+ const response = await cohere.chat({
+ message: "What is the chemical formula for glucose?",
+    stream: true,
+ connectors: [{ id: "web-search" }],
+ });
+ console.log("Received response", response);
})();
```
+
```go GO
import (
cohere "github.com/cohere-ai/cohere-go/v2"
@@ -160,20 +173,21 @@ response, err := client.Chat(
&cohere.ChatRequest{
Stream: true,
Message: "What is the chemical formula for glucose?",
- Connectors:[]*cohereclient.ChatConnector{{Id: "web-search"}},
+		Connectors: []*cohere.ChatConnector{{Id: "web-search"}},
	},
)
```
+
The response in the search results array should contain the error message from the connector:
```json Example Response JSON
- "search_results": [
- {
- "connector": {
- "id": "connector_id"
- },
- "error_message":"connector error message"
+ "search_results": [
+ {
+ "connector": {
+ "id": "connector_id"
+ },
+ "error_message":"connector error message"
}
```
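
A small helper can surface these connector errors programmatically. Here is a sketch that assumes the response has already been parsed into a dict with the fields shown in the example above:

```python PYTHON
def connector_errors(response: dict) -> dict:
    """Map connector ids to the error messages they reported, if any."""
    return {
        result["connector"]["id"]: result["error_message"]
        for result in response.get("search_results", [])
        if "error_message" in result
    }

response = {
    "search_results": [
        {
            "connector": {"id": "connector_id"},
            "error_message": "connector error message",
        }
    ]
}
print(connector_errors(response))  # {'connector_id': 'connector error message'}
```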
diff --git a/fern/pages/text-generation/connectors/overview-1.mdx b/fern/pages/text-generation/connectors/overview-1.mdx
index f789c833..73d0ac6f 100644
--- a/fern/pages/text-generation/connectors/overview-1.mdx
+++ b/fern/pages/text-generation/connectors/overview-1.mdx
@@ -6,31 +6,33 @@ hidden: false
createdAt: "Thu May 23 2024 05:06:54 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 30 2024 15:51:51 GMT+0000 (Coordinated Universal Time)"
---
+
As the name implies, Connectors are ways of connecting to data sources. They enable you to combine Cohere's large language models (LLMs), which power the [Chat API endpoint](/reference/chat), with data sources such as internal documents, document databases, the broader internet, or any other source of context that can inform the replies generated by the model.
-Connectors enhance Cohere [retrieval augmented generation (RAG)](/docs/retrieval-augmented-generation-rag) offering and can respond to user questions and prompts with substantive, grounded generations that contain citations to external public or private knowledge bases. To see an example of grounded generations with citations, try out [Coral](https://coral.cohere.com/) after enabling web search grounding.
+Connectors enhance Cohere's [retrieval augmented generation (RAG)](/docs/retrieval-augmented-generation-rag) offering and can respond to user questions and prompts with substantive, grounded generations that contain citations to external public or private knowledge bases. To see an example of grounded generations with citations, try out [Coral](https://coral.cohere.com/) after enabling web search grounding.
The following graphic demonstrates the flow of information when using a connector:
-
-
+
## Using Connectors to Create Grounded Generations
Connectors are specified when calling the Chat endpoint, which you can read more about [here](/docs/chat-api#connectors-mode). An example request specifying the managed web-search connector would look like this:
+
```python PYTHON
-import cohere
+import cohere
co = cohere.Client(api_key='Your API key')
response = co.chat(
    model="command-r-plus",
- message="What is the chemical formula for glucose?",
- connectors=[{"id": "web-search"}]
+ message="What is the chemical formula for glucose?",
+ connectors=[{"id": "web-search"}]
)
```
-```curl CURL
+
+```bash CURL
curl --location 'https://production.api.cohere.ai/v1/chat' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {Your API key}' \
@@ -40,19 +42,21 @@ curl --location 'https://production.api.cohere.ai/v1/chat' \
"connectors": [{"id": "web-search"}]
}
```
+
```typescript TYPESCRIPT
import { CohereClient } from "cohere-ai";
const cohere = new CohereClient({
- token: "YOUR_API_KEY",
+ token: "YOUR_API_KEY",
});
(async () => {
- const response = await cohere.chat({
- message:"What is the chemical formula for glucose?",
- connectors:[{"id": "web-search"}],
- });
- console.log("Received response", response);
+ const response = await cohere.chat({
+ message: "What is the chemical formula for glucose?",
+ connectors: [{ id: "web-search" }],
+ });
+ console.log("Received response", response);
})();
```
+
```go GO
import (
cohere "github.com/cohere-ai/cohere-go/v2"
@@ -63,9 +67,10 @@ response, err := client.Chat(
context.TODO(),
&cohere.ChatRequest{
Message: "What is the chemical formula for glucose?",
- Connectors:[]*cohereclient.ChatConnector{{Id: "web-search"}},
+		Connectors: []*cohere.ChatConnector{{Id: "web-search"}},
	},
)
```
+
If you or an administrator at your organization has created a new connector, you can add this connector id to the list. Here’s an example:
@@ -74,67 +79,63 @@ If you or an administrator at your organization has created a new connector, you
connectors=[{"id": "web-search"}, {"id": "customer-connector-id"}].
```
-The response will then contain the generated text with citation elements that link to the documents returned from the connector. For example, the formula `C6H12O6` below has a citation element that links to three websites.
+The response will then contain the generated text with citation elements that link to the documents returned from the connector. For example, the formula `C6H12O6` below has a citation element that links to three websites.
```json Example Response JSON
-{
- "text": "The chemical formula for glucose is C6H12O6.",
- "generation_id": "667f0844-e5c9-4108-8624-45b7687ca6f3",
- "citations": [
- {
- "start": 36,
- "end": 44,
- "text": "C6H12O6.",
- "document_ids": [
- "web-search_3:0",
- "web-search_3:4",
- "web-search_4:0",
- "web-search_4:1"
- ]
- }
- ],
- "documents": [
- {
- "id": "web-search_3:0",
- "snippet": "Chemical Compound Formulas\n\nGlucose is a simple sugar with six carbon atoms and one aldehyde group. This monosaccharide has a chemical formula C6H12O6.\n\nIt is also known as dextrose. It is referred to as aldohexose as it contains 6 carbon atoms and an aldehyde group. It exists in two forms, open-chain or ring structure. It is synthesized in the liver and kidneys of animals. In plants, it is found in fruits and in different parts of plants. D- glucose is the naturally occurring form of glucose. It can occur either in solid or liquid form. It is water-soluble and is also soluble in acetic acid.",
- "title": "Glucose C6H12O6 - Chemical Formula, Structure, Composition, Properties, uses and FAQs of Glucose.",
- "url": "https://byjus.com/chemistry/glucose/"
- },
- {
- "id": "web-search_3:4",
- "snippet": "\n\nFrequently Asked Questions- FAQs\n\nHow do you represent glucose?\n\nThe chemical formula of Glucose is C6H12O6. Glucose is a monosaccharide containing an aldehyde group (-CHO). It is made of 6 carbon atoms, 12 hydrogen atoms and 6 oxygen atoms. Glucose is an aldohexose.\n\nIs glucose a reducing sugar?\n\nGlucose is a reducing sugar because it belongs to the category of an aldose meaning its open-chain form contains an aldehyde group. Generally, an aldehyde is quite easily oxidized to carboxylic acids.\n\nWhat are the 5 reducing sugars?\n\nThe 5 reducing sugars are ribose, glucose, galactose, glyceraldehyde, xylose.\n\nWhat are the elements of glucose?",
- "title": "Glucose C6H12O6 - Chemical Formula, Structure, Composition, Properties, uses and FAQs of Glucose.",
- "url": "https://byjus.com/chemistry/glucose/"
- },
- {
- "id": "web-search_4:0",
- "snippet": "Science, Tech, Math › Science\n\nGlucose Molecular Formula and Facts\n\nChemical or Molecular Formula for Glucose\n\nScience Photo Library - MIRIAM MASLO. / Getty Images\n\nProjects & Experiments\n\nChemistry In Everyday Life\n\nAbbreviations & Acronyms\n\nAnne Marie Helmenstine, Ph.D.\n\nAnne Marie Helmenstine, Ph.D.\n\nPh.D., Biomedical Sciences, University of Tennessee at Knoxville\n\nB.A., Physics and Mathematics, Hastings College\n\nDr. Helmenstine holds a Ph.D. in biomedical sciences and is a science writer, educator, and consultant. She has taught science courses at the high school, college, and graduate levels.\n\nLearn about our Editorial Process\n\nUpdated on November 03, 2019\n\nThe molecular formula for glucose is C6H12O6 or H-(C=O)-(CHOH)5-H. Its empirical or simplest formula is CH2O, which indicates there are two hydrogen atoms for each carbon and oxygen atom in the molecule.",
- "title": "Know the Chemical or Molecular Formula for Glucose",
- "url": "https://www.thoughtco.com/glucose-molecular-formula-608477"
- },
- ],
- "search_results": [
- {
- "search_query": {
- "text": "chemical formula for glucose",
- "generation_id": "66e388c8-d9a8-4d43-a711-0f17c3f0f82a"
- },
- "document_ids": [
- "web-search_3:0",
- "web-search_3:4",
- "web-search_4:0",
- ],
- "connector": {
- "id": "web-search"
- }
- }
- ],
- "search_queries": [
- {
- "text": "chemical formula for glucose",
- "generation_id": "66e388c8-d9a8-4d43-a711-0f17c3f0f82a"
- }
- ]
+{
+ "text": "The chemical formula for glucose is C6H12O6.",
+ "generation_id": "667f0844-e5c9-4108-8624-45b7687ca6f3",
+ "citations": [
+ {
+ "start": 36,
+ "end": 44,
+ "text": "C6H12O6.",
+ "document_ids": [
+ "web-search_3:0",
+ "web-search_3:4",
+ "web-search_4:0",
+ "web-search_4:1"
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "id": "web-search_3:0",
+ "snippet": "Chemical Compound Formulas\n\nGlucose is a simple sugar with six carbon atoms and one aldehyde group. This monosaccharide has a chemical formula C6H12O6.\n\nIt is also known as dextrose. It is referred to as aldohexose as it contains 6 carbon atoms and an aldehyde group. It exists in two forms, open-chain or ring structure. It is synthesized in the liver and kidneys of animals. In plants, it is found in fruits and in different parts of plants. D- glucose is the naturally occurring form of glucose. It can occur either in solid or liquid form. It is water-soluble and is also soluble in acetic acid.",
+ "title": "Glucose C6H12O6 - Chemical Formula, Structure, Composition, Properties, uses and FAQs of Glucose.",
+ "url": "https://byjus.com/chemistry/glucose/"
+ },
+ {
+ "id": "web-search_3:4",
+ "snippet": "\n\nFrequently Asked Questions- FAQs\n\nHow do you represent glucose?\n\nThe chemical formula of Glucose is C6H12O6. Glucose is a monosaccharide containing an aldehyde group (-CHO). It is made of 6 carbon atoms, 12 hydrogen atoms and 6 oxygen atoms. Glucose is an aldohexose.\n\nIs glucose a reducing sugar?\n\nGlucose is a reducing sugar because it belongs to the category of an aldose meaning its open-chain form contains an aldehyde group. Generally, an aldehyde is quite easily oxidized to carboxylic acids.\n\nWhat are the 5 reducing sugars?\n\nThe 5 reducing sugars are ribose, glucose, galactose, glyceraldehyde, xylose.\n\nWhat are the elements of glucose?",
+ "title": "Glucose C6H12O6 - Chemical Formula, Structure, Composition, Properties, uses and FAQs of Glucose.",
+ "url": "https://byjus.com/chemistry/glucose/"
+ },
+ {
+ "id": "web-search_4:0",
+ "snippet": "Science, Tech, Math › Science\n\nGlucose Molecular Formula and Facts\n\nChemical or Molecular Formula for Glucose\n\nScience Photo Library - MIRIAM MASLO. / Getty Images\n\nProjects & Experiments\n\nChemistry In Everyday Life\n\nAbbreviations & Acronyms\n\nAnne Marie Helmenstine, Ph.D.\n\nAnne Marie Helmenstine, Ph.D.\n\nPh.D., Biomedical Sciences, University of Tennessee at Knoxville\n\nB.A., Physics and Mathematics, Hastings College\n\nDr. Helmenstine holds a Ph.D. in biomedical sciences and is a science writer, educator, and consultant. She has taught science courses at the high school, college, and graduate levels.\n\nLearn about our Editorial Process\n\nUpdated on November 03, 2019\n\nThe molecular formula for glucose is C6H12O6 or H-(C=O)-(CHOH)5-H. Its empirical or simplest formula is CH2O, which indicates there are two hydrogen atoms for each carbon and oxygen atom in the molecule.",
+ "title": "Know the Chemical or Molecular Formula for Glucose",
+ "url": "https://www.thoughtco.com/glucose-molecular-formula-608477"
+ }
+ ],
+ "search_results": [
+ {
+ "search_query": {
+ "text": "chemical formula for glucose",
+ "generation_id": "66e388c8-d9a8-4d43-a711-0f17c3f0f82a"
+ },
+ "document_ids": ["web-search_3:0", "web-search_3:4", "web-search_4:0"],
+ "connector": {
+ "id": "web-search"
+ }
+ }
+ ],
+ "search_queries": [
+ {
+ "text": "chemical formula for glucose",
+ "generation_id": "66e388c8-d9a8-4d43-a711-0f17c3f0f82a"
+ }
+ ]
}
```
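
Each citation can be joined back to its source documents by id. The following is a minimal sketch that uses the field names from the example response above (trimmed sample data for illustration):

```python PYTHON
def cite_sources(response: dict) -> list:
    """For each citation, collect the URLs of the documents it references."""
    docs = {doc["id"]: doc for doc in response.get("documents", [])}
    return [
        {
            "text": citation["text"],
            "urls": sorted(
                {docs[i]["url"] for i in citation["document_ids"] if i in docs}
            ),
        }
        for citation in response.get("citations", [])
    ]

response = {
    "citations": [
        {
            "start": 36,
            "end": 44,
            "text": "C6H12O6.",
            "document_ids": ["web-search_3:0", "web-search_4:0"],
        }
    ],
    "documents": [
        {"id": "web-search_3:0", "url": "https://byjus.com/chemistry/glucose/"},
        {
            "id": "web-search_4:0",
            "url": "https://www.thoughtco.com/glucose-molecular-formula-608477",
        },
    ],
}
# cite_sources(response)[0]["urls"] lists both source URLs for "C6H12O6."
```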
diff --git a/fern/pages/text-generation/prompt-engineering/preambles.mdx b/fern/pages/text-generation/prompt-engineering/preambles.mdx
index 9106fa11..cd869ddf 100644
--- a/fern/pages/text-generation/prompt-engineering/preambles.mdx
+++ b/fern/pages/text-generation/prompt-engineering/preambles.mdx
@@ -7,23 +7,25 @@ createdAt: "Tue Mar 12 2024 19:19:02 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu Jun 13 2024 16:10:09 GMT+0000 (Coordinated Universal Time)"
---
-A preamble is a system message that is provided to a model at the beginning of a conversation which dictates how the model should behave throughout. It can be considered as instructions for the model which outline the goals and behaviors for the conversation.
+A preamble is a system message provided to a model at the beginning of a conversation that dictates how the model should behave throughout. It can be thought of as a set of instructions outlining the goals and behaviors for the conversation.
-
+
## Writing a custom preamble
-While prompting is a natural way to interact with and instruct an LLM, writing a preamble is a shortcut to direct the model’s behavior. Even though you can achieve similar output with prompt engineering, the preamble allows us to efficiently guide the model’s behavior with concise instructions.
+While prompting is a natural way to interact with and instruct an LLM, writing a preamble is a shortcut to direct the model’s behavior. Even though you can achieve similar output with prompt engineering, the preamble lets you guide the model’s behavior efficiently with concise instructions.
Default preambles differ from model to model. For example, the default preamble in the [Command R](/docs/command-r) model is
-"You are Command. You are an extremely capable large language model built by Cohere. You are given instructions programmatically via an API that you follow to the best of your ability."
+ "You are Command. You are an extremely capable large language model built by
+ Cohere. You are given instructions programmatically via an API that you follow
+ to the best of your ability."
To set a custom preamble, use the `preamble` parameter in the Chat API.
-```python Python
+```python PYTHON
co.chat(
model="",
message="Come up with a great name for a cat",
@@ -40,14 +42,14 @@ co.chat(
## Advanced Techniques for Writing a Preamble
-The Command R model responds particularly well to preambles that follow a specific structure and format. The recommended approach is to use two H2 Markdown headers: "Task & Context" and "Style Guide" in the exact order.
+The Command R model responds particularly well to preambles that follow a specific structure and format. The recommended approach is to use two H2 Markdown headers, "Task & Context" and "Style Guide", in that exact order.
- **Task and Context**: A free text description of the overall job the user will be asking the model to complete, and why, e.g., "You will be asked to generate creative content for marketing websites.”
- **Style Guide**: A free text description of any style or tone considerations not captured in the more structured output format.
Copy this template for best results in your custom preamble.
-```python Python
+```python PYTHON
preamble_template = '''
## Task & Context
@@ -65,7 +67,7 @@ co.chat(
### Example Preamble 1
-```python Python
+```python PYTHON
tour_guide_preamble = '''
## Task & Context
@@ -83,7 +85,7 @@ co.chat(
### Example Preamble 2
-```python Python
+```python PYTHON
pirate_preamble='''
## Task and Context
@@ -103,9 +105,11 @@ Failure to adhere to this format, with two specific headers denoted in Markdown
### Specifying a Date
-Adding the date to the preamble can be useful for handling time-sensitive tasks, such as "Who won the game last night?". We recommend formatting the date to be as human readable as possible. Here is an example preamble including the date:
+Adding the date to the preamble can be useful for handling time-sensitive tasks, such as "Who won the game last night?". We recommend formatting the date to be as human-readable as possible. Here is an example preamble including the date:
-"You are Command. You are an extremely capable large language model built by Cohere. You are given instructions programmatically via an API that you follow to the best of your ability. Today's date is **Tuesday, March 26, 2024 11:14:59**"
+ "You are Command. You are an extremely capable large language model built by
+ Cohere. You are given instructions programmatically via an API that you follow
+ to the best of your ability. Today's date is **Tuesday, March 26, 2024
+ 11:14:59**"
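
Assembling such a preamble programmatically keeps the date current on every request. Here is a sketch using Python's standard library; the wording mirrors the default preamble above, and the helper name is illustrative:

```python PYTHON
from datetime import datetime

def dated_preamble(now: datetime) -> str:
    # Format the date to be as human-readable as possible
    date_str = now.strftime("%A, %B %d, %Y %H:%M:%S")
    return (
        "You are Command. You are an extremely capable large language model "
        "built by Cohere. You are given instructions programmatically via an "
        f"API that you follow to the best of your ability. Today's date is {date_str}"
    )

print(dated_preamble(datetime(2024, 3, 26, 11, 14, 59)))
# ...Today's date is Tuesday, March 26, 2024 11:14:59
```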
-
diff --git a/fern/pages/text-generation/streaming.mdx b/fern/pages/text-generation/streaming.mdx
index 21b08007..dc42708f 100644
--- a/fern/pages/text-generation/streaming.mdx
+++ b/fern/pages/text-generation/streaming.mdx
@@ -4,12 +4,13 @@ slug: "docs/streaming"
hidden: false
description: "The document explains how the Chat API can stream events like text generation in real-time, allowing for partial results to be displayed quickly even if the full generation takes longer. It provides examples of different stream events and how to handle them in code."
-image: "../../assets/images/0b4c268-cohere_meta_image.jpg"
+image: "../../assets/images/0b4c268-cohere_meta_image.jpg"
keywords: "streaming, generative AI, text generation"
createdAt: "Thu Jun 01 2023 16:44:31 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Tue Jun 18 2024 07:41:22 GMT+0000 (Coordinated Universal Time)"
---
+
The [Chat API](/reference/chat) is capable of streaming events (such as text generation) as they come. This means that partial results from the model can be displayed within moments, even if the full generation takes longer.
You're likely already familiar with streaming. When you ask the model a question using the [Coral](https://coral.cohere.com/) UI, the interface doesn't output a single block of text; instead, it _streams_ the text out a few words at a time. In many user interfaces, enabling streaming improves the user experience by lowering the perceived latency.
@@ -40,13 +41,13 @@ The first event in the stream contains metadata for the request such as the `gen
#### stream-end
-A `stream-end` event is the final event of the stream, and is returned only when streaming is finished. This event contains aggregated data from all the other events such as the complete `text`, as well as a `finish_reason` for why the stream ended (i.e. because of it was finished or there was an error).
+A `stream-end` event is the final event of the stream, and is returned only when streaming is finished. This event contains aggregated data from all the other events, such as the complete `text`, as well as a `finish_reason` indicating why the stream ended (i.e., whether generation completed or an error occurred).
Only one `stream-end` event will be returned.
#### text-generation
-A `text-generation` event is emitted whenever the next chunk of text comes back from the model. As the model continues generating text, multiple events of this type will be emitted.
+A `text-generation` event is emitted whenever the next chunk of text comes back from the model. As the model continues generating text, multiple events of this type will be emitted.
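
Taken together, these basic events can be consumed with a small dispatch loop. Here is a sketch that assumes each event arrives as a dict carrying the `event_type` values described above:

```python PYTHON
def consume_stream(events):
    """Accumulate text-generation chunks until a stream-end event arrives."""
    chunks = []
    for event in events:
        if event["event_type"] == "text-generation":
            chunks.append(event["text"])
        elif event["event_type"] == "stream-end":
            return "".join(chunks), event.get("finish_reason")
    return "".join(chunks), None

events = [
    {"event_type": "stream-start", "generation_id": "abc"},
    {"event_type": "text-generation", "text": "Hello, "},
    {"event_type": "text-generation", "text": "world!"},
    {"event_type": "stream-end", "finish_reason": "COMPLETE"},
]
print(consume_stream(events))  # ('Hello, world!', 'COMPLETE')
```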
### Retrieval Augmented Generation Stream Events
@@ -58,13 +59,13 @@ Emitted when search queries are generated by the model. Only happens when the Ch
#### search-results
-Emitted when the specified `connectors` respond with search results. Only one event of this type will be returned for a given stream.
+Emitted when the specified `connectors` respond with search results. Only one event of this type will be returned for a given stream.
#### citation-generation
This event contains streamed citations and references to the documents being cited (if citations have been generated by the model). Multiple `citation-generation` events will be returned.
-For an illustration of a generated citation with document-specific indices, look at the "Example Response" below. As you can see, each `document` has an `id`, and when that document is used as part of the response, it's cited by that id.
+For an illustration of a generated citation with document-specific indices, look at the "Example Response" below. As you can see, each `document` has an `id`, and when that document is used as part of the response, it's cited by that id.
### Tool Use Stream Events
@@ -113,25 +114,25 @@ Emitted when the next token of the tool plan or the tool call is generated.
Emitted when the model generates tool calls that require actioning upon. The event contains a list of `tool_calls`.
-```
+```json JSON
{
- "is_finished": false,
- "event_type": "tool-calls-generation",
- "text": "I will use the reverse_string tool twice, once for each of the words provided, to find the answers.",
- "tool_calls": [
- {
- "name": "reverse_string",
- "parameters": {
- "s": "hello"
- }
- },
- {
- "name": "reverse_string",
- "parameters": {
- "s": "goodbye"
- }
- }
- ]
+ "is_finished": false,
+ "event_type": "tool-calls-generation",
+ "text": "I will use the reverse_string tool twice, once for each of the words provided, to find the answers.",
+ "tool_calls": [
+ {
+ "name": "reverse_string",
+ "parameters": {
+ "s": "hello"
+ }
+ },
+ {
+ "name": "reverse_string",
+ "parameters": {
+ "s": "goodbye"
+ }
+ }
+ ]
}
```
@@ -397,7 +398,7 @@ Below, we have a stream of events which shows the **full** output you might see
"finish_reason": "COMPLETE"
```
-It contains information about whether the streaming session is finished, what type of event is being fired, and the text that was generated by the model.
+It contains information about whether the streaming session is finished, what type of event is being fired, and the text that was generated by the model.
Of course, the `print(event.text)` and `print(event.finish_reason)` lines in the code snippet above peel away a lot of the extra information, so your output will look more like this:
@@ -444,7 +445,7 @@ The
COMPLETE
```
-It should be (more or less) the same text, but that text is on its own rather than being accompanied by search queries, event types, etc.
+It should be (more or less) the same text, but that text is on its own rather than being accompanied by search queries, event types, etc.
Note that the citation objects in the response are returned as part of a RAG request, which you can learn more about in the [Retrieval Augmented Generation guide](/docs/retrieval-augmented-generation-rag).
diff --git a/fern/pages/text-generation/tools/multi-step-tool-use/implementing-a-multi-step-agent-with-langchain.mdx b/fern/pages/text-generation/tools/multi-step-tool-use/implementing-a-multi-step-agent-with-langchain.mdx
index 77a4a3b9..45d15b77 100644
--- a/fern/pages/text-generation/tools/multi-step-tool-use/implementing-a-multi-step-agent-with-langchain.mdx
+++ b/fern/pages/text-generation/tools/multi-step-tool-use/implementing-a-multi-step-agent-with-langchain.mdx
@@ -6,14 +6,17 @@ hidden: false
createdAt: "Mon Jun 17 2024 19:41:14 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Wed Jun 19 2024 12:58:15 GMT+0000 (Coordinated Universal Time)"
---
+
In this document, we'll go through the nuts-and-bolts of building a generative-AI agent with Cohere's multi-step tool use functionality and the Langchain framework.
## Building the Langchain ReAct Agent
Multi-step tool use with Cohere can be implemented using the [Langchain framework](https://python.langchain.com/docs/integrations/providers/cohere), which conveniently comes with many pre-defined tools. More specifically, we recommend using the [ReAct](https://react-lm.github.io/) agent abstraction in Langchain, powered by `create_cohere_react_agent`. Let’s see how we can easily build an agent, using the multi-step tool use capabilities of Langchain and Cohere.
-
- The example below is also available in [this Jupyter Notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/agents/Vanilla_Multi_Step_Tool_Use.ipynb) for convenience.
+
+ The example below is also available in [this Jupyter
+ Notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/agents/Vanilla_Multi_Step_Tool_Use.ipynb)
+ for convenience.
First, we'll install the dependencies. (Note: the `!` is required for notebooks, but you must omit it if you're in the command line).
@@ -149,7 +152,7 @@ We can get some insight into what's going on under the hood by taking a look at
# Here is the model plan
I will search for tips on writing an essay and fun facts about the Roman Empire.
-
+
# The model decides to use a first tool: the vector store
{'tool_name': 'vectorstore_search', 'parameters': {'query': 'tips for writing an essay'}}
@@ -163,16 +166,16 @@ need [ … more of retrieved snippet 2 … ]
You don't have to get an answer right the first time, but there's
no excuse for not getting it right eventually, because [ more of retrieved snippet 3 … ]
-
+
# The model decides to use another tool: web search
{'tool_name': 'internet_search', 'parameters': {'query': 'fun facts about the roman empire'}}
# Here are the results from the web search call: retrieved passages
[{'url': 'https://www.natgeokids.com/uk/discover/history/romans/10-facts-about-the-ancient-romans/', 'content': 'i love this website\nBIG BOBBY\nbooby\nI love shell my bae;)\ni like bobby fishes ;0\nI like turtles\nOmg soy cool\ngreeeeeeeeeeeeaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaatttttttttttttttttttttttt\nbest fact ever\nthis artical is cool\nHANDY\nrubbish did not help what so ever\nha\nRocking\nTHIS IS THE BEST\nproper rad in it cool\nthis is cool\nawesomeness\nawsome\nawsome\nthank you captain\nit is a lot of help\ni like this\nwebsite it helps me on my projects and isabel likes munier\nmark uses this for research\nlot of help\nthis is awsome\nTHE BEST BOOBOO\nCool webpage helped me get 4 housepoints\n This helped me A LOT on a school project\ncool wow awesomoe\nCOOL WEBSITE LOL\nthis helped me with a school project :)\nthat was awesome\ncool\nthat helped me out for my research test\nReally its very cool really COOL\nLIKE COOL best website so far its nice\nI love it\nnice facts\nIt help with my history\n i mean u made animaljam a awesome nice safe place for kids and this site to have kids a safe website to get facts for reports and stuff\nLots of Love ,\nRose\npretty good website if u ask me\nbut definently not gonna use it on a daily basis\nIll try it again another time\ngood\nCool webcite\nterrible\nquite impressive\nAwesome website it real helps\nits good\nthis is a great website! You really a lot with my project!:)\nthis has helleped\nme get\nmy progect\ndone\nthank you\nsoooooooooooooooooo\nmuchchchchchch\nthis helleped me\nsooooooooo much with my progect thank you\nvery good website\nthank us very much your nice one today!!\n'}, {'url': 'https://ohfact.com/roman-empire-facts/', 'content': 'Learn about the ancient Roman Civilization, its history, culture, army, architecture, food and more from this list of 27 facts. 
Discover how the Romans started, conquered, lived, died and influenced the world with their legends, myths and facts.'}, {'url': 'https://factnight.com/fun-facts-about-the-roman-empire/', 'content': 'The Roman Empire was one of the most influential and significant civilizations in world history. At its peak, the empire stretched from North Africa to Britain, reigning over 60 million people. From its legendary beginnings and remarkable achievements to its eventual decline and fall, the Roman Empire is a fascinating topic full of little-known facts and intriguing trivia.'}, {'url': 'https://www.historyhit.com/facts-about-ancient-rome-and-the-romans/', 'content': 'The Enduring Legacy of C.S. Lewis\nMargaret J. Winkler: A Forgotten Pioneer in Disney’s Success\n10 Facts About Harper Lee\nAntarctica Expedition Cruise\nUncover Pompeii\nSophie Hay and Tristan Hughes\nRediscovering Richard III with Matt Lewis\nOrder the History Hit Miscellany\nHistory Hit Holidays\nGift Subscriptions\n100 Facts About Ancient Rome and the Romans\nRome wasn’t built in a day, as the cliché reminds us. 
The Crossing of the Rhine in 405/6 AD brought around 100,000 barbarians into the Empire\nBarbarian factions, tribes and war leaders were now a factor in the power struggles at the top of Roman politics and one of the once-strong boundaries of the Empire had proved to be permeable.\n Related Articles\n10 Facts About Saint Andrew\nThe Rise of Pompey the Great, the ‘Roman Alexander’\nWatch and Listen\nCleopatra\nSex in Ancient Rome\nRelated Locations\nBaelo Claudia\nMausoleum of Cecilia Metella\nColin Ricketts\n30 July 2021\n By the fourth century BC, the story was accepted by Romans who were proud of their warrior founder\nThe story was included in the first history of the city, by the Greek writer Diocles of Peparethus, and the twins and their wolf step-mother were depicted on Rome’s first coins.\n The History Hit Miscellany of Facts, Figures and Fascinating Finds\nA History of England: Part One\nDragons: Myth & Reality\nA Tudor Wonder - Hardwick Hall\nThe Battle of Shrewsbury\nEurope’s 1848 Revolutions\nThe Boston Tea Party\nHow Did 3 People Seemingly Escape From Alcatraz?\n'}, {'url': 'https://www.countryfaq.com/facts-about-the-roman-empire/', 'content': 'Facts about the Roman Empire. Explore some of the interesting, fun, cool facts bout the Roman Empire: 1. The Magnificent Roman Empire. The Roman Empire, a colossal entity of unparalleled grandeur, occupies an indomitable position within the annals of human history, a name that resonates resoundingly across the eons.'}]Relevant Documents: 0,3,4,5
-
+
# The model decides it has enough info to generate a final response.
-
+
# Below is the answer by the model
Answer: Here are some tips for writing an essay:
- Start with a question that spurs some response.
@@ -259,7 +262,7 @@ vectorstore_search = create_retriever_tool(
### Multi-turn Conversations and Chat History
-So far, we asked one-off questions to the ReAct agent. In many enterprise applications, end users want to have conversations with the ReAct agent.
+So far, we asked one-off questions to the ReAct agent. In many enterprise applications, end users want to have conversations with the ReAct agent.
The ReAct agent can handle multi-turn conversations by using `chat_history`.
@@ -296,7 +299,7 @@ response['output']
### Can the ReAct Agent Directly Answer a Question?
-Yes. The ReAct agent from Cohere comes out of the box with the ability to answer a user question directly. This happens when answering the user's question doesn’t require using a tool.
+Yes. The ReAct agent from Cohere comes out of the box with the ability to answer a user question directly. This happens when answering the user's question doesn’t require using a tool.
For example, let’s look at the following question:
@@ -311,7 +314,7 @@ By inspecting the logs, we see that the ReAct agent decided to just respond dire
````txt
> Entering new AgentExecutor chain...
Plan: I will respond to the user's greeting.
-Action: ```json JSON
+Action: ```json
[
{
"tool_name": "directly_answer",
diff --git a/fern/pages/text-generation/tools/parameter-types-in-tool-use.mdx b/fern/pages/text-generation/tools/parameter-types-in-tool-use.mdx
index 732a3780..617a1df1 100644
--- a/fern/pages/text-generation/tools/parameter-types-in-tool-use.mdx
+++ b/fern/pages/text-generation/tools/parameter-types-in-tool-use.mdx
@@ -6,6 +6,7 @@ hidden: false
createdAt: "Wed Apr 24 2024 17:31:36 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Wed Apr 24 2024 18:37:19 GMT+0000 (Coordinated Universal Time)"
---
+
Cohere's tool use feature is available in the chat endpoint via the API and all of our SDKs (Python, Typescript, Java, Go). The functionality relies on Python type notation to define parameters. Parameters are the inputs that a tool or function needs to operate. With this approach there is flexibility to use any Python type as a definition for these parameters. This includes basic types like integers, floats, and strings, as well as more complex types such as lists, dictionaries, and dataclasses.
Additionally, the default value for optional parameters can be provided, which will be used if no value is specified when the function is called. It is also possible to define enumerations (enums) to specify a set of valid values for a parameter, restricting the input to a predefined list of options.
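To make the description above concrete, here is a hedged sketch (not taken from the page itself) of a tool definition combining a required parameter with an optional one whose valid values are restricted via its description. The tool name and the `format` parameter are illustrative assumptions; only the `parameter_definitions` layout mirrors the examples that follow.

```python
# Hedged sketch of a tool definition in the style described above.
# The "format" parameter and its allowed values are assumptions for
# illustration; the field layout follows parameter_definitions examples.
tools = [
    {
        "name": "query_daily_sales_report",
        "description": "Retrieves the sales report for a given day.",
        "parameter_definitions": {
            "day": {
                "description": "Day to retrieve sales for, formatted as YYYY-MM-DD.",
                "type": "str",
                "required": True,
            },
            "format": {
                "description": "Output format; one of 'csv' or 'json'. Defaults to 'json'.",
                "type": "str",
                "required": False,
            },
        },
    }
]
```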
@@ -33,7 +34,7 @@ Below are some examples that illustrate how to define parameters using Python ty
## Example – Simple types
-```python Python
+```python PYTHON
tools = [
{
"name": "query_daily_sales_report",
@@ -41,7 +42,7 @@ tools = [
"parameter_definitions": {
"day": {
"description": "Retrieves sales data for this day, formatted as YYYY-MM-DD.",
- "type": "str",
+ "type": "str",
"required": True
}
}
diff --git a/fern/pages/text-generation/tools/single-step-vs-multi-step.mdx b/fern/pages/text-generation/tools/single-step-vs-multi-step.mdx
index 2b12f626..1fa3a73c 100644
--- a/fern/pages/text-generation/tools/single-step-vs-multi-step.mdx
+++ b/fern/pages/text-generation/tools/single-step-vs-multi-step.mdx
@@ -6,17 +6,18 @@ hidden: true
createdAt: "Wed May 29 2024 16:21:45 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon Jun 17 2024 19:34:44 GMT+0000 (Coordinated Universal Time)"
---
-This guide outlines the difference in model behavior for [single-step](/docs/tool-use#step-3---the-developer-can-then-execute-the-tool-calls) and [multi-step ](/docs/multi-step-tool-use) tool use. This is a mode that can be toggled in the Chat API by providing `tools` and setting the `force_single_step` parameter.
+
+This guide outlines the difference in model behavior for [single-step](/docs/tool-use#step-3---the-developer-can-then-execute-the-tool-calls) and [multi-step](/docs/multi-step-tool-use) tool use. This is a mode that can be toggled in the Chat API by providing `tools` and setting the `force_single_step` parameter.
## Single Step Tool Use
-Single step tool use is enabled by setting `force_single_step=True` and providing a list of tools through the Chat API. This forces the model to make just _one_ set of tool calls, after which it will provide a final answer. In single-step tool use, the model cannot make a sequence of tool calls and reason over them.
+Single step tool use is enabled by setting `force_single_step=True` and providing a list of tools through the Chat API. This forces the model to make just _one_ set of tool calls, after which it will provide a final answer. In single-step tool use, the model cannot make a sequence of tool calls and reason over them.
Note that "single _step_" doesn't mean "single _tool_." In single-step tool use the model _can_ still call multiple tools by calling them in parallel. When this happens, the model will output multiple tool calls in the `tool_calls` part of the response.
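As a rough illustration of that parallel behavior, here is a hedged sketch of executing several tool calls from a single step and collecting their outputs. The response shape and the `run_tool` dispatcher are assumptions for illustration, not the SDK's actual objects:

```python
# Hedged sketch: each tool call from one single-step response is executed,
# and its outputs are gathered for the follow-up request.
def run_tool(name, params):
    # Hypothetical dispatcher standing in for real reservation APIs.
    return [{"available": True, "restaurant": params.get("restaurant")}]

tool_calls = [  # stand-in for the tool calls in the model's response
    {"name": "check_reservation", "parameters": {"restaurant": "Johnny's Pizza"}},
    {"name": "check_reservation", "parameters": {"restaurant": "Alfredo's Pizza"}},
]

tool_results = [
    {"call": call, "outputs": run_tool(call["name"], call["parameters"])}
    for call in tool_calls
]
```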
## When Should I Use Single-step Tool Use?
-You should operate in single-step mode when you want the model to answer a question after one step. With a query like "Is there a reservation available at Johnny's Pizza or Alfredo's Pizza at 9PM," for example, the model would query the APIs for Johny's Pizza and Alfredo's Pizza, in parallel.
+You should operate in single-step mode when you want the model to answer a question after one step. With a query like "Is there a reservation available at Johnny's Pizza or Alfredo's Pizza at 9PM," for example, the model would query the APIs for Johnny's Pizza and Alfredo's Pizza, in parallel.
### Code example
@@ -199,7 +200,7 @@ print(response.text)
# Final answer:
-# On the 29th of September 2023, there were 250 units sold, with a total sales amount of 10,000.
+# On the 29th of September 2023, there were 250 units sold, with a total sales amount of 10,000.
# The Electronics category contains three products:
@@ -213,13 +214,13 @@ In this step, the model cites which tool results were used to generate the final
## When Should I Use Multi-step Tool Use?
-For more complex queries, such as those that require multiple steps, it's probably better to operate in multi-step mode. You can do this by setting `enable_multistep=True` and providing a list of tools through the Chat API. In multi-step mode, the model can reason across steps and select multiple tools to answer a question completely.
+For more complex queries, such as those that require multiple steps, it's probably better to operate in multi-step mode. You can do this by setting `enable_multistep=True` and providing a list of tools through the Chat API. In multi-step mode, the model can reason across steps and select multiple tools to answer a question completely.
To illustrate, imagine you give the Chat API a query like "What was the weather where I was yesterday," along with a location tool (to return the user’s location given a timestamp) and a weather tool (to return the weather at a given location). Here's what happens:
- First, the model will make a plan, which consists in first calling the location tool (step 1), and then calling the weather tool (step 2) based on the output of the location tool.
- Then, the model receives the results of these tool calls and the underlying model's reasoning.
-- In a subsequent call, the model will determine that it still doesn’t have all the information required to answer, and select another tool.
+- In a subsequent call, the model will determine that it still doesn’t have all the information required to answer, and select another tool.
- Etc.
In the next section, there's a code snippet demonstrating how to do this programmatically.
@@ -263,7 +264,7 @@ curl --location 'https://api.cohere.ai/v1/chat' \
}'
```
-In the response below, the model selects a tool _and_ returns a message containing its reasoning.
+In the response below, the model selects a tool _and_ returns a message containing its reasoning.
```bash
{
@@ -380,7 +381,7 @@ curl --location 'https://api.cohere.ai/v1/chat' \
}'
```
-And here's the response. As you can see, the model outputs another reasoning step and another tool call request. Again, make sure to include the model reasoning step in the next request via the `chat_history`.
+And here's the response. As you can see, the model outputs another reasoning step and another tool call request. Again, make sure to include the model reasoning step in the next request via the `chat_history`.
```bash
{
@@ -557,7 +558,7 @@ curl --location 'https://api.cohere.ai/v1/chat' \
}'
```
-And in the final response, the model completes a sequence of tool calls and outputs the final message.
+And in the final response, the model completes a sequence of tool calls and outputs the final message.
```bash
{
diff --git a/fern/pages/text-generation/tools/tool-use.mdx b/fern/pages/text-generation/tools/tool-use.mdx
index 0b5d2de5..d5af67c0 100644
--- a/fern/pages/text-generation/tools/tool-use.mdx
+++ b/fern/pages/text-generation/tools/tool-use.mdx
@@ -2,7 +2,7 @@
title: "Single-Step Tool Use"
slug: "docs/tool-use"
-hidden: false
+hidden: false
description: "This document explains how tool use allows customers to connect large language models to external tools, enabling more advanced interactions beyond simple chat. Customers can provide tools to the model, which then dynamically selects the right tools to call based on user messages, and customers can execute these tool calls to enhance the model's responses."
image: "../../../assets/images/39c2d8c-cohere_meta_image.jpg"
keywords: "natural language processing, Cohere, large language models, tool use with LLMs, generative AI tool use"
@@ -10,6 +10,7 @@ keywords: "natural language processing, Cohere, large language models, tool use
createdAt: "Thu Feb 29 2024 18:14:38 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Mon Jun 17 2024 19:35:37 GMT+0000 (Coordinated Universal Time)"
---
+
Tool use is a technique which allows developers to connect Cohere's Command family of models to external tools like search engines, APIs, functions, databases, etc. It comes in two variants, single-step and multi-step, and this doc will focus on single-step tool use.
Single-step tool use enables a richer set of behaviors by leveraging data stored in tools, taking actions through APIs, interacting with a vector database, querying a search engine, etc. To use it, set `force_single_step=True` when creating the model.
@@ -20,7 +21,7 @@ Check out [this notebook](https://github.com/cohere-ai/notebooks/blob/main/noteb
## What Is Possible with Single-Step Tool Use?
-Single-step tool use (or “function calling”) opens up a wide range of new use cases. Below, we walk through a few examples.
+Single-step tool use (or “function calling”) opens up a wide range of new use cases. Below, we walk through a few examples.
It's now possible to reliably ask the model to recommend a tool (or set of tools) to use and offer advice on how to use them, which you can pass back to the model for more flexible workflows. Tool use allows your chatbot to interact with your CRM to change the status of a deal, for example, or to engage with a Python interpreter to conduct data science analysis.
@@ -36,8 +37,7 @@ We want to stress that it's the _developers_ executing tool calls and submitting
Here's a graphic that represents the four steps discussed below:
-
-
+
Feel free to refer back to it as you read on.
@@ -247,7 +247,7 @@ print(response.text)
# Final answer:
-# On the 29th of September 2023, there were 250 units sold, with a total sales amount of 10,000.
+# On the 29th of September 2023, there were 250 units sold, with a total sales amount of 10,000.
# The Electronics category contains three products:
@@ -297,7 +297,7 @@ for cite in response.citations:
There may be times when a single-step tool use query doesn't turn up a tool that will help answer the question. In those scenarios, the model will return an empty list of `tool_calls`, along with an empty string in the `text` field. In that case, call the Chat API again with an empty list of `tool_results`.
-```python python
+```python PYTHON
res = co.chat(message=message, tools=tools)
if not res.tool_calls:
@@ -321,7 +321,7 @@ To get good answers with single-step tool use, make sure that the tool name and
When you pass the tool results back to the model make sure that they are structured in a comprehensive way. For example, if you are passing the results of a `add_numbers` function:
-```
+```python PYTHON
outputs = [{"number": 2343}] # Not Great
outputs = [{"sum": 2343}] # Better
```
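Continuing the `add_numbers` example, here is a minimal sketch of attaching the descriptively keyed output to its originating call. The surrounding `call`/`outputs` structure is an assumption about the `tool_results` shape, used only for illustration:

```python
# Hedged sketch: package the tool's output under a descriptive key ("sum")
# and pair it with the call that produced it before sending it back.
def add_numbers(a, b):
    return a + b

call = {"name": "add_numbers", "parameters": {"a": 1200, "b": 1143}}
outputs = [{"sum": add_numbers(**call["parameters"])}]  # descriptive key

tool_results = [{"call": call, "outputs": outputs}]
```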
@@ -334,7 +334,7 @@ Here, we'll preview some of the functionality we plan on adding in the coming mo
The model can currently handle any tool provided by the developer. That having been said, Cohere has implemented some pre-defined tools that users can leverage out-of-the-box.
-Specifically we're going to roll out a **Python interpreter** tool and a **Web search** tool.
+Specifically we're going to roll out a **Python interpreter** tool and a **Web search** tool.
Please [reach out](mailto:MAXIMEVOISIN@COHERE.COM) to join the beta.
@@ -350,7 +350,7 @@ You should operate in single-step mode when you want the model to answer a quest
### If I provide many tools to the model, will the model ignore the tools that aren’t useful for the user message?
-- Yes. The model has the ability of assessing the value of a given tool in answering a given query, and will ignore any (and all) tools that don't serve that purpose.
+- Yes. The model can assess the value of a given tool in answering a given query, and will ignore any (and all) tools that don't serve that purpose.
### If I provide many tools to the model, can the model call each tool multiple times?
@@ -364,11 +364,10 @@ You should operate in single-step mode when you want the model to answer a quest
### Why is the output of a tool a list of objects?
- Some tools (such as search for example) might produce many different documents (eg: search results). In order for the model to cite the documents individually when generating the response, the output has to be a list of objects. If your tool returns a single object, wrap it in a list. For example:
- ```
+ ```python PYTHON
outputs=[{"sum": 25}]
```
### Are there any other caveats I should be aware of?
- Yes. An important one is that the model may return tool parameters that are invalid, so be sure to give everything a thorough once-over.
-