From 42e1f97494939e8aee27b980bea7417ce2b3339f Mon Sep 17 00:00:00 2001
From: fern <126544928+fern-bot@users.noreply.github.com>
Date: Mon, 12 Aug 2024 09:19:56 -0400
Subject: [PATCH] adding formatting (#20)
---
fern/assets/input.css | 18 ++++
.../chat-starting-the-training.mdx | 2 +-
.../integrations/chroma-and-cohere.mdx | 4 +-
.../integrations/elasticsearch-and-cohere.mdx | 4 +-
.../integrations/haystack-and-cohere.mdx | 2 +-
.../integrations/milvus-and-cohere.mdx | 4 +-
.../integrations/mongodb-and-cohere.mdx | 4 +-
.../integrations/opensearch-and-cohere.mdx | 2 +-
.../integrations/pinecone-and-cohere.mdx | 4 +-
.../integrations/qdrant-and-cohere.mdx | 4 +-
.../integrations/redis-and-cohere.mdx | 4 +-
.../integrations/vespa-and-cohere.mdx | 4 +-
.../integrations/weaviate-and-cohere.mdx | 4 +-
.../integrations/zilliz-and-cohere.mdx | 4 +-
.../intro-semantic-search/keyword-search.mdx | 2 +-
fern/pages/models/cohere-embed.mdx | 8 +-
fern/pages/text-embeddings/embeddings.mdx | 2 +-
...advanced-prompt-engineering-techniques.mdx | 94 +++++++++----------
.../crafting-effective-prompts.mdx | 40 ++++----
.../text-generation/tokens-and-tokenizers.mdx | 2 +-
fern/pages/text-generation/tools/tool-use.mdx | 2 +-
21 files changed, 116 insertions(+), 98 deletions(-)
diff --git a/fern/assets/input.css b/fern/assets/input.css
index 10f45414..5e9ee2cb 100644
--- a/fern/assets/input.css
+++ b/fern/assets/input.css
@@ -354,6 +354,16 @@ button[class^="Sidebar-link-buttonWrapper"] {
flex-direction: column;
}
+.fern-header .fern-button-text {
+ font-family: 'CohereText',Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Open Sans,Helvetica Neue,sans-serif !important;
+ -webkit-font-smoothing: unset !important;
+}
+
+.fern-sidebar-content .fern-button-text {
+ font-family: 'CohereText',Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Open Sans,Helvetica Neue,sans-serif !important;
+ -webkit-font-smoothing: unset !important;
+}
+
#content {
padding: 0 32px;
}
@@ -459,3 +469,11 @@ button[class^="Sidebar-link-buttonWrapper"] {
float: right !important;
margin-left: .75rem !important;
}
+
+.light-bg {
+ background-color: #E8E6DE !important;
+ border-color: #e0e0e0 !important;
+ border-radius: .5rem !important;
+ border-width: 1px !important;
+ padding: 15px !important;
+}
\ No newline at end of file
diff --git a/fern/pages/fine-tuning/chat-fine-tuning/chat-starting-the-training.mdx b/fern/pages/fine-tuning/chat-fine-tuning/chat-starting-the-training.mdx
index 567218c9..7dd67e64 100644
--- a/fern/pages/fine-tuning/chat-fine-tuning/chat-starting-the-training.mdx
+++ b/fern/pages/fine-tuning/chat-fine-tuning/chat-starting-the-training.mdx
@@ -241,7 +241,7 @@ By passing `return_prompt=True` in any message, you can see which preamble is be
Here's a Python script to make this clearer:
-```python python
+```python PYTHON
import cohere
co = cohere.Client('Your API key')
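# --- Continuation sketch (not part of the original hunk) ---
# The page builds this script around `return_prompt=True`; the call below is a
# minimal illustration of that idea. The model ID is a placeholder, and the
# response fields printed are assumptions based on the surrounding text.
response = co.chat(
    message="Hi there",
    model="your-finetuned-model-id",  # placeholder fine-tuned model ID
    return_prompt=True,
)

print(response.prompt)  # the rendered prompt, including the injected preamble
print(response.text)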
diff --git a/fern/pages/integrations/integrations/chroma-and-cohere.mdx b/fern/pages/integrations/integrations/chroma-and-cohere.mdx
index d90c3b2f..568f429c 100644
--- a/fern/pages/integrations/integrations/chroma-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/chroma-and-cohere.mdx
@@ -7,12 +7,12 @@ createdAt: "Thu May 23 2024 16:53:24 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:53:54 GMT+0000 (Coordinated Universal Time)"
---
-
+
Chroma is an open-source vector search engine that's quick to install and lets you start building with Python or JavaScript.
diff --git a/fern/pages/integrations/integrations/elasticsearch-and-cohere.mdx b/fern/pages/integrations/integrations/elasticsearch-and-cohere.mdx
index 228f1f16..9250a0d8 100644
--- a/fern/pages/integrations/integrations/elasticsearch-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/elasticsearch-and-cohere.mdx
@@ -11,7 +11,7 @@ createdAt: "Sun Apr 07 2024 20:15:08 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 30 2024 15:56:35 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Elasticsearch](https://www.elastic.co/search-labs/blog/elasticsearch-cohere-embeddings-support) has all the tools developers need to build next-generation search experiences with generative AI, and it supports native integration with [Cohere](https://www.elastic.co/search-labs/blog/elasticsearch-cohere-embeddings-support) through its [inference API](https://www.elastic.co/guide/en/elasticsearch/reference/master/semantic-search-inference.html).
@@ -34,7 +34,7 @@ This guide uses a dataset of Wikipedia articles to set up a pipeline for semanti
To see the full code sample, refer to this [notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/Cohere_Elastic_Guide.ipynb).
diff --git a/fern/pages/integrations/integrations/haystack-and-cohere.mdx b/fern/pages/integrations/integrations/haystack-and-cohere.mdx
index 36b50ed2..19fd386d 100644
--- a/fern/pages/integrations/integrations/haystack-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/haystack-and-cohere.mdx
@@ -11,7 +11,7 @@ createdAt: "Tue Feb 27 2024 20:06:57 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 17:07:17 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Haystack](https://github.com/deepset-ai/haystack) is an open-source LLM framework in Python by [deepset](https://www.deepset.ai/) for building customizable, production-ready LLM applications. You can use Cohere's `/embed`, `/generate`, `/chat`, and `/rerank` models with Haystack.
diff --git a/fern/pages/integrations/integrations/milvus-and-cohere.mdx b/fern/pages/integrations/integrations/milvus-and-cohere.mdx
index ffc99e73..a1edd7fc 100644
--- a/fern/pages/integrations/integrations/milvus-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/milvus-and-cohere.mdx
@@ -7,14 +7,14 @@ createdAt: "Thu May 23 2024 16:59:08 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:59:13 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Milvus](https://milvus.io/) is a highly flexible, reliable, and blazing-fast cloud-native, open-source vector database. It powers embedding similarity search and AI applications and strives to make vector databases accessible to every organization. Milvus is a graduated-stage project of the LF AI & Data Foundation.
The following [guide](https://milvus.io/docs/integrate_with_cohere.md) walks through how to integrate [Cohere embeddings](/docs/embeddings) with Milvus.
diff --git a/fern/pages/integrations/integrations/mongodb-and-cohere.mdx b/fern/pages/integrations/integrations/mongodb-and-cohere.mdx
index 97248f9f..eed0d377 100644
--- a/fern/pages/integrations/integrations/mongodb-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/mongodb-and-cohere.mdx
@@ -11,14 +11,14 @@ createdAt: "Thu May 23 2024 16:41:27 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 17:06:29 GMT+0000 (Coordinated Universal Time)"
---
-
+
MongoDB Atlas Vector Search is a fully managed vector search platform from MongoDB. It can be used with Cohere's Embed and Rerank models to easily build semantic search or retrieval-augmented generation (RAG) systems with your data from MongoDB.
The following guide walks through how to integrate Cohere models with MongoDB Atlas Vector Search.
diff --git a/fern/pages/integrations/integrations/opensearch-and-cohere.mdx b/fern/pages/integrations/integrations/opensearch-and-cohere.mdx
index 83e6ab5f..32c517e2 100644
--- a/fern/pages/integrations/integrations/opensearch-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/opensearch-and-cohere.mdx
@@ -10,7 +10,7 @@ keywords: "OpenSearch, Cohere"
createdAt: "Fri Feb 02 2024 15:17:19 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 17:09:12 GMT+0000 (Coordinated Universal Time)"
---
-
+
[OpenSearch](https://opensearch.org/platform/search/vector-database.html) is an open-source, distributed search and analytics engine platform that allows users to search, analyze, and visualize large volumes of data in real time. When it comes to text search, OpenSearch is well-known for powering keyword-based (also called lexical) search methods. OpenSearch also supports vector search and integrates with Cohere through [3rd-Party ML Connectors](https://opensearch.org/docs/latest/ml-commons-plugin/remote-models/connectors/) as well as Amazon Bedrock.
diff --git a/fern/pages/integrations/integrations/pinecone-and-cohere.mdx b/fern/pages/integrations/integrations/pinecone-and-cohere.mdx
index 0b749062..197ec2fe 100644
--- a/fern/pages/integrations/integrations/pinecone-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/pinecone-and-cohere.mdx
@@ -7,12 +7,12 @@ createdAt: "Thu May 23 2024 16:56:18 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:57:06 GMT+0000 (Coordinated Universal Time)"
---
-
+
The [Pinecone](https://www.pinecone.io/) vector database makes it easy to build high-performance vector search applications. Use Cohere to generate language embeddings, then store them in Pinecone and use them for Semantic Search.
diff --git a/fern/pages/integrations/integrations/qdrant-and-cohere.mdx b/fern/pages/integrations/integrations/qdrant-and-cohere.mdx
index eba83a7c..c3e67029 100644
--- a/fern/pages/integrations/integrations/qdrant-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/qdrant-and-cohere.mdx
@@ -7,14 +7,14 @@ createdAt: "Thu May 23 2024 16:54:52 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:55:09 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Qdrant](https://qdrant.tech/) is an open-source vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.
Qdrant is written in Rust, which makes it fast and reliable even under high load.
diff --git a/fern/pages/integrations/integrations/redis-and-cohere.mdx b/fern/pages/integrations/integrations/redis-and-cohere.mdx
index fb1d4188..494414a0 100644
--- a/fern/pages/integrations/integrations/redis-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/redis-and-cohere.mdx
@@ -11,7 +11,7 @@ createdAt: "Mon Feb 26 2024 22:22:44 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 17:06:49 GMT+0000 (Coordinated Universal Time)"
---
-
+
[RedisVL](https://www.redisvl.com/) provides a powerful, dedicated Python client library for using Redis as a vector database. This guide walks through how to integrate [Cohere embeddings](/docs/embeddings) with Redis, using a dataset of Wikipedia articles to set up a pipeline for semantic search. It will cover:
@@ -23,7 +23,7 @@ updatedAt: "Thu May 23 2024 17:06:49 GMT+0000 (Coordinated Universal Time)"
To see the full code sample, refer to this [notebook](https://github.com/cohere-ai/notebooks/blob/main/notebooks/Cohere_Redis_Guide.ipynb).
diff --git a/fern/pages/integrations/integrations/vespa-and-cohere.mdx b/fern/pages/integrations/integrations/vespa-and-cohere.mdx
index 600a2619..268fe80b 100644
--- a/fern/pages/integrations/integrations/vespa-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/vespa-and-cohere.mdx
@@ -7,12 +7,12 @@ createdAt: "Thu May 23 2024 16:52:12 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:52:39 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Vespa](https://vespa.ai/) is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. Integrated machine-learned model inference allows you to apply AI to make sense of your data in real time.
diff --git a/fern/pages/integrations/integrations/weaviate-and-cohere.mdx b/fern/pages/integrations/integrations/weaviate-and-cohere.mdx
index 6baca405..4d97097c 100644
--- a/fern/pages/integrations/integrations/weaviate-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/weaviate-and-cohere.mdx
@@ -7,14 +7,14 @@ createdAt: "Thu May 23 2024 16:55:16 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 16:56:09 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Weaviate](https://weaviate.io/) is an open-source vector search engine that stores both objects and vectors, allowing you to combine vector search with structured filtering.
The `text2vec-cohere` module allows you to use [Cohere embeddings](/docs/embeddings) directly in the Weaviate vector search engine as a vectorization module.
diff --git a/fern/pages/integrations/integrations/zilliz-and-cohere.mdx b/fern/pages/integrations/integrations/zilliz-and-cohere.mdx
index 35921934..9c035220 100644
--- a/fern/pages/integrations/integrations/zilliz-and-cohere.mdx
+++ b/fern/pages/integrations/integrations/zilliz-and-cohere.mdx
@@ -7,14 +7,14 @@ createdAt: "Thu May 23 2024 17:00:11 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu May 23 2024 20:28:12 GMT+0000 (Coordinated Universal Time)"
---
-
+
[Zilliz Cloud](https://zilliz.com/cloud) is a cloud-native vector database that stores, indexes, and searches billions of embedding vectors to power enterprise-grade similarity search, recommender systems, anomaly detection, and more. Zilliz Cloud provides a fully managed Milvus service, built by the creators of Milvus, that allows for easy integration with vectorizers from Cohere and other popular models. Purpose-built to solve the challenge of managing billions of embeddings, Zilliz Cloud makes it easy to build applications at scale.
The following [guide](https://docs.zilliz.com/docs/question-answering-using-zilliz-cloud-and-cohere) walks through how to integrate [Cohere embeddings](/docs/embeddings) with Zilliz.
diff --git a/fern/pages/llm-university/intro-semantic-search/keyword-search.mdx b/fern/pages/llm-university/intro-semantic-search/keyword-search.mdx
index 8f25411a..d0cfa817 100644
--- a/fern/pages/llm-university/intro-semantic-search/keyword-search.mdx
+++ b/fern/pages/llm-university/intro-semantic-search/keyword-search.mdx
@@ -19,7 +19,7 @@ This chapter comes with a corresponding [Colab notebook](https://colab.research.
In order to store and query the Wikipedia dataset, we’ll use the [Weaviate vector database](https://weaviate.io/?ref=txt.cohere.com), which will give us a range of benefits. In simple terms, a vector database is a place where one can store data objects and vector embeddings, access them, and perform operations on them easily. For example, finding the nearest neighbors of a vector in a dataset is a lengthy process, which is sped up significantly by using a vector database. This is done with the following code.
-```python python
+```python PYTHON
import weaviate
import cohere
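# --- Continuation sketch (assumptions: Weaviate Python client v3, placeholder URL and keys) ---
# The guide stores the Wikipedia embeddings in Weaviate; forwarding the Cohere key
# lets Weaviate's Cohere modules call the Embed API on your behalf.
co = cohere.Client("COHERE_API_KEY")

client = weaviate.Client(
    url="https://your-instance.weaviate.network",
    auth_client_secret=weaviate.auth.AuthApiKey(api_key="WEAVIATE_API_KEY"),
    additional_headers={"X-Cohere-Api-Key": "COHERE_API_KEY"},
)

print(client.is_ready())  # True when the instance is reachable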
diff --git a/fern/pages/models/cohere-embed.mdx b/fern/pages/models/cohere-embed.mdx
index dbbb5066..a4d46946 100644
--- a/fern/pages/models/cohere-embed.mdx
+++ b/fern/pages/models/cohere-embed.mdx
@@ -16,8 +16,8 @@ Embed models can be used to generate embeddings from text or classify it based o
| Latest Model | Description | Dimensions | Max Tokens (Context Length) | Similarity Metric | Endpoints |
|-----------------------------|------------------------------------------------------------------------------------------------------|------------|-----------------------------|-------------------|-------------------------------------------------------------------------------------------|
-| `embed-english-v3.0` | A model that allows for text to be classified or turned into embeddings. English only. | 1024 | 512 | Cosine Similarity | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
-| `embed-english-light-v3.0` | A smaller, faster version of `embed-english-v3.0`. Almost as capable, but a lot faster. English only.| 384 | 512 | Cosine Similarity | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
+| `embed-english-v3.0` | A model that allows for text to be classified or turned into embeddings. English only. | 1024 | 512 | Cosine Similarity, Dot Product Similarity, Euclidean Distance | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
+| `embed-english-light-v3.0` | A smaller, faster version of `embed-english-v3.0`. Almost as capable, but a lot faster. English only.| 384 | 512 | Cosine Similarity, Dot Product Similarity, Euclidean Distance | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
| `embed-english-v2.0` | Our older embeddings model that allows for text to be classified or turned into embeddings. English only | 4096 | 512 | Cosine Similarity | [Classify](/reference/classify), [Embed](/reference/embed) |
| `embed-english-light-v2.0` | A smaller, faster version of embed-english-v2.0. Almost as capable, but a lot faster. English only. | 1024 | 512 | Cosine Similarity | [Classify](/reference/classify), [Embed](/reference/embed) |
@@ -27,8 +27,8 @@ Embed models can be used to generate embeddings from text or classify it based o
| Latest Model | Description | Dimensions | Max Tokens (Context Length) | Similarity Metric | Endpoints |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------|-------------------------|---------------------------------------------------------------------------------------------------|
-| `embed-multilingual-v3.0` | Provides multilingual classification and embedding support. [See supported languages here.](/docs/supported-languages) | 1024 | 512 | Cosine Similarity | [Embed](/reference/embed), [Embed Jobs](/reference/embed-jobs) |
-| `embed-multilingual-light-v3.0` | A smaller, faster version of `embed-multilingual-v3.0`. Almost as capable, but a lot faster. Supports multiple languages. | 384 | 512 | Cosine Similarity | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
+| `embed-multilingual-v3.0` | Provides multilingual classification and embedding support. [See supported languages here.](/docs/supported-languages) | 1024 | 512 | Cosine Similarity, Dot Product Similarity, Euclidean Distance | [Embed](/reference/embed), [Embed Jobs](/reference/embed-jobs) |
+| `embed-multilingual-light-v3.0` | A smaller, faster version of `embed-multilingual-v3.0`. Almost as capable, but a lot faster. Supports multiple languages. | 384 | 512 | Cosine Similarity, Dot Product Similarity, Euclidean Distance | [Embed](/reference/embed),
[Embed Jobs](/reference/embed-jobs) |
| `embed-multilingual-v2.0` | Provides multilingual classification and embedding support. [See supported languages here.](/docs/supported-languages) | 768 | 256 | Dot Product Similarity | [Classify](/reference/classify), [Embed](/reference/embed) |
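Since the updated rows list cosine similarity, dot product similarity, and Euclidean distance for the v3 models, a short sketch of computing those metrics over two embedding vectors (placeholder values, using numpy) may be a useful companion to the tables:
```python PYTHON
import numpy as np

# Placeholder vectors; in practice these come from co.embed(...).embeddings
a = np.array([0.1, 0.3, 0.5])
b = np.array([0.2, 0.1, 0.4])

cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot_product_similarity = np.dot(a, b)
euclidean_distance = np.linalg.norm(a - b)

print(cosine_similarity, dot_product_similarity, euclidean_distance)
```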
diff --git a/fern/pages/text-embeddings/embeddings.mdx b/fern/pages/text-embeddings/embeddings.mdx
index 929e1a07..687a77dd 100644
--- a/fern/pages/text-embeddings/embeddings.mdx
+++ b/fern/pages/text-embeddings/embeddings.mdx
@@ -52,7 +52,7 @@ Cohere embeddings are optimized for different types of inputs. For example, when
In addition to `embed-english-v3.0`, we offer a best-in-class multilingual model [embed-multilingual-v3.0](/docs/embed-2#multi-lingual-models) with support for over 100 languages, including Chinese, Spanish, and French. This model can be used with the Embed API, just like its English counterpart:
-```python python
+```python PYTHON
import cohere
co = cohere.Client(api_key="")
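# --- Continuation sketch (not part of the original hunk) ---
# Minimal multilingual Embed call; the texts are placeholders and the
# input_type values follow the v3 embed models described on this page.
response = co.embed(
    texts=["Hello from Cohere!", "Bonjour de Cohere !", "Hola desde Cohere!"],
    model="embed-multilingual-v3.0",
    input_type="search_document",  # use "search_query" when embedding queries
)

print(len(response.embeddings), len(response.embeddings[0]))  # 3 vectors, 1024 dimensions each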
diff --git a/fern/pages/text-generation/prompt-engineering/advanced-prompt-engineering-techniques.mdx b/fern/pages/text-generation/prompt-engineering/advanced-prompt-engineering-techniques.mdx
index ddb3f335..bc4f6ce3 100644
--- a/fern/pages/text-generation/prompt-engineering/advanced-prompt-engineering-techniques.mdx
+++ b/fern/pages/text-generation/prompt-engineering/advanced-prompt-engineering-techniques.mdx
@@ -14,7 +14,7 @@ As we develop these techniques, we will work through an example where our aim is
Q: On the issue of whether David is fast, the fact that David set a high school track record.
Is there hearsay?
A: No
-
+
Q: On the issue of whether Rebecca was ill, the fact that Rebecca told Ronald that she was unwell.
Is there hearsay?
A: Yes
@@ -26,14 +26,14 @@ Before we apply any specific prompting techniques, we can see that simply prompt
[prompt]
On the issue of Albert's wellbeing after the accident, Angela testified
that he gave a thumbs up when asked how he was feeling. Is there hearsay?
-
+
[completion]
No, there is no hearsay. Hearsay is a statement that is repeated by someone
other than the person who originally made the statement. Angela's testimony
is not hearsay because she is testifying about her own personal knowledge of
the event; she saw Albert give a thumbs up sign. This is considered direct evidence
and would likely be admissible in court.
-
+
The thumbs up sign can be interpreted as a current state of well-being and
could hold some weight in refuting concerns about Albert's health after the
accident. However, the defense could argue that a thumbs up sign is not a
@@ -71,10 +71,10 @@ The obvious thing missing in the prompt above is concise and unambiguous instruc
[prompt]
Hearsay is an out-of-court statement introduced to prove the truth of the matter
asserted. Answer the following question regarding hearsay with either Yes or No.
-
+
On the issue of Albert's wellbeing after the accident, Angela testified that he
gave a thumbs up when asked how he was feeling. Is there hearsay?
-
+
[completion]
Yes.
@@ -93,35 +93,35 @@ To improve the above question, we can include several positive and negative exam
```
[prompt]
Hearsay is an out-of-court statement introduced to prove the truth of the matter asserted.
-
+
Q: On the issue of whether David is fast, the fact that David set a high school track record.
Is there hearsay?
A: No
-
+
Q: On the issue of whether Rebecca was ill, the fact that Rebecca told Ronald that she was unwell.
Is there hearsay?
A: Yes
-
+
Q: To prove that Tim was a soccer fan, the fact that Tim told Jimmy that "Real Madrid was the best
soccer team in the world." Is there hearsay?
A: No
-
+
Q: When asked by the attorney on cross-examination, Alice testified that she had "never seen the
plaintiff before, and had no idea who she was." Is there hearsay?
A: No
-
+
Q: On the issue of whether Martin punched James, the fact that Martin smiled and nodded when asked
if he did so by an officer on the scene. Is there hearsay?
A: Yes
-
+
On the issue of Albert's wellbeing after the accident, Angela testified that he gave a thumbs up
when asked how he was feeling. Is there hearsay?
A:
-
+
[completion]
Yes, Angela's testimony about Albert's non-verbal conduct is hearsay, as it's an out-of-court
statement (Albert's thumbs up) being used to prove the truth of the matter asserted (that Albert was feeling well).
-
+
Hearsay is defined as an out-of-court statement made by someone other than the witness testifying,
and it's offered to prove the truth of the content of the statement. In this scenario, Albert's
gesture is being used as evidence of his wellbeing, making it hearsay under the legal
@@ -142,26 +142,26 @@ There are several different ways to incorporate CoT prompting into a task. With
```
[prompt]
Hearsay is an out-of-court statement introduced to prove the truth of the matter asserted.
-
+
Q: On the issue of Albert's wellbeing after the accident, Angela testified that he gave a
thumbs up when asked how he was feeling. Is there hearsay?
A: Work through the problem step by step first and then answer the question.
-
+
[completion]
Yes, let's work through this scenario step by step to identify if there's hearsay involved.
-
+
1. Angela's Testimony: According to Angela's testimony, Albert gave a "thumbs up" when asked
about his well-being. This statement was made by Angela in court, which means it's in-court testimony.
-
+
2. Nature of the Statement: Angela's statement about Albert's "thumbs up" gesture is being
used to imply that Albert is doing well or feeling good after the accident.
-
+
3. Out-of-Court Statement: The actual statement being discussed here - "I'm feeling good" - was
allegedly made by Albert out of court. This is a classic example of an out-of-court statement.
-
+
4. Purpose of the Statement: Angela's testimony, including the reference to Albert's gesture,
is being used to prove Albert's subjective state of well-being.
-
+
Therefore, yes, there is hearsay in Angela's testimony. The statement made by Albert, describing
his feelings, is hearsay because it's an out-of-court assertion being used to prove the truth
of the matter - in this case, that Albert was feeling well after the accident.
@@ -173,40 +173,40 @@ This answer is quite satisfying not only because we get the correct answer, but
```
[prompt]
Hearsay is an out-of-court statement introduced to prove the truth of the matter asserted.
-
+
Q: On the issue of whether David is fast, the fact that David set a high school track record.
Is there hearsay?
Reasoning: The statement is non-assertive conduct.
A: No
-
+
Q: On the issue of whether Rebecca was ill, the fact that Rebecca told Ronald that she was
unwell. Is there hearsay?
Reasoning: The statement is standard hearsay.
A: Yes
-
+
Q: To prove that Tim was a soccer fan, the fact that Tim told Jimmy that "Real Madrid was
the best soccer team in the world." Is there hearsay?
Reasoning: The statement is not introduced to prove truth.
A: No
-
+
Q: When asked by the attorney on cross-examination, Alice testified that she had "never seen
the plaintiff before, and had no idea who she was." Is there hearsay?
Reasoning: The statement was made in-court.
A: No
-
+
Q: On the issue of whether Martin punched James, the fact that Martin smiled and nodded when
asked if he did so by an officer on the scene. Is there hearsay?
Reasoning: The statement is non-verbal hearsay.
A: Yes
-
+
Q: On the issue of Albert's wellbeing after the accident, Angela testified that he gave a
thumbs up when asked how he was feeling. Is there hearsay?
-
+
[completion]
Yes. Angela's testimony about Albert's non-verbal conduct, i.e., giving a "thumbs up," falls
under the definition of hearsay, as it is an out-of-court statement (Albert's gesture) used
to prove the truth of the matter asserted (that Albert was doing well after the accident).
-
+
Hearsay is typically defined as an out-of-court statement offered in court for the truth of
the matter asserted. Non-verbal conduct, such as gestures or nods, can indeed constitute
hearsay if they are intended to assert a fact. In this scenario, Albert's thumbs up is
@@ -222,49 +222,49 @@ Hearsay is an out-of-court statement introduced to prove the truth of the matter
Given a statement, respond with a JSON object that has two keys: `reasoning` and `answer`.
The first key's value should contain the reasoning used to arrive at the answer.
The second key's value should contain either "Yes" or "No".
-
+
Q: On the issue of whether David is fast, the fact that David set a high school track record.
Is there hearsay?
{
"reasoning": "The statement is non-assertive conduct."
"answer": "No"
}
-
+
Q: On the issue of whether Rebecca was ill, the fact that Rebecca told Ronald that she was
unwell. Is there hearsay?
{
"reasoning": "The statement is standard hearsay."
"answer": "Yes"
}
-
+
Q: To prove that Tim was a soccer fan, the fact that Tim told Jimmy that "Real Madrid was
the best soccer team in the world." Is there hearsay?
{
"reasoning": "The statement is not introduced to prove truth."
"answer": "No"
}
-
+
Q: When asked by the attorney on cross-examination, Alice testified that she had "never seen
the plaintiff before, and had no idea who she was." Is there hearsay?
{
"reasoning": "The statement was made in-court."
"answer": "No"
}
-
+
Q: On the issue of whether Martin punched James, the fact that Martin smiled and nodded
when asked if he did so by an officer on the scene. Is there hearsay?
{
"reasoning": "The statement is non-verbal hearsay."
"answer": "Yes"
}
-
+
Q: On the issue of Albert's wellbeing after the accident, Angela testified that he gave
a thumbs up when asked how he was feeling. Is there hearsay?
{
"reasoning": "",
"answer": ""
}
-
+
[completion]
{
"reasoning": "The statement is a non-verbal assertion, which also counts as hearsay.",
@@ -284,33 +284,33 @@ For example, instead of asking the model to "work through the problem step by st
```
[prompt 1]
Hearsay is an out-of-court statement introduced to prove the truth of the matter asserted.
-
+
In determining whether a piece of evidence meets the definition of hearsay, lawyers ask three
questions:
-
+
1. Was there a statement?
2. Was it made outside of court?
3. Is it being introduced to prove the truth of the matter asserted?
-
+
Apply this reasoning to the following question.
-
+
Q: On the issue of Albert's wellbeing after the accident, Angela testified that he gave a
thumbs up when asked how he was feeling.
-
+
[completion]
Yes, this is hearsay, based on the definition you provided.
-
+
1. There was a statement: Angela testified that Albert gave a thumbs up. This would constitute
a gesture as a form of statement, indicating Albert's state of being.
-
+
2. It was made outside of court: According to the scenario, Angela's observation of Albert's
gesture occurred outside the courtroom, making it an out-of-court statement.
-
+
3. It's being introduced to prove the truth of the matter asserted: Angela's testimony,
including the thumbs-up gesture, is being used as evidence of Albert's well-being after the
accident. The gesture is being used literally for its assertional value, suggesting that Albert
is doing well, or at least, that he is feeling fine.
-
+
Therefore, Angela's testimony regarding Albert's gesture fits the definition of hearsay and
would be treated as such in a legal setting.
@@ -321,13 +321,13 @@ The issue was analyzed correctly in the above completion, but we are seeking a c
```
[prompt 2]
Given the question below and the accompanying analysis, answer with only "Yes" or "No".
-
+
## question
{question}
-
+
## analysis
{completion_from_prompt_1}
-
+
[completion]
Yes
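A minimal sketch of wiring this two-prompt flow together with the Chat API (the prompt strings mirror the examples above; the client setup and default model are assumptions) could look like:
```python PYTHON
import cohere

co = cohere.Client(api_key="")

question = (
    "On the issue of Albert's wellbeing after the accident, Angela testified "
    "that he gave a thumbs up when asked how he was feeling."
)

# Prompt 1: ask for the three-question hearsay analysis.
prompt_1 = (
    "Hearsay is an out-of-court statement introduced to prove the truth of the matter asserted.\n\n"
    "In determining whether a piece of evidence meets the definition of hearsay, lawyers ask three questions:\n\n"
    "1. Was there a statement?\n"
    "2. Was it made outside of court?\n"
    "3. Is it being introduced to prove the truth of the matter asserted?\n\n"
    "Apply this reasoning to the following question.\n\n"
    f"Q: {question}"
)
analysis = co.chat(message=prompt_1).text

# Prompt 2: pass the analysis back in and constrain the answer to Yes/No.
prompt_2 = (
    'Given the question below and the accompanying analysis, answer with only "Yes" or "No".\n\n'
    f"## question\n{question}\n\n"
    f"## analysis\n{analysis}"
)
print(co.chat(message=prompt_2).text)  # expected: Yes
```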
diff --git a/fern/pages/text-generation/prompt-engineering/crafting-effective-prompts.mdx b/fern/pages/text-generation/prompt-engineering/crafting-effective-prompts.mdx
index 826b3cc6..3cc780cc 100644
--- a/fern/pages/text-generation/prompt-engineering/crafting-effective-prompts.mdx
+++ b/fern/pages/text-generation/prompt-engineering/crafting-effective-prompts.mdx
@@ -17,14 +17,14 @@ For example:
```
## Instructions
Summarize the text below.
-
+
## Input Text
{input_text}
```
Then use the Chat API to send a message to the model:
-```python python
+```python PYTHON
import cohere
co = cohere.Client(api_key="")
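# --- Continuation sketch (not part of the original hunk) ---
# Fills the instruction template shown above with a placeholder input_text and
# sends it through the Chat API; the model name is an assumption.
input_text = "..."  # the document to summarize

prompt = f"""## Instructions
Summarize the text below.

## Input Text
{input_text}"""

response = co.chat(message=prompt, model="command-r-plus")
print(response.text)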
@@ -54,7 +54,7 @@ Below there is a long form news article discussing the 1972 Canada–USSR Summit
an eight-game ice hockey series between the Soviet Union and Canada, held in September 1972.
Please summarize the salient points of the text and do so in a flowing high natural language
quality text. Use bullet points where appropriate.
-
+
## News Article
{news_article}
@@ -70,7 +70,7 @@ For this method, we recommend providing documents through the documents paramete
For the example above, we can split the original news article into different sections and attach them via the `documents` parameter. The Chat API will then provide us not only with the completion but also citations that ground information from the documents. See the following:
-```python python
+```python PYTHON
# Sections from the original news article
documents = [
{"title": "background", "snippet": "From the beginning of the IIHF Ice Hockey World Championships in 1920, Canada would send a ..."},
@@ -96,23 +96,23 @@ The model returns a high quality summary in `response.text`:
The 1972 Canada–USSR Summit Series marked a significant moment in the history of ice hockey,
showcasing a high-stakes competition between the Canadian national team and the Soviet Union's
elite hockey squad. Here are some key points about the series:
-
+
- The Summit Series represented a clash of hockey powers, with Canada, dominated by National Hockey
League (NHL) players, expected to beat the Soviet Union's team effortlessly. The Canadian side was
favoured by fans and pundits alike, putting the Soviets down as amateurs.
-
+
- However, the Soviet Union's purposeful shift towards adopting the Canadian style of play since
the 1940s, along with their full-time dedication to the sport, had paid off. The series saw them
emerge as formidable opponents, surprising Canada and many fans.
-
+
- Canada ultimately won the series, but the Soviet Union's impressive performance earned them the
respect of their competitors. The Canadians' preconceptions were challenged, and the series marked
a turning point in acknowledging the global nature of elite hockey.
-
+
- The success of the Summit Series led the NHL to organise further exhibitions against top Soviet
clubs, including the 1975-76 Super Series. It also paved the way for the inaugural Canada Cup
"best-on-best" tournament in 1976.
-
+
- The series heightened the profile of international ice hockey and cemented the fact that national
teams could now rival the long-dominant Canadians, transforming the sport into a truly global
phenomenon.
@@ -170,24 +170,24 @@ Then, `print(insert_citations(response.text, response.citations))` results in:
The 1972 Canada–USSR Summit Series marked a significant moment in the history of ice hockey,
showcasing a high-stakes competition between the Canadian national team [1] and the Soviet
Union's elite hockey squad. [1] Here are some key points about the series:
-
+
- The Summit Series represented a clash of hockey powers [1, 2], with Canada, dominated by
National Hockey League (NHL) players [2], expected to beat the Soviet Union's team effortlessly.[2]
The Canadian side was favoured by fans and pundits alike [2], putting the Soviets down as
amateurs. [2]
-
+
- However, the Soviet Union's purposeful shift towards adopting the Canadian style of play since
the 1940s [1], along with their full-time dedication to the sport [1], had paid off. The series
saw them emerge as formidable opponents [2], surprising Canada [2, 3] and many fans. [2]
-
+
- Canada ultimately won the series [3], but the Soviet Union's impressive performance earned them
the respect of their competitors. [3] The Canadians' preconceptions were challenged [3], and the
series marked a turning point in acknowledging the global nature of elite hockey. [3]
-
+
- The success of the Summit Series led the NHL to organise further exhibitions against top Soviet
clubs [3], including the 1975-76 Super Series. [3] It also paved the way for the inaugural Canada
Cup "best-on-best" tournament in 1976. [3]
-
+
- The series heightened the profile of international ice hockey [3] and cemented the fact that
national teams could now rival the long-dominant Canadians [3], transforming the sport into a
truly global phenomenon. [3]
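The `insert_citations` helper referenced above isn't shown in this excerpt; a hypothetical implementation with that behavior (appending bracketed document numbers after each cited span, assuming each citation exposes `start`, `end`, and `document_ids` and that document IDs end in a numeric suffix such as `doc_0`) might look roughly like:
```python PYTHON
def insert_citations(text: str, citations) -> str:
    """Hypothetical sketch: append " [n, ...]" markers after each cited span."""
    offset = 0
    for citation in sorted(citations, key=lambda c: c.start):
        doc_numbers = sorted(
            int(doc_id.split("_")[-1]) + 1 for doc_id in citation.document_ids
        )
        marker = " [" + ", ".join(str(n) for n in doc_numbers) + "]"
        insert_at = citation.end + offset
        text = text[:insert_at] + marker + text[insert_at:]
        offset += len(marker)
    return text
```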
@@ -200,14 +200,14 @@ LLMs respond well when they have specific examples to work from. For example, in
```
## Instructions
Below there is a long form news article discussing the 1972 Canada–USSR Summit Series, an eight-game ice hockey series between the Soviet Union and Canada, held in September 1972. Please summarize the salient points of the text and do so in a flowing high natural language quality text. Use bullet points where appropriate.
-
+
## Example Output
High level summary:
3 important events related to the series:
*
*
*
-
+
## News Article
{news_article}
@@ -237,9 +237,9 @@ Be explicit in **exactly** what you want the model to do. Be as assertive as pos
```
## Instructions
Below there is a long form news article discussing the 1972 Canada–USSR Summit Series, an eight-game ice hockey series between the Soviet Union and Canada, held in September 1972. Please summarize the salient points of the text and do so in a flowing high natural language quality text. Use bullet points where appropriate.
-
+
Paraphrase the content into re-written, easily digestible sentences. Do not extract full sentences from the input text.
-
+
## News Article
{news_article}
@@ -262,7 +262,7 @@ LLMs can easily be constrained by beginning the completion as part of the input
...
Please generate the response in a well-formed HTML document. The completion should begin as
follows:
-
+
@@ -275,7 +275,7 @@ Finally, task splitting should be used when the requested task is complex and ca
```
## Instructions
Using the included text below, perform the following steps:
-
+
1. Read through the entire text carefully
2. Extract the most important paragraph
3. From the paragraph extracted in step 2, extract the most important sentence
diff --git a/fern/pages/text-generation/tokens-and-tokenizers.mdx b/fern/pages/text-generation/tokens-and-tokenizers.mdx
index 83e211af..67b243fa 100644
--- a/fern/pages/text-generation/tokens-and-tokenizers.mdx
+++ b/fern/pages/text-generation/tokens-and-tokenizers.mdx
@@ -38,7 +38,7 @@ Cohere offers the [tokenize](/reference/tokenize) and [detokenize](/reference/de
Cohere Tokenizers are publicly hosted and can be used locally to avoid network calls. If you are using the Python SDK, the `tokenize` and `detokenize` functions will take care of downloading and caching the tokenizer for you.
-```python python
+```python PYTHON
import cohere
co = cohere.Client(api_key="")
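# --- Continuation sketch (not part of the original hunk) ---
# Round-trips a string through the tokenize/detokenize helpers; the model name
# is an assumption, and the SDK caches the tokenizer after the first call.
tokens = co.tokenize(text="caterpillar", model="command-r")
print(tokens.tokens)         # token ids
print(tokens.token_strings)  # the corresponding string pieces

detokenized = co.detokenize(tokens=tokens.tokens, model="command-r")
print(detokenized.text)      # "caterpillar"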
diff --git a/fern/pages/text-generation/tools/tool-use.mdx b/fern/pages/text-generation/tools/tool-use.mdx
index 0b5d2de5..f8951ff9 100644
--- a/fern/pages/text-generation/tools/tool-use.mdx
+++ b/fern/pages/text-generation/tools/tool-use.mdx
@@ -297,7 +297,7 @@ for cite in response.citations:
There may be times when a single-step tool use query doesn't turn up a tool that will help answer the question. In those scenarios the model will return an empty list of `tool_calls`, along with an empty string in the `text` field. In that case, call the Chat API again with an empty list of `tool_results`:
-```python python
+```python PYTHON
res = co.chat(message=message, tools=tools)
if not res.tool_calls:
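    # --- Continuation sketch (the hunk ends here) ---
    # Per the sentence above: no tool was useful, so call the Chat API again with
    # an empty tool_results list and the model will answer directly.
    res = co.chat(message=message, tools=tools, tool_results=[])

print(res.text)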