
Fixing accordion titles.
Trent Fowler authored and Trent Fowler committed Sep 25, 2024
1 parent aaeb321 commit 8376da1
Showing 1 changed file with 14 additions and 14 deletions.
28 changes: 14 additions & 14 deletions fern/pages/get-started/frequently-asked-questions.mdx
@@ -14,32 +14,32 @@ Here, we'll walk through some common questions we get about how Cohere's models
<AccordionGroup>
## Cohere Models

<Accordion>"What is the difference between Command R+ and other Command models?"</Accordion>
<Accordion title="What is the difference between Command R+ and other Command models?"></Accordion>

Command R+ is best suited for workflows that lean on complex RAG functionality and multi-step tool use (agents). Command R, on the other hand, is great for simpler retrieval augmented generation (RAG) and single-step tool use tasks, as well as for applications where price is a major consideration. We offer a full model overview in our [documentation](https://docs.cohere.com/docs/models).

<Accordion>"What is the difference between Aya and Command R models?"</Accordion>
<Accordion title="What is the difference between Aya and Command R models?"></Accordion>

Aya specializes in human-like multilingual text generation and conversations, ideal for content creation and chatbots. Command R excels at understanding and executing instructions, enabling interactive applications and data-driven tasks. This makes it more suitable for many enterprise use cases.

You can check out [this link](https://cohere.com/research/aya) to learn more about Aya models, datasets and related research papers.

<Accordion>"How does Cohere's language model compare to other LLMs like GPT-4, ChatGPT, Claude, Gemini, and others?"</Accordion>
<Accordion title="How does Cohere's language model compare to other LLMs like GPT-4, ChatGPT, Claude, Gemini, and others?"></Accordion>

Cohere’s Command models have strong performance across enterprise tasks such as summarization, multilingual use cases, and retrieval augmented generation. We also offer the widest range of deployment options, which you can explore [here](https://cohere.com/deployment-options).

<Accordion>"How can I use Cohere's models for various tasks and applications, such as translation, text embedding, summarization, and custom tool development?"</Accordion>
<Accordion title="How can I use Cohere's models for various tasks and applications, such as translation, text embedding, summarization, and custom tool development?"></Accordion>

You can access Cohere’s models through our platform (cohere.com) or through various cloud platforms including, but not limited to, SageMaker, Bedrock, Azure AI, and OCI Generative AI. We also offer private deployments. For use-case-specific features, please reference the latest [API documentation](https://docs.cohere.com/reference/about) to learn more about the API, and the [Cookbooks](https://docs.cohere.com/page/cookbooks), which provide starter code for various tasks to aid development.
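
For a concrete starting point, here is a minimal sketch using the Python SDK (assuming `pip install cohere` and an API key stored in a `CO_API_KEY` environment variable; the model names are illustrative and may need updating):

```python
# Minimal sketch only: install with `pip install cohere` and set CO_API_KEY.
# Model names are illustrative; check the models page for current versions.
import os
import cohere

co = cohere.Client(api_key=os.environ["CO_API_KEY"])

# Generative task (e.g. summarization) via the Chat endpoint.
chat = co.chat(
    model="command-r-plus",
    message="Summarize in one sentence: Cohere builds enterprise-focused language models.",
)
print(chat.text)

# Representation task (e.g. text embedding) via the Embed endpoint.
emb = co.embed(
    model="embed-english-v3.0",
    texts=["Cohere builds enterprise-focused language models."],
    input_type="search_document",
)
print(len(emb.embeddings[0]))  # embedding dimensionality
```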

<Accordion>"What are some best practices, tips, and techniques for prompt engineering?"</Accordion>
<Accordion title="What are some best practices, tips, and techniques for prompt engineering?"></Accordion>

You can find our prompt engineering recommendations in the following resources:
- [Prompt Engineering Basics](https://cohere.com/llmu/prompt-engineering-basics)
- [Crafting Effective Prompts](https://docs.cohere.com/v1/docs/crafting-effective-prompts)
- [Advanced Prompt Engineering](https://docs.cohere.com/v1/docs/advanced-prompt-engineering-techniques)
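
As a quick illustration of these ideas (a sketch only, not an excerpt from the guides above), a well-structured prompt typically states the task, supplies the relevant context, and pins down the output format:

```python
# Illustrative prompt structure only; the linked guides cover further techniques
# such as few-shot examples, preambles, and output constraints.
import os
import cohere

co = cohere.Client(api_key=os.environ["CO_API_KEY"])

prompt = """You extract action items from meeting notes.

## Notes
Alice will send the Q3 report by Friday. Bob will follow up with the vendor next week.

## Output format
Return a bulleted list, one action item per line, with the owner in parentheses."""

response = co.chat(model="command-r", message=prompt)
print(response.text)
```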

<Accordion>"How can I effectively use and fine-tune models for specific tasks, like data extraction, question answering, and generating content within certain constraints?"</Accordion>
<Accordion title="How can I effectively use and fine-tune models for specific tasks, like data extraction, question answering, and generating content within certain constraints?"></Accordion>

To fine-tune models for tasks like data extraction, question answering, or content generation, it’s important to start by defining your goals and ensuring your data captures the task accurately.

@@ -51,15 +51,15 @@ For more details, you can refer to [Cohere’s fine-tuning guide](https://docs.c

Fine-tuning is a powerful capability, but it takes some effort to get right. You should first understand what you are trying to achieve and then determine whether the data you plan to train on effectively captures that task. The generative models learn from input/output pairs, so they need to see examples of the expected input for your task and the ideal output. For more information, see our [fine-tuning guide](https://docs.cohere.com/v1/docs/chat-improving-the-results).

<Accordion>"What specific models are being developed by Cohere and where can I find detailed information about them?"</Accordion>
<Accordion title="What specific models are being developed by Cohere and where can I find detailed information about them?"></Accordion>

For our latest offerings, you should reference our [models page](https://docs.cohere.com/v1/docs/models).

<Accordion>"Which model should I choose for my specific use case?"</Accordion>
<Accordion title="Which model should I choose for my specific use case?"></Accordion>

This largely depends on your use case. In general, Cohere offers both generative and representation models. The [models page](https://docs.cohere.com/v1/docs/models) has more information on each of these, but many use cases combine several models.

<Accordion>"What are the capabilities of Cohere's models?"</Accordion>
<Accordion title="What are the capabilities of Cohere's models?"></Accordion>

Cohere models cover a wide range of capabilities from advanced generative tasks to semantic search and other representation use cases. All of our models are multilingual and can support use cases from [RAG](https://docs.cohere.com/docs/retrieval-augmented-generation-rag) to [Tool Use](https://docs.cohere.com/docs/tools) and much more.

@@ -72,7 +72,7 @@ For representation tasks, we offer two key models:

Our models perform best when used end-to-end in their intended workflows. For a detailed breakdown of each model, including their latest versions, check our [models page](https://docs.cohere.com/docs/models).

<Accordion>"What are the best practices and resources for building a search system for large PDF documents, and how can I optimize the retrieval process using language models and embeddings?"</Accordion>
<Accordion title="What are the best practices and resources for building a search system for large PDF documents, and how can I optimize the retrieval process using language models and embeddings?"></Accordion>

While this depends on the document structure itself, the best rule of thumb is to split the PDF into its pages and then split each page into chunks that fit within the model's context length.

@@ -84,11 +84,11 @@ You can find further guides on [chunking strategies](https://docs.cohere.com/pag

Cohere’s models offer multilingual capabilities out of the box. You can reference our example notebooks, such as this [RAG example](https://docs.cohere.com/page/basic-rag), to get a better idea of how to piece these models together to build a question-answering application.
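
As a rough sketch of the retrieval side of such a system, assuming the Python SDK (PDF text extraction is elided, and the chunking strategy, model names, and scoring below are illustrative):

```python
# Rough sketch: chunk pages, embed them, then retrieve and rerank at query time.
# PDF text extraction (e.g. with pypdf) is elided; model names are illustrative.
import os
import cohere
import numpy as np

co = cohere.Client(api_key=os.environ["CO_API_KEY"])

pages = ["Text extracted from PDF page 1 ...", "Text extracted from PDF page 2 ..."]

def chunk(text: str, size: int = 512) -> list[str]:
    """Naive fixed-size chunking; production systems usually split on structure."""
    return [text[i : i + size] for i in range(0, len(text), size)]

chunks = [c for page in pages for c in chunk(page)]

# Embed the chunks once; a real system would persist these in a vector store.
doc_embs = np.array(
    co.embed(model="embed-english-v3.0", texts=chunks, input_type="search_document").embeddings
)

# At query time: embed the question and take the most similar chunks by cosine similarity...
query = "What does the report say about quarterly revenue?"
query_emb = np.array(
    co.embed(model="embed-english-v3.0", texts=[query], input_type="search_query").embeddings[0]
)
scores = doc_embs @ query_emb / (np.linalg.norm(doc_embs, axis=1) * np.linalg.norm(query_emb))
candidates = [chunks[i] for i in np.argsort(-scores)[:10]]

# ...then rerank the candidates before passing the top results to Chat for RAG.
reranked = co.rerank(
    model="rerank-english-v3.0",
    query=query,
    documents=candidates,
    top_n=min(3, len(candidates)),
)
top_chunks = [candidates[r.index] for r in reranked.results]
print(top_chunks)
```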

<Accordion>"What are the implications and limitations of using an unsupported language in Command-R, and are there plans to expand language support?"</Accordion>
<Accordion title="What are the implications and limitations of using an unsupported language in Command-R, and are there plans to expand language support?"></Accordion>

We are always looking to expand multilingual support to other languages. Command R/R+ have been exposed to other languages during training, and we encourage you to try them on your use case. If you would like to provide feedback or suggestions on additional languages, please don't hesitate to contact [email protected].

<Accordion>"Which languages are supported by Cohere models?"</Accordion>
<Accordion title="Which languages are supported by Cohere models?"></Accordion>

Cohere’s Command models are optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.

@@ -100,11 +100,11 @@ You can find a full list of languages that are supported by Cohere’s multiling

You can check the range of use cases based on our customer stories [here](https://cohere.com/use-cases).

<Accordion>"What are the best practices for preparing and structuring fine-tuning data, and what are the supported file formats?"</Accordion>
<Accordion title="What are the best practices for preparing and structuring fine-tuning data, and what are the supported file formats?"></Accordion>

You can find the best practices for preparing and structuring fine-tuning data in these three modules: data preparation for [chat fine-tuning](https://docs.cohere.com/docs/chat-preparing-the-data), [classify fine-tuning](https://docs.cohere.com/docs/classify-preparing-the-data), and [rerank fine-tuning](https://docs.cohere.com/docs/rerank-preparing-the-data). The primary supported file formats are JSONL and CSV.
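
As an illustrative sketch of the general shape of these files (the exact field names and required columns are defined in the data-preparation guides linked above):

```python
# Illustrative only: consult the data-preparation guides above for the exact
# schemas. This shows the general shape of a JSONL (chat) and CSV (classify) file.
import csv
import json

chat_examples = [
    {
        "messages": [
            {"role": "System", "content": "You answer billing questions politely."},
            {"role": "User", "content": "Why was I charged twice this month?"},
            {"role": "Chatbot", "content": "Sorry about that! It looks like a duplicate charge; here is how to request a refund ..."},
        ]
    }
]
with open("chat_train.jsonl", "w") as f:
    for example in chat_examples:
        f.write(json.dumps(example) + "\n")

classify_examples = [
    ("I love this product", "positive"),
    ("Terrible support experience", "negative"),
]
with open("classify_train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerows(classify_examples)
```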

<Accordion>"What models are available for fine-tuning using the Cohere platform?"</Accordion>
<Accordion title="What models are available for fine-tuning using the Cohere platform?"></Accordion>

We support fine-tuning for Command R and Command R 08-2024 (to be launched October 1st) on the generative side, and for the Classify and Rerank models on the representation side.

