Commit
Add spec changes
Co-authored-by: Lucas Fayoux <[email protected]>
platform-endpoints and lfayoux committed Aug 27, 2024
1 parent 7c77797 commit 9704841
Showing 13 changed files with 1,379 additions and 98 deletions.
951 changes: 870 additions & 81 deletions cohere-openapi.yaml

Large diffs are not rendered by default.

7 changes: 4 additions & 3 deletions snippets/snippets/python/chat-v2-post/default.py
@@ -5,9 +5,10 @@
 response = co.chat(
     model="command-r-plus",
     messages=[
-        cohere.v2.ChatMessage2_User(
-            content="hello world!"
-        )
+        {
+            "role": "user",
+            "content": "hello world!"
+        }
     ]
 )
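The diff above swaps the typed `cohere.v2.ChatMessage2_User` wrapper for a plain role/content dict. A minimal sketch of building that payload shape as pure data (no API call); `build_user_message` is a hypothetical helper for illustration, not part of the Cohere SDK:

```python
# The new Chat v2 message shape: plain dicts with "role" and "content"
# keys replace the typed cohere.v2.ChatMessage2_User wrapper.
# build_user_message is a hypothetical helper, not a Cohere SDK function.

def build_user_message(text):
    """Return a Chat v2 user message as a plain dict."""
    return {"role": "user", "content": text}

# The resulting list is what gets passed as the `messages` argument.
messages = [build_user_message("hello world!")]
print(messages)
```

Because the messages are plain dicts, they can be constructed, serialized, and validated without importing any SDK types.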

11 changes: 5 additions & 6 deletions snippets/snippets/python/chat-v2-post/documents.py
@@ -5,12 +5,11 @@
 response = co.chat(
     model="command-r-plus",
     messages=[
-        cohere.v2.ChatMessage2_User(
-            content=[
-                cohere.v2.TextContent(text="Who's the best?"),
-            ],
-            documents=[{'id': '1', 'title': 'The best', 'text': 'Cohere is the best!'}]
-        )
+        {
+            "role": "user",
+            "content": "Who's the best?",
+            "documents": [{'id': '1', 'title': 'The best', 'text': 'Cohere is the best!'}]
+        }
     ]
 )
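In the documents variant, grounding documents now travel inside the user message dict rather than in a typed wrapper. A small sketch of assembling that shape; `attach_documents` is a hypothetical helper, not a Cohere SDK function:

```python
# Grounding documents attach directly to the user message dict in v2,
# mirroring the dict shape in documents.py above.
# attach_documents is a hypothetical helper, not part of the Cohere SDK.

def attach_documents(message, documents):
    """Return a copy of a message dict with a documents list attached."""
    return {**message, "documents": list(documents)}

docs = [{"id": "1", "title": "The best", "text": "Cohere is the best!"}]
message = attach_documents({"role": "user", "content": "Who's the best?"}, docs)
print(message)
```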

11 changes: 6 additions & 5 deletions snippets/snippets/python/chat-v2-post/stream.py
@@ -5,12 +5,13 @@
 response = co.chat_stream(
     model="command-r-plus",
     messages=[
-        cohere.v2.ChatMessage2_User(
-            content="hello world!"
-        )
+        {
+            "role": "user",
+            "content": "hello world!"
+        }
     ]
 )

 for event in response:
-    if event.event_type == "text-generation":
-        print(event.text, end='')
+    if event.type == "content-delta":
+        print(event.delta.message.content.text, end='')
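The stream-handling change is twofold: the event discriminator moves from `event.event_type == "text-generation"` to `event.type == "content-delta"`, and the text nests under `event.delta.message.content.text`. A sketch of that consumer loop against mock events; the mock objects below stand in for what `co.chat_stream(...)` yields and are illustrative, not real SDK types:

```python
# Sketch of the new v2 stream handling: text arrives on events with
# type == "content-delta", nested under event.delta.message.content.text.
# The mock events here are illustrative stand-ins, not real SDK objects.
from types import SimpleNamespace


def make_delta(text):
    """Build a mock content-delta event with the nested v2 shape."""
    content = SimpleNamespace(text=text)
    message = SimpleNamespace(content=content)
    return SimpleNamespace(type="content-delta",
                           delta=SimpleNamespace(message=message))


def collect_text(events):
    """Accumulate text from content-delta events, ignoring all others."""
    chunks = []
    for event in events:
        if event.type == "content-delta":
            chunks.append(event.delta.message.content.text)
    return "".join(chunks)


stream = [SimpleNamespace(type="message-start", delta=None),
          make_delta("hello "), make_delta("world!")]
print(collect_text(stream))  # hello world!
```

Filtering on `event.type` first means non-text events (message start/end, tool events) pass through harmlessly without touching the nested delta attributes.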
7 changes: 4 additions & 3 deletions snippets/snippets/python/chat-v2-post/tools.py
Original file line number Diff line number Diff line change
Expand Up @@ -29,9 +29,10 @@
})
],
messages=[
cohere.v2.ChatMessage2_User(
content="Can you provide a sales summary for 29th September 2023, and also give me some details about the products in the 'Electronics' category, for example their prices and stock levels?"
)
{
"role": "user",
"content":"Can you provide a sales summary for 29th September 2023, and also give me some details about the products in the 'Electronics' category, for example their prices and stock levels?"
}
]
)

5 changes: 5 additions & 0 deletions snippets/snippets/requests/chat-v2-post/default.yaml
@@ -0,0 +1,5 @@
model: "command-r"
messages:
  - role: "user"
    content: "Tell me about LLMs"
stream: false
55 changes: 55 additions & 0 deletions snippets/snippets/requests/chat-v2-post/documents.yaml
@@ -0,0 +1,55 @@
model: "command-r"
messages:
  - role: "user"
    content: "Who is more popular: Nsync or Backstreet Boys?"
    documents:
      - title: "CSPC: Backstreet Boys Popularity Analysis - ChartMasters"
        snippet: >-
          ↓ Skip to Main Content
          Music industry – One step closer to being accurate
          CSPC: Backstreet Boys Popularity Analysis
          Hernán Lopez Posted on February 9, 2017 Posted in CSPC 72 Comments Tagged with Backstreet Boys, Boy band
          At one point, Backstreet Boys defined success: massive albums sales across the globe, great singles sales, plenty of chart topping releases, hugely hyped tours and tremendous media coverage.
          It is true that they benefited from extraordinarily good market conditions in all markets. After all, the all-time record year for the music business, as far as revenues in billion dollars are concerned, was actually 1999. That is, back when this five men group was at its peak.
      - title: "CSPC: NSYNC Popularity Analysis - ChartMasters"
        snippet: >-
          ↓ Skip to Main Content
          Music industry – One step closer to being accurate
          CSPC: NSYNC Popularity Analysis
          MJD Posted on February 9, 2018 Posted in CSPC 27 Comments Tagged with Boy band, N'Sync
          At the turn of the millennium three teen acts were huge in the US, the Backstreet Boys, Britney Spears and NSYNC. The latter is the only one we haven’t study so far. It took 15 years and Adele to break their record of 2,4 million units sold of No Strings Attached in its first week alone.
          It wasn’t a fluke, as the second fastest selling album of the Soundscan era prior 2015, was also theirs since Celebrity debuted with 1,88 million units sold.
      - title: "CSPC: Backstreet Boys Popularity Analysis - ChartMasters"
        snippet: >-2
          1997, 1998, 2000 and 2001 also rank amongst some of the very best years.
          Yet the way many music consumers – especially teenagers and young women’s – embraced their output deserves its own chapter. If Jonas Brothers and more recently One Direction reached a great level of popularity during the past decade, the type of success achieved by Backstreet Boys is in a completely different level as they really dominated the business for a few years all over the world, including in some countries that were traditionally hard to penetrate for Western artists.
          We will try to analyze the extent of that hegemony with this new article with final results which will more than surprise many readers.
      - title: "CSPC: NSYNC Popularity Analysis - ChartMasters"
        snippet: >-2
          Was the teen group led by Justin Timberlake really that big? Was it only in the US where they found success? Or were they a global phenomenon?
          As usual, I’ll be using the Commensurate Sales to Popularity Concept in order to relevantly gauge their results. This concept will not only bring you sales information for all NSYNC‘s albums, physical and download singles, as well as audio and video streaming, but it will also determine their true popularity. If you are not yet familiar with the CSPC method, the next page explains it with a short video. I fully recommend watching the video before getting into the sales figures.
stream: false
5 changes: 5 additions & 0 deletions snippets/snippets/requests/chat-v2-post/stream.yaml
@@ -0,0 +1,5 @@
model: "command-r"
messages:
  - role: "user"
    content: "Hello!"
stream: true
32 changes: 32 additions & 0 deletions snippets/snippets/requests/chat-v2-post/tools.yaml
@@ -0,0 +1,32 @@
model: "command-r"
messages:
  - role: "user"
    content: "Tell me about LLMs"
tools:
  - type: function
    function:
      name: query_daily_sales_report
      description: Connects to a database to retrieve overall sales volumes and sales
        information for a given day.
      parameters:
        type: object
        properties:
          day:
            description: Retrieves sales data for this day, formatted as YYYY-MM-DD.
            type: str
        required:
          - day
  - type: function
    function:
      name: query_product_catalog
      description: Connects to a product catalog with information about all the
        products being sold, including categories, prices, and stock levels.
      parameters:
        type: object
        properties:
          category:
            description: Retrieves product information data for all products in this category.
            type: str
        required:
          - category
stream: false
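The tool definitions above follow a JSON-Schema-like `type`/`function`/`parameters` layout. A sketch of assembling the same structure programmatically as a plain dict; `make_tool` is a hypothetical helper for illustration, not part of the Cohere SDK:

```python
# Assemble a function-tool definition in the JSON-Schema-like shape used
# by tools.yaml above. make_tool is a hypothetical helper, not a Cohere
# SDK function; field names mirror the YAML request body.

def make_tool(name, description, properties, required):
    """Build a function tool definition as a plain dict."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(required),
            },
        },
    }

sales_tool = make_tool(
    "query_daily_sales_report",
    "Connects to a database to retrieve overall sales volumes and "
    "sales information for a given day.",
    {"day": {"description": "Retrieves sales data for this day, "
                            "formatted as YYYY-MM-DD.",
             "type": "str"}},
    ["day"],
)
print(sales_tool["function"]["name"])
```

Building tools from a helper like this keeps the nesting consistent across many definitions, which is easy to get wrong when writing the dicts by hand.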
15 changes: 15 additions & 0 deletions snippets/snippets/responses/chat-v2-post/default.yaml
@@ -0,0 +1,15 @@
body:
  id: c14c80c3-18eb-4519-9460-6c92edd8cfb4
  message:
    role: "assistant"
    content:
      - type: "text"
        text: "LLMs stand for Large Language Models, which are a type of neural network model specialized in processing and generating human language. They are designed to understand and respond to natural language input and have become increasingly popular and valuable in recent years.\n\nLLMs are trained on vast amounts of text data, enabling them to learn patterns, grammar, and semantic meanings present in the language. These models can then be used for various natural language processing tasks, such as text generation, summarization, question answering, machine translation, sentiment analysis, and even some aspects of natural language understanding.\n\nSome well-known examples of LLMs include:\n\n1. GPT-3 (Generative Pre-trained Transformer 3) — An open-source LLM developed by OpenAI, capable of generating human-like text and performing various language tasks.\n\n2. BERT (Bidirectional Encoder Representations from Transformers) — A Google-developed LLM that is particularly good at understanding contextual relationships in text, and is widely used for natural language understanding tasks like sentiment analysis and named entity recognition.\n\n3. T5 (Text-to-Text Transfer Transformer) — Also from Google, T5 is a flexible LLM that frames all language tasks as text-to-text problems, where the model learns to generate output text based on input text prompts.\n\n4. RoBERTa (Robustly Optimized BERT Approach) — A variant of BERT that uses additional training techniques to improve performance.\n\n5. DeBERTa (Decoding-enhanced BERT with disentangled attention) — Another variant of BERT that introduces a new attention mechanism.\n\nLLMs have become increasingly powerful and larger in scale, improving the accuracy and sophistication of language tasks. They are also being used as a foundation for developing various applications, including chatbots, content recommendation systems, language translation services, and more. \n\nThe future of LLMs holds the potential for even more sophisticated language technologies, with ongoing research and development focused on enhancing their capabilities, improving efficiency, and exploring their applications in various domains."
  finish_reason: complete
  usage:
    billed_units:
      input_tokens: 5
      output_tokens: 418
    tokens:
      input_tokens: 71
      output_tokens: 418
