diff --git a/README.md b/README.md index 555f051d..a8339c10 100644 --- a/README.md +++ b/README.md @@ -2,10 +2,23 @@ Welcome to the Cohere Cookbook! This repository provides a collection of examples to help you get started with the Cohere API. These examples contain step-by-step guides, with code examples and explanations, to help you understand and use the API effectively. -# Getting Started -The cookbook is grouped into two categories. To get started, go to any of the categories below. You will find more details there, but here's a summary: +# Categories +The cookbook is grouped into two categories. To get started, go to any of the categories below. | Category | Description | | --- | --- | | [Guides](notebooks/guides/) | Tutorials and step-by-step guides covering a range of topics, providing practical guidance and code examples. -| [LLM University](notebooks/llmu/) | Guides for getting started with Cohere, starting with basic usage and progressing to advanced topics. The code companion to the full [LLM University course](https://llm.university/).| \ No newline at end of file +| [LLM University](notebooks/llmu/) | Guides for getting started with Cohere, starting with basic usage and progressing to advanced topics. The code companion to the full [LLM University course](https://llm.university/).| + +# Getting Started +If you are looking for a quick tour of the Cohere API, the following notebooks will help you get up and running. + +- [**Text Generation**](notebooks/llmu/Building_a_Chatbot.ipynb): Get started with the Command R+ model by using Cohere’s Chat endpoint to build a chatbot that can respond to user messages and maintain the context of the conversation. + +- [**Text Embeddings**](notebooks/llmu/Introduction_Text_Embeddings.ipynb): Get started with the Embed model by generating text embeddings for a dataset. Visualize the relationships between documents and explore how to leverage embeddings for semantic search and clustering. 
+ +- [**Retrieval-Augmented Generation**](notebooks/llmu/RAG_with_Chat_Embed_and_Rerank.ipynb): Build a RAG-powered chatbot that can extract relevant information from external documents and produce verifiable, inline citations in its responses. This leverages the Chat endpoint as well as the Embed v3 and Rerank 3 models. + +- [**Tool Use**](notebooks/Vanilla_Tool_Use.ipynb): Tool use allows you to connect LLMs to external tools like search engines, APIs, functions, and databases. In this example, build an assistant that can query sales reports and a product catalog and provide its analysis. + +- [**Multi-Step Tool Use**](notebooks/Data_Analyst_Agent_Cohere_and_Langchain.ipynb): Multi-step tool use allows an LLM to call more than one tool in a sequence of steps, using the results from one tool call in a subsequent step. In this example, build a simple data analyst agent that is able to search the web and run code in a Python interpreter. This agent uses Cohere's Command R+ model and LangChain. diff --git a/notebooks/llmu/Embed_Endpoint.ipynb b/notebooks/llmu/Embed_Endpoint.ipynb index 595f3934..3a816048 100644 --- a/notebooks/llmu/Embed_Endpoint.ipynb +++ b/notebooks/llmu/Embed_Endpoint.ipynb @@ -17,7 +17,7 @@ "source": [ "# The Embed Endpoint\n", "\n", - "In this lab, we'll learn how to analyze a text dataset using Cohere's Embed cohere endpoint. This colab accompanies the [Classify endpoint lesson](https://docs.cohere.com/docs/embed-endpoint/) of LLM University." + "In this lab, we'll learn how to analyze a text dataset using Cohere's Embed endpoint. This colab accompanies the [Embed endpoint lesson](https://docs.cohere.com/docs/embed-endpoint/) of LLM University." 
] }, { diff --git a/notebooks/llmu/Visualizing_Text_Embeddings.ipynb b/notebooks/llmu/Introduction_Text_Embeddings.ipynb similarity index 99% rename from notebooks/llmu/Visualizing_Text_Embeddings.ipynb rename to notebooks/llmu/Introduction_Text_Embeddings.ipynb index 57cf868c..94efba1a 100644 --- a/notebooks/llmu/Visualizing_Text_Embeddings.ipynb +++ b/notebooks/llmu/Introduction_Text_Embeddings.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n", + "\n", " \"Open\n", "" ] @@ -15,23 +15,11 @@ "id": "psRggLwvhi1E" }, "source": [ - "# Visualizing Text Embeddings\n", + "# Introduction to Text Embeddings\n", "\n", "Text embeddings are a useful way to turn text into numbers that capture its meaning and context. In this notebook, you'll learn how to put them into practice using Cohere's [Embed endpoint](https://docs.cohere.com/reference/embed). You'll calculate embeddings for a dataset of sentences, and plot them in the plane to observe graphically that indeed similar sentences are mapped to close points in the embedding. You'll also explore how to leverage embeddings for semantic search and clustering." ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Overview\n", - "\n", - "This notebook has three main sections, each with a corresponding blog post:\n", - "- **Introduction to Text Embeddings** - Understand the intuition behind text embeddings. _Read the accompanying [blog post here](https://txt.cohere.ai/introduction-to-text-embeddings/)._ \n", - "- **Introduction to Semantic Search** - Learn how to use embeddings to build a search capability that surfaces relevant information based on the semantic meaning of a query. _Read the accompanying [blog post here](https://txt.cohere.ai/introduction-to-semantic-search/)._\n", - "- **Clustering with Embeddings** - Learn how to use embeddings to group similar documents into clusters, to discover emerging patterns in the documents. 
_Read the accompanying [blog post here](https://docs.cohere.com/docs/clustering-with-embeddings)._" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -69,13 +57,6 @@ "from sklearn.cluster import KMeans" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Fill in your Cohere API key in the next cell. To do this, begin by [signing up to Cohere](https://os.cohere.ai/) (for free!) if you haven't yet. Then get your API key [here](https://dashboard.cohere.com/api-keys)." - ] - }, { "cell_type": "code", "execution_count": 3, @@ -94,8 +75,6 @@ "\n", "In this section, we understand the intuition behind text embeddings.\n", "\n", - "_Read the accompanying [blog post here](https://txt.cohere.ai/introduction-to-text-embeddings/)._\n", - "\n", "### Step 1: Prepare the Dataset\n", "\n", "We'll work with a subset of the Airline Travel Information System (ATIS) dataset ([source](https://aclanthology.org/H90-1021/)), created based on customer inquiries related to flight bookings, flight departures, arrivals, delays, and cancellations. In the next code cell, we create and preview a dataframe `df` containing 91 queries." 
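The core idea of the renamed Introduction_Text_Embeddings notebook — similar sentences are mapped to nearby points, which a query can then be compared against — can be sketched without calling the API. In the notebook the vectors come from the Embed endpoint; the tiny 3-d vectors and ATIS-style sentences below are made-up stand-ins used only to illustrate the comparison step.

```python
from math import sqrt

# Stand-in vectors: in the notebook these come from co.embed(...).embeddings.
# The two flight-departure sentences are given deliberately similar vectors.
embeddings = {
    "what flights leave from boston":   [0.90, 0.10, 0.00],
    "show me departures out of boston": [0.85, 0.20, 0.05],
    "how much does a ticket cost":      [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

query_text = "what flights leave from boston"
query = embeddings[query_text]
scores = {text: cosine_similarity(query, vec)
          for text, vec in embeddings.items() if text != query_text}

# The paraphrase about departures scores far higher than the pricing question.
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Semantic search over a real dataset is the same loop at scale: embed every document once, embed the query, and return the documents whose vectors score highest.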
diff --git a/notebooks/llmu/Introduction_to_RAG.ipynb b/notebooks/llmu/Introduction_to_RAG.ipynb index dc2fecce..e5cde92d 100644 --- a/notebooks/llmu/Introduction_to_RAG.ipynb +++ b/notebooks/llmu/Introduction_to_RAG.ipynb @@ -215,7 +215,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "metadata": { "id": "CdxeI3XW4yIH" }, @@ -256,7 +256,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/" @@ -269,18 +269,15 @@ "name": "stdout", "output_type": "stream", "text": [ - "The tallest living species of penguin is the emperor penguin (Apteryx australis), which can measure up to 1.6 to 1.8 m (5.2 to 6 ft) when fully grown.\n", + "The tallest living penguins are emperor penguins, which are found only in Antarctica.\n", "\n", "CITATIONS:\n", - "start=45 end=60 text='emperor penguin' document_ids=['doc_0']\n", - "start=61 end=80 text='(Apteryx australis)' document_ids=['doc_0']\n", - "start=106 end=118 text='1.6 to 1.8 m' document_ids=['doc_0']\n", - "start=119 end=132 text='(5.2 to 6 ft)' document_ids=['doc_0']\n", + "start=32 end=48 text='emperor penguins' document_ids=['doc_0']\n", + "start=66 end=85 text='only in Antarctica.' 
document_ids=['doc_1']\n", "\n", "DOCUMENTS:\n", "{'id': 'doc_0', 'text': 'Emperor penguins are the tallest.', 'title': 'Tall penguins'}\n", - "{'id': 'doc_1', 'text': 'Emperor penguins only live in Antarctica.', 'title': 'Penguin habitats'}\n", - "{'id': 'doc_2', 'text': 'Animals are different from plants.', 'title': 'What are animals?'}\n" + "{'id': 'doc_1', 'text': 'Emperor penguins only live in Antarctica.', 'title': 'Penguin habitats'}\n" ] } ], @@ -290,9 +287,9 @@ "\n", "# Generate the response\n", "response = co.chat_stream(message=message,\n", + " model=\"command-r-plus\",\n", " documents=documents)\n", "\n", - "\n", "# Display the response\n", "citations = []\n", "cited_documents = []\n", @@ -302,8 +299,8 @@ " print(event.text, end=\"\")\n", " elif event.event_type == \"citation-generation\":\n", " citations.extend(event.citations)\n", - " elif event.event_type == \"search-results\":\n", - " cited_documents = event.documents\n", + " elif event.event_type == \"stream-end\":\n", + " cited_documents = event.response.documents\n", "\n", "# Display the citations and source documents\n", "if citations:\n", @@ -326,7 +323,16 @@ "name": "python3" }, "language_info": { - "name": "python" + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" } }, "nbformat": 4, diff --git a/notebooks/llmu/RAG_over_Large_Scale_Data.ipynb b/notebooks/llmu/RAG_over_Large_Scale_Data.ipynb index f4f50900..d2d1f330 100644 --- a/notebooks/llmu/RAG_over_Large_Scale_Data.ipynb +++ b/notebooks/llmu/RAG_over_Large_Scale_Data.ipynb @@ -63,30 +63,6 @@ "# Setup" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "fEVf6y-Ffsiu" - }, - "outputs": [], - "source": [ - "#@title Enable text wrapping in Google Colab\n", - "\n", - "from IPython.display import HTML, display\n", - 
"\n", - "def set_css():\n", - " display(HTML('''\n", - " \n", - " '''))\n", - "get_ipython().events.register('pre_run_cell', set_css)" - ] - }, { "cell_type": "code", "execution_count": null, @@ -116,7 +92,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -125,36 +101,37 @@ "id": "hmAgCSbGklDC", "outputId": "f5435191-a7cd-41b4-c780-f3aebebb38fb" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "import cohere\n", "from cohere import ChatConnector\n", - "import os\n", "import uuid\n", "from typing import List, Dict\n", "\n", "co = cohere.Client(\"COHERE_API_KEY\") # Get your API key here: https://dashboard.cohere.com/api-keys" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@title Enable text wrapping in Google Colab\n", + "\n", + "from IPython.display import HTML, display\n", + "\n", + "def set_css():\n", + " display(HTML('''\n", + " \n", + " '''))\n", + "get_ipython().events.register('pre_run_cell', set_css)" + ] + }, { "cell_type": "markdown", "metadata": { @@ -177,7 +154,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -186,26 +163,7 @@ "id": "CC6cSdhnkpS-", "outputId": "ac6dc379-e0d6-4683-9b49-0d9fcdafd8c2" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "class Chatbot:\n", " def __init__(self, connectors: List[str]):\n", @@ -229,13 +187,13 @@ " if message.lower() == \"quit\":\n", " print(\"Ending chat.\")\n", " break\n", - " # else: # Uncomment for Google Colab to avoid printing the same thing twice\n", - " 
# print(f\"User: {message}\") # Uncomment for Google Colab to avoid printing the same thing twice\n", + " else: # If using Google Colab, remove this line to avoid printing the same thing twice\n", + " print(f\"User: {message}\") # If using Google Colab, remove this line to avoid printing the same thing twice\n", "\n", " # Generate response\n", " response = co.chat_stream(\n", " message=message,\n", - " model=\"command-r\",\n", + " model=\"command-r-plus\",\n", " conversation_id=self.conversation_id,\n", " connectors=self.connectors,\n", " )\n", @@ -251,8 +209,8 @@ " print(event.text, end=\"\")\n", " elif event.event_type == \"citation-generation\":\n", " citations.extend(event.citations)\n", - " elif event.event_type == \"search-results\":\n", - " cited_documents = event.documents\n", + " elif event.event_type == \"stream-end\":\n", + " cited_documents = event.response.documents\n", "\n", " # Display citations and source documents\n", " if citations:\n", @@ -303,7 +261,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -313,24 +271,6 @@ "outputId": "49da15ac-7606-48c6-dca6-b3d462a13361" }, "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, { "name": "stdout", "output_type": "stream", @@ -338,104 +278,46 @@ "User: What is chain of thought prompting\n", "\n", "Chatbot:\n", - "Chain of thought prompting is a technique used with large language models (LLMs) to enhance their reasoning capabilities. The LLM is presented with a few examples demonstrating a step-by-step reasoning process leading to a correct answer. This method can be employed when dealing with complex problems that require breaking down into smaller, more manageable parts. 
\n", - "\n", - "For instance, if you were to ask an LLM to solve a linear equation, you would first show how to solve this type of equation by outlining the intermediate steps. The LLM would then attempt to solve the given problem using a similar step-by-step approach.\n", - "\n", - "This prompting technique is particularly useful for arithmetic, commonsense, and symbolic reasoning tasks and can be combined with few-shot prompting for better results on more complex problems.\n", - "\n", - "CITATIONS:\n", - "start=52 end=73 text='large language models' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n", - "start=74 end=80 text='(LLMs)' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n", - "start=84 end=121 text='enhance their reasoning capabilities.' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=148 end=162 text='a few examples' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=179 end=209 text='step-by-step reasoning process' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=223 end=238 text='correct answer.' 
document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=285 end=301 text='complex problems' document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7']\n", - "start=334 end=365 text='smaller, more manageable parts.' document_ids=['web-search_4', 'web-search_6']\n", - "start=411 end=434 text='solve a linear equation' document_ids=['web-search_2']\n", - "start=452 end=491 text='show how to solve this type of equation' document_ids=['web-search_2']\n", - "start=495 end=528 text='outlining the intermediate steps.' document_ids=['web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=548 end=621 text='attempt to solve the given problem using a similar step-by-step approach.' document_ids=['web-search_2', 'web-search_4']\n", - "start=675 end=685 text='arithmetic' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=687 end=698 text='commonsense' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=704 end=728 text='symbolic reasoning tasks' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=740 end=772 text='combined with few-shot prompting' document_ids=['web-search_1', 'web-search_2']\n", - "start=777 end=817 text='better results on more complex problems.' 
document_ids=['web-search_1']\n", - "\n", - "DOCUMENTS:\n", - "{'id': 'web-search_0', 'text': 'Skip to main content\\n\\nWe gratefully acknowledge su...'}\n", - "{'id': 'web-search_1', 'text': 'General Tips for Designing Prompts\\n\\nChain-of-Thoug...'}\n", - "{'id': 'web-search_2', 'text': 'BlogDocsCommunityHackAPrompt Playground\\n\\nLanguage ...'}\n", - "{'id': 'web-search_3', 'text': 'We now support using Microsoft Azure hosted OpenAI...'}\n", - "{'id': 'web-search_4', 'text': 'Comprehensive Guide to Chain-of-Thought Prompting\\n...'}\n", - "{'id': 'web-search_5', 'text': 'ResourcesArticleChain-of-Thought Prompting: Helpin...'}\n", - "{'id': 'web-search_6', 'text': 'Let’s Think Step by Step: Advanced Reasoning in Bu...'}\n", - "{'id': 'web-search_7', 'text': 'Unraveling the Power of Chain-of-Thought Prompting...'}\n", - "{'id': 'web-search_8', 'text': 'AboutPressCopyrightContact usCreatorsAdvertiseDeve...'}\n", - "{'id': 'web-search_9', 'text': 'Skip to main content\\n\\nLanguage Models Perform Reas...'}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_10', 'text': \"\\ufeffChaining Prompts\\r\\nIn this chapter, you'll learn a...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_11', 'text': \"\\ufeffConstructing Prompts\\r\\nIn this chapter, you'll lea...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_12', 'text': \"\\ufeffUse Case Patterns\\r\\nIn this chapter, you'll learn ...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_13', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_14', 'text': \"\\ufeffValidating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", - "\n", - "----------------------------------------------------------------------------------------------------\n", - "\n", - "User: tell me more\n", - "\n", - "Chatbot:\n", - "Chain of thought prompting is a technique that guides LLMs to follow a reasoning process by providing them with a few examples that clearly outline each step of the reasoning. 
This method, also known as few-shot prompting, is employed for complex problems that require a series of reasoning steps to solve. \n", - "\n", - "The LLM is expected to study the example and follow a similar pattern when answering, breaking down the problem into smaller, more manageable parts. This approach not only improves the LLM's performance on complex tasks but also offers interpretability into its thought process.\n", + "Chain-of-Thought (CoT) prompting is a technique used to guide Large Language Models (LLMs) to follow a reasoning process when dealing with complex problems. This is done by providing the model with a few examples where the step-by-step reasoning is clearly laid out. The model is then expected to follow that \"chain of thought\" reasoning to get to the correct answer.\n", "\n", - "Few-shot prompting is distinct from zero-shot prompting, where the LLM is only given the problem and no examples. Zero-shot chain-of-thought prompting, however, involves adding a phrase like \"Let's think step by step\" to the original prompt to guide the LLM's reasoning. \n", + "CoT prompting is a prompt engineering technique that aims to improve language models' performance on tasks requiring logic, calculation and decision-making by structuring the input prompt in a way that mimics human reasoning.\n", "\n", - "Chain of thought prompting has shown remarkable effectiveness in improving LLMs' abilities in arithmetic, commonsense, and symbolic reasoning tasks. Nevertheless, it is not without its limitations. For instance, it works best with larger models, typically those with around 100 billion parameters, as smaller models often produce illogical thought chains.\n", + "To construct a chain-of-thought prompt, a user typically appends an instruction such as \"Describe your reasoning in steps\" or \"Let's think step by step\" to the end of their query to a large language model (LLM). 
This encourages the model to generate intermediate steps before providing a final answer.\n", "\n", "CITATIONS:\n", - "start=47 end=58 text='guides LLMs' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=71 end=88 text='reasoning process' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=114 end=126 text='few examples' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=140 end=175 text='outline each step of the reasoning.' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=203 end=221 text='few-shot prompting' document_ids=['web-search_2', 'web-search_3', 'web-search_4', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=239 end=255 text='complex problems' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=271 end=296 text='series of reasoning steps' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=363 end=378 text='similar pattern' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=426 end=457 text='smaller, more manageable parts.' 
document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=481 end=511 text=\"improves the LLM's performance\" document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=545 end=561 text='interpretability' document_ids=['web-search_2', 'web-search_4', 'web-search_5', 'web-search_6']\n", - "start=625 end=644 text='zero-shot prompting' document_ids=['web-search_4', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=703 end=739 text='Zero-shot chain-of-thought prompting' document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=780 end=806 text='\"Let\\'s think step by step\"' document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_6', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=833 end=859 text=\"guide the LLM's reasoning.\" document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", - "start=956 end=966 text='arithmetic' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=968 end=979 text='commonsense' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=985 end=1010 text='symbolic reasoning tasks.' 
document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=1093 end=1106 text='larger models' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=1136 end=1158 text='100 billion parameters' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7', 'web-search_9']\n", - "start=1163 end=1217 text='smaller models often produce illogical thought chains.' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7']\n", + "start=17 end=22 text='(CoT)' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_5', 'web-search_7', 'web-search_8', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n", + "start=56 end=61 text='guide' document_ids=['web-search_3', 'web-search_4', 'web-search_7']\n", + "start=62 end=83 text='Large Language Models' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", + "start=84 end=90 text='(LLMs)' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", + "start=94 end=120 text='follow a reasoning process' document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_7']\n", + "start=139 end=156 text='complex problems.' document_ids=['web-search_3', 'web-search_5', 'web-search_7']\n", + "start=200 end=212 text='few examples' document_ids=['web-search_1', 'web-search_3', 'web-search_5', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", + "start=223 end=266 text='step-by-step reasoning is clearly laid out.' 
document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", + "start=297 end=337 text='follow that \"chain of thought\" reasoning' document_ids=['web-search_3', 'web-search_5']\n", + "start=341 end=367 text='get to the correct answer.' document_ids=['web-search_3', 'web-search_4', 'web-search_5']\n", + "start=388 end=416 text='prompt engineering technique' document_ids=['web-search_4', 'web-search_5']\n", + "start=430 end=466 text=\"improve language models' performance\" document_ids=['web-search_4']\n", + "start=486 end=524 text='logic, calculation and decision-making' document_ids=['web-search_4']\n", + "start=528 end=556 text='structuring the input prompt' document_ids=['web-search_4']\n", + "start=571 end=594 text='mimics human reasoning.' document_ids=['web-search_4']\n", + "start=684 end=718 text='\"Describe your reasoning in steps\"' document_ids=['web-search_4', 'demo-conn-gdrive-6bfrp6_11']\n", + "start=722 end=748 text='\"Let\\'s think step by step\"' document_ids=['web-search_1', 'web-search_3', 'web-search_5', 'web-search_8', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n", + "start=846 end=864 text='intermediate steps' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n", "\n", "DOCUMENTS:\n", "{'id': 'web-search_0', 'text': 'Skip to main content\\n\\nWe gratefully acknowledge su...'}\n", "{'id': 'web-search_1', 'text': 'General Tips for Designing Prompts\\n\\nChain-of-Thoug...'}\n", "{'id': 'web-search_2', 'text': 'BlogDocsCommunityHackAPrompt Playground\\n\\nLanguage ...'}\n", "{'id': 'web-search_3', 'text': 'We now support using Microsoft Azure hosted OpenAI...'}\n", - "{'id': 'web-search_4', 'text': 'Comprehensive Guide to Chain-of-Thought Prompting\\n...'}\n", - "{'id': 'web-search_5', 'text': 'ResourcesArticleChain-of-Thought Prompting: Helpin...'}\n", - 
"{'id': 'web-search_6', 'text': 'Let’s Think Step by Step: Advanced Reasoning in Bu...'}\n", - "{'id': 'web-search_7', 'text': 'Unraveling the Power of Chain-of-Thought Prompting...'}\n", - "{'id': 'web-search_8', 'text': 'AboutPressCopyrightContact usCreatorsAdvertiseDeve...'}\n", - "{'id': 'web-search_9', 'text': 'Skip to main content\\n\\nLanguage Models Perform Reas...'}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_10', 'text': \"\\ufeffChaining Prompts\\r\\nIn this chapter, you'll learn a...\"}\n", + "{'id': 'web-search_5', 'text': 'Comprehensive Guide to Chain-of-Thought Prompting\\n...'}\n", + "{'id': 'web-search_7', 'text': 'ResourcesArticleChain-of-Thought Prompting: Helpin...'}\n", + "{'id': 'web-search_8', 'text': 'Skip to main content\\n\\nScan this QR code to downloa...'}\n", "{'id': 'demo-conn-gdrive-6bfrp6_11', 'text': \"\\ufeffConstructing Prompts\\r\\nIn this chapter, you'll lea...\"}\n", "{'id': 'demo-conn-gdrive-6bfrp6_12', 'text': \"\\ufeffUse Case Patterns\\r\\nIn this chapter, you'll learn ...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_13', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_14', 'text': \"\\ufeffValidating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", + "{'id': 'web-search_4', 'text': 'Tech Accelerator What is generative AI? Everything...'}\n", "\n", "----------------------------------------------------------------------------------------------------\n", "\n", - "User: quit\n", "Ending chat.\n" ] } @@ -485,6 +367,8 @@ "source": [ "### Chunking\n", "\n", + "The Command R family of models supports a large context length (128k tokens), offering ample room for retrieved documents. If this context length is exceeded, however, automated chunking is activated. \n", + "\n", "With every document sent by the connectors, the first step is to split it into smaller chunks. 
Each chunk is between 100 and 400 words, and sentences are kept intact where possible.\n" ] }, @@ -540,7 +424,16 @@ "name": "python3" }, "language_info": { - "name": "python" + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" } }, "nbformat": 4, diff --git a/notebooks/llmu/RAG_with_Chat_Embed_and_Rerank.ipynb b/notebooks/llmu/RAG_with_Chat_Embed_and_Rerank.ipynb index bfea23ba..e8b12502 100644 --- a/notebooks/llmu/RAG_with_Chat_Embed_and_Rerank.ipynb +++ b/notebooks/llmu/RAG_with_Chat_Embed_and_Rerank.ipynb @@ -93,7 +93,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "f3a03a57", "metadata": { "id": "f3a03a57" @@ -112,7 +112,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "Dx1cncziCWBB", "metadata": { "cellView": "form", @@ -169,7 +169,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "3dca4a88", "metadata": { "colab": { @@ -179,26 +179,7 @@ "id": "3dca4a88", "outputId": "b05da1ee-0456-4387-c232-a43e0ffed54c" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "raw_documents = [\n", " {\n", @@ -244,7 +225,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "7c33412c", "metadata": { "colab": { @@ -254,26 +235,7 @@ "id": "7c33412c", "outputId": "cf04f8ed-8000-4433-f976-2d37747f21e7" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "class Vectorstore:\n", " \"\"\"\n", @@ -372,13 +334,16 @@ " doc_ids = self.idx.knn_query(query_emb, 
k=self.retrieve_top_k)[0][0]\n", "\n", " # Reranking\n", - " docs_to_rerank = [self.docs[doc_id][\"text\"] for doc_id in doc_ids]\n", + " rank_fields = [\"title\", \"text\"] # We'll use the title and text fields for reranking\n", + "\n", + " docs_to_rerank = [self.docs[doc_id] for doc_id in doc_ids]\n", "\n", " rerank_results = co.rerank(\n", " query=query,\n", " documents=docs_to_rerank,\n", " top_n=self.rerank_top_k,\n", - " model=\"rerank-english-v2.0\",\n", + " model=\"rerank-english-v3.0\",\n", + " rank_fields=rank_fields\n", " )\n", "\n", " doc_ids_reranked = [doc_ids[result.index] for result in rerank_results.results]\n", @@ -408,7 +373,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "4643e630", "metadata": { "colab": { @@ -419,49 +384,14 @@ "outputId": "fe01fcb6-3574-4322-d8d0-57d37aad397d" }, "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Loading documents...\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "[nltk_data] Downloading package punkt to /root/nltk_data...\n", - "[nltk_data] Unzipping tokenizers/punkt.zip.\n", - "[nltk_data] Downloading package averaged_perceptron_tagger to\n", - "[nltk_data] /root/nltk_data...\n", - "[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.\n" - ] - }, { "name": "stdout", "output_type": "stream", "text": [ + "Loading documents...\n", "Embedding document chunks...\n", "Indexing document chunks...\n", - "Indexing complete with 136 document chunks.\n" + "Indexing complete with 134 document chunks.\n" ] } ], @@ -508,7 +438,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "82617b91", "metadata": { "colab": { @@ -519,24 +449,6 @@ "outputId": "7f1f2bc8-8ed9-4190-bd6b-7af2d9dc1980" }, "outputs": [ - { - "data": { - "text/html": [ - 
"\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, { "data": { "text/plain": [ @@ -547,11 +459,11 @@ " 'text': \"What you learned in this chapter is simple self-attention. However, we can do much better than that. There is a method called multi-head attention, in which one doesn't only consider one embedding, but several different ones. These are all obtained from the original by transforming it in different ways. Multi-head attention has been very successful at the task of adding context to text. If you'd like to learn more about the self and multi-head attention, you can check out the following two\",\n", " 'url': 'https://docs.cohere.com/docs/the-attention-mechanism'},\n", " {'title': 'Transformer Models',\n", - " 'text': 'Attention helps give context to each word, based on the other words in the sentence (or text).',\n", + " 'text': 'Attention is a very useful technique that helps language models understand the context. In order to understand how attention works, consider the following two sentences:\\n\\nSentence 1: The bank of the river.\\n\\nSentence 2: Money in the bank.',\n", " 'url': 'https://docs.cohere.com/docs/transformer-models'}]" ] }, - "execution_count": 8, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } @@ -606,7 +518,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "d2c15a1f", "metadata": { "colab": { @@ -616,26 +528,7 @@ "id": "d2c15a1f", "outputId": "8daa9159-338c-45ec-e9ed-830aedcdf0d8" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "class Chatbot:\n", " def __init__(self, vectorstore: Vectorstore):\n", @@ -662,8 +555,8 @@ " if message.lower() == \"quit\":\n", " print(\"Ending chat.\")\n", " break\n", - " # else: # Uncomment for Google Colab to avoid printing the same 
thing twice\n", - " # print(f\"User: {message}\") # Uncomment for Google Colab to avoid printing the same thing twice\n", + " else: # If using Google Colab, remove this line to avoid printing the same thing twice\n", + " print(f\"User: {message}\") # If using Google Colab, remove this line to avoid printing the same thing twice\n", "\n", " # Generate search queries (if any)\n", " response = co.chat(message=message,\n", @@ -682,7 +575,7 @@ " # Use document chunks to respond\n", " response = co.chat_stream(\n", " message=message,\n", - " model=\"command-r\",\n", + " model=\"command-r-plus\",\n", " documents=documents,\n", " conversation_id=self.conversation_id,\n", " )\n", @@ -691,7 +584,7 @@ " else:\n", " response = co.chat_stream(\n", " message=message,\n", - " model=\"command-r\",\n", + " model=\"command-r-plus\",\n", " conversation_id=self.conversation_id,\n", " )\n", "\n", @@ -706,8 +599,8 @@ " print(event.text, end=\"\")\n", " elif event.event_type == \"citation-generation\":\n", " citations.extend(event.citations)\n", - " elif event.event_type == \"search-results\":\n", - " cited_documents = event.documents\n", + " elif event.event_type == \"stream-end\":\n", + " cited_documents = event.response.documents\n", "\n", " # Display citations and source documents\n", " if citations:\n", @@ -750,7 +643,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "42d3f345", "metadata": { "colab": { @@ -761,24 +654,6 @@ "outputId": "8b935c8b-b1d4-4913-bdf8-73ba503402b8" }, "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, { "name": "stdout", "output_type": "stream", @@ -786,73 +661,60 @@ "User: Hello, I have a question\n", "\n", "Chatbot:\n", - "Hello! What's your question? I'm here to help you in any way I can.\n", + "Of course! I am here to assist you. 
Please go ahead with your question, and I will do my best to provide a helpful response.\n", "----------------------------------------------------------------------------------------------------\n", "\n", "User: What’s the difference between word and sentence embeddings\n", "Retrieving information...\n", "Chatbot:\n", - "Word embeddings associate words with lists of numbers. Similar words are assigned numbers that are mathematically close while dissimilar words are assigned numbers that are far apart. \n", - "\n", - "Sentence embeddings do the same thing as word embeddings, but for sentences. Each sentence is associated with a vector of numbers in a coherent way. This means that similar sentences are assigned similar vectors and dissimilar sentences are assigned different vectors.\n", + "Word embeddings are a way to associate words with lists of numbers (vectors) so that similar words are associated with numbers that are close by, and dissimilar words with numbers that are far away from each other. Sentence embeddings do the same thing but associate a vector to an entire sentence. Similar sentences are assigned similar vectors, and different sentences are assigned different vectors. Each coordinate in the vector identifies some property of the sentence.\n", "\n", "CITATIONS:\n", - "start=0 end=15 text='Word embeddings' document_ids=['doc_0']\n", - "start=16 end=54 text='associate words with lists of numbers.' document_ids=['doc_0']\n", - "start=55 end=68 text='Similar words' document_ids=['doc_0']\n", - "start=82 end=119 text='numbers that are mathematically close' document_ids=['doc_0']\n", - "start=126 end=142 text='dissimilar words' document_ids=['doc_0']\n", - "start=156 end=183 text='numbers that are far apart.' 
document_ids=['doc_0']\n", - "start=186 end=205 text='Sentence embeddings' document_ids=['doc_0', 'doc_2']\n", - "start=213 end=242 text='same thing as word embeddings' document_ids=['doc_0', 'doc_2']\n", - "start=263 end=276 text='Each sentence' document_ids=['doc_0', 'doc_2']\n", - "start=298 end=315 text='vector of numbers' document_ids=['doc_0', 'doc_2']\n", - "start=321 end=329 text='coherent' document_ids=['doc_2']\n", - "start=351 end=368 text='similar sentences' document_ids=['doc_0', 'doc_2']\n", - "start=382 end=397 text='similar vectors' document_ids=['doc_0', 'doc_2']\n", - "start=402 end=422 text='dissimilar sentences' document_ids=['doc_0', 'doc_2']\n", - "start=436 end=454 text='different vectors.' document_ids=['doc_0', 'doc_2']\n", + "start=0 end=15 text='Word embeddings' document_ids=['doc_0', 'doc_1']\n", + "start=29 end=76 text='associate words with lists of numbers (vectors)' document_ids=['doc_1']\n", + "start=85 end=144 text='similar words are associated with numbers that are close by' document_ids=['doc_1']\n", + "start=150 end=214 text='dissimilar words with numbers that are far away from each other.' document_ids=['doc_1']\n", + "start=215 end=234 text='Sentence embeddings' document_ids=['doc_0', 'doc_1', 'doc_2']\n", + "start=257 end=298 text='associate a vector to an entire sentence.' document_ids=['doc_0', 'doc_1', 'doc_2']\n", + "start=299 end=345 text='Similar sentences are assigned similar vectors' document_ids=['doc_0']\n", + "start=351 end=402 text='different sentences are assigned different vectors.' document_ids=['doc_0']\n", + "start=403 end=474 text='Each coordinate in the vector identifies some property of the sentence.' document_ids=['doc_0', 'doc_2']\n", "\n", "DOCUMENTS:\n", - "{'id': 'doc_0', 'text': 'In the previous chapters, you learned about word and sentence embeddings and similarity between words and sentences. 
In short, a word embedding is a way to associate words with lists of numbers (vectors) in such a way that similar words are associated with numbers that are close by, and dissimilar words with numbers that are far away from each other. A sentence embedding does the same thing, but associating a vector to every sentence. Similarity is a way to measure how similar two words (or', 'title': 'The Attention Mechanism', 'url': 'https://docs.cohere.com/docs/the-attention-mechanism'}\n", - "{'id': 'doc_1', 'text': 'Sentence embeddings\\n\\nSo word embeddings seem to be pretty useful, but in reality, human language is much more complicated than simply a bunch of words put together. Human language has structure, sentences, etc. How would one be able to represent, for instance, a sentence? Well, here’s an idea. How about the sums of scores of all the words? For example, say we have a word embedding that assigns the following scores to these words:\\n\\nNo: [1,0,0,0]\\n\\nI: [0,2,0,0]\\n\\nAm: [-1,0,1,0]\\n\\nGood: [0,0,1,3]', 'title': 'Text Embeddings', 'url': 'https://docs.cohere.com/docs/text-embeddings'}\n", - "{'id': 'doc_2', 'text': 'This is where sentence embeddings come into play. A sentence embedding is just like a word embedding, except it associates every sentence with a vector full of numbers, in a coherent way. By coherent, I mean that it satisfies similar properties as a word embedding. For instance, similar sentences are assigned to similar vectors, different sentences are assigned to different vectors, and most importantly, each of the coordinates of the vector identifies some (whether clear or obscure) property of', 'title': 'Text Embeddings', 'url': 'https://docs.cohere.com/docs/text-embeddings'}\n", + "{'id': 'doc_0', 'text': 'This is where sentence embeddings come into play. A sentence embedding is just like a word embedding, except it associates every sentence with a vector full of numbers, in a coherent way. 
By coherent, I mean that it satisfies similar properties as a word embedding. For instance, similar sentences are assigned to similar vectors, different sentences are assigned to different vectors, and most importantly, each of the coordinates of the vector identifies some (whether clear or obscure) property of', 'title': 'Text Embeddings', 'url': 'https://docs.cohere.com/docs/text-embeddings'}\n", + "{'id': 'doc_1', 'text': 'In the previous chapters, you learned about word and sentence embeddings and similarity between words and sentences. In short, a word embedding is a way to associate words with lists of numbers (vectors) in such a way that similar words are associated with numbers that are close by, and dissimilar words with numbers that are far away from each other. A sentence embedding does the same thing, but associating a vector to every sentence. Similarity is a way to measure how similar two words (or', 'title': 'The Attention Mechanism', 'url': 'https://docs.cohere.com/docs/the-attention-mechanism'}\n", + "{'id': 'doc_2', 'text': 'Sentence embeddings are even more powerful, as they assign a vector of numbers to each sentence, in a way that these numbers also carry important properties of the sentence. One of the Cohere embeddings assigns a vector of length 4096 (i.e., a list of 4096 numbers) to each sentence. Furthermore, multilingual embedding does this for sentences in more than 100 languages. 
In this way, the sentence “Hello, how are you?” and its corresponding French translation, “Bonjour, comment ça va?” will be', 'title': 'Similarity Between Words and Sentences', 'url': 'https://docs.cohere.com/docs/similarity-between-words-and-sentences'}\n", "\n", "----------------------------------------------------------------------------------------------------\n", "\n", "User: And what are their similarities\n", "Retrieving information...\n", "Chatbot:\n", - "The similarity between word and sentence embeddings lies in the fact that they both measure similarity between items. For example, if two sentences are very similar, their corresponding vectors will also be similar. This is best illustrated with an example: \n", - "\n", - "The similarities between the following sentences can be computed using sentence embeddings:\n", - "1. Who was the 16th president of the US and fought in the American Civil War?\n", - "2. The American Civil War saw the 16th President, Abraham Lincoln, attempt to preserve the Union.\n", - "3. Lincoln was the 16th president of the United States.\n", - "\n", - "The similarity between sentences 1 and 2 is 6738.2859, which is very high. On the other hand, the similarities between sentences 1 and 3, and 2 and 3, are much lower at -122.2267 and -3.4946 respectively.\n", + "Word and sentence embeddings both use vectors to represent words or sentences in a numerical space. The similarity between embeddings is calculated using a dot product. The similarity between an embedding and itself is 1, and the similarity between an irrelevant word or sentence and any other word or sentence is 0.\n", "\n", "CITATIONS:\n", - "start=84 end=102 text='measure similarity' document_ids=['doc_0', 'doc_2']\n", - "start=131 end=215 text='if two sentences are very similar, their corresponding vectors will also be similar.' 
document_ids=['doc_0']\n", - "start=589 end=638 text='similarity between sentences 1 and 2 is 6738.2859' document_ids=['doc_1']\n", - "start=683 end=734 text='similarities between sentences 1 and 3, and 2 and 3' document_ids=['doc_1']\n", - "start=754 end=775 text='-122.2267 and -3.4946' document_ids=['doc_1']\n", + "start=38 end=45 text='vectors' document_ids=['doc_0', 'doc_3', 'doc_4']\n", + "start=59 end=77 text='words or sentences' document_ids=['doc_0', 'doc_3', 'doc_4']\n", + "start=83 end=99 text='numerical space.' document_ids=['doc_0', 'doc_3', 'doc_4']\n", + "start=104 end=114 text='similarity' document_ids=['doc_5']\n", + "start=156 end=168 text='dot product.' document_ids=['doc_5']\n", + "start=173 end=220 text='similarity between an embedding and itself is 1' document_ids=['doc_0', 'doc_4']\n", + "start=230 end=316 text='similarity between an irrelevant word or sentence and any other word or sentence is 0.' document_ids=['doc_0', 'doc_4']\n", "\n", "DOCUMENTS:\n", - "{'id': 'doc_0', 'text': 'Notice that these sentences are all very similar. In particular, the three highlighted sentences pretty much have the same meaning. If you look at their corresponding vectors, these are also really similar. That is exactly what an embedding should do.', 'title': 'Text Embeddings', 'url': 'https://docs.cohere.com/docs/text-embeddings'}\n", - "{'id': 'doc_1', 'text': 'And the results are:\\n\\nThe similarity between sentences 1 and 2: 6738.2858668486715\\n\\nThe similarity between sentences 1 and 3: -122.22666955510499\\n\\nThe similarity between sentences 2 and 3: -3.494608113647928\\n\\nThese results certainly confirm our predictions. The similarity between sentences 1 and 2 is 6738, which is high. 
The similarities between sentences 1 and 3, and 2 and 3, are -122 and -3.5 (dot products are allowed to be negative too!), which are much lower.', 'title': 'Similarity Between Words and Sentences', 'url': 'https://docs.cohere.com/docs/similarity-between-words-and-sentences'}\n", - "{'id': 'doc_2', 'text': 'The similarity between each sentence and itself is 1 (the diagonal in the plot), which is consistent with our expectations. Furthermore, a sentence and itself represent the same point in space, which gives an angle of 0 with the origin, so it makes sense that the similarity is the cosine of 0, which is 1!\\n\\nConclusion', 'title': 'Similarity Between Words and Sentences', 'url': 'https://docs.cohere.com/docs/similarity-between-words-and-sentences'}\n", + "{'id': 'doc_0', 'text': 'But let me add some numbers to this reasoning to make it more clear. Imagine that we calculate similarities for the words in each sentence, and we get the following:\\n\\nThis similarity makes sense in the following ways:\\n\\nThe similarity between each word and itself is 1.\\n\\nThe similarity between any irrelevant word (“the”, “of”, etc.) and any other word is 0.\\n\\nThe similarity between “bank” and “river” is 0.11.\\n\\nThe similarity between “bank” and “money” is 0.25.', 'title': 'The Attention Mechanism', 'url': 'https://docs.cohere.com/docs/the-attention-mechanism'}\n", + "{'id': 'doc_3', 'text': 'assigned very similar numbers, as they have the same semantic meaning.', 'title': 'Similarity Between Words and Sentences', 'url': 'https://docs.cohere.com/docs/similarity-between-words-and-sentences'}\n", + "{'id': 'doc_4', 'text': 'But let me add some numbers to this reasoning to make it more clear. Imagine that we calculate similarities for the words in each sentence, and we get the following:\\n\\nThis similarity makes sense in the following ways:\\n\\nThe similarity between each word and itself is 1.\\n\\nThe similarity between any irrelevant word (“the”, “of”, etc.) 
and any other word is 0.\\n\\nThe similarity between “bank” and “river” is 0.11.\\n\\nThe similarity between “bank” and “money” is 0.25.', 'title': 'The Attention Mechanism', 'url': 'https://docs.cohere.com/docs/the-attention-mechanism'}\n", + "{'id': 'doc_5', 'text': 'Dot Product Similarity\\n\\nLet’s calculate the dot products between the three sentences. The following line of code will do it.\\n\\nAnd the results are:\\n\\nThe similarity between sentences 1 and 2 (0.8188) is much larger than the similarities between the other pairs. This confirms our predictions.\\n\\nJust for consistency, we also calculate the similarities between each sentence and itself, to confirm that a sentence and itself has the highest similarity score.', 'title': 'Similarity Between Words and Sentences', 'url': 'https://docs.cohere.com/docs/similarity-between-words-and-sentences'}\n", "\n", "----------------------------------------------------------------------------------------------------\n", "\n", "User: What do you know about 5G networks\n", "Retrieving information...\n", "Chatbot:\n", - "Unfortunately, I could not find any information about 5G networks in the available documentation. However, I can tell you about the 4G networks which have preceded 5G. 4G networks enable a high-speed connection and were designed to support a wide range of functions on mobile devices, including video streaming and high-quality music streaming. They also support a wider coverage area and better spectral efficiency, allowing more devices to connect simultaneously.\n", + "I'm sorry, but I do not have any information about 5G networks. 
Can I help you with anything else?\n", "----------------------------------------------------------------------------------------------------\n", "\n", - "User: quit\n", "Ending chat.\n" ] } @@ -879,16 +741,6 @@ "- **State management**: The endpoint maintains the state of the conversation via the `conversation_id` parameter, for example, by correctly responding to the vague follow-up message \"And what are their similarities\".\n", "- **Response synthesis**: The model can decide whether the retrieved documents provide the information required to answer a user message. For example, when asked \"What do you know about 5G networks\", the chatbot still retrieves external information from the index. However, it doesn’t use any of that information in its response, as none of the retrieved documents are relevant to the question." ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "-JBWZVz9ObcV", - "metadata": { - "id": "-JBWZVz9ObcV" - }, - "outputs": [], - "source": [] } ], "metadata": { diff --git a/notebooks/llmu/RAG_with_Connectors.ipynb b/notebooks/llmu/RAG_with_Connectors.ipynb index 5c100ee9..0a4bd1c1 100644 --- a/notebooks/llmu/RAG_with_Connectors.ipynb +++ b/notebooks/llmu/RAG_with_Connectors.ipynb @@ -82,7 +82,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "90f134ba", "metadata": { "colab": { @@ -92,26 +92,7 @@ "id": "90f134ba", "outputId": "f2236cef-f274-4100-dbcd-333b826f5ee8" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "import uuid\n", "import cohere\n", @@ -123,7 +104,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "EavECqgqNJ8g", "metadata": { "cellView": "form", @@ -170,7 +151,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "e52d521d", "metadata": {
"colab": { @@ -180,26 +161,7 @@ "id": "e52d521d", "outputId": "b0f90f1c-17c8-46fa-d471-b11059767ede" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "class Chatbot:\n", " def __init__(self, connectors: List[str]):\n", @@ -223,13 +185,13 @@ " if message.lower() == \"quit\":\n", " print(\"Ending chat.\")\n", " break\n", - " # else: # Uncomment for Google Colab to avoid printing the same thing twice\n", - " # print(f\"User: {message}\") # Uncomment for Google Colab to avoid printing the same thing twice\n", + " else: # If using Google Colab, remove this line to avoid printing the same thing twice\n", + " print(f\"User: {message}\") # If using Google Colab, remove this line to avoid printing the same thing twice\n", "\n", " # Generate response\n", " response = co.chat_stream(\n", " message=message,\n", - " model=\"command-r\",\n", + " model=\"command-r-plus\",\n", " conversation_id=self.conversation_id,\n", " connectors=self.connectors,\n", " )\n", @@ -245,8 +207,8 @@ " print(event.text, end=\"\")\n", " elif event.event_type == \"citation-generation\":\n", " citations.extend(event.citations)\n", - " elif event.event_type == \"search-results\":\n", - " cited_documents = event.documents\n", + " elif event.event_type == \"stream-end\":\n", + " cited_documents = event.response.documents\n", "\n", " # Display citations and source documents\n", " if citations:\n", @@ -257,7 +219,7 @@ " print(\"\\nDOCUMENTS:\")\n", " for document in cited_documents:\n", " print({'id': document['id'],\n", - " 'snippet': document['snippet'][:50] + '...',\n", + " 'snippet': document['snippet'][:400] + '...',\n", " 'title': document['title'],\n", " 'url': document['url']})\n", "\n", @@ -292,7 +254,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "99e5005b", "metadata": { "colab": { @@ -303,24 +265,6 @@ 
"outputId": "4609e72c-df6f-4c77-8132-cc0e73b80eee" }, "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, { "name": "stdout", "output_type": "stream", @@ -328,42 +272,25 @@ "User: What is Cohere's LLM University\n", "\n", "Chatbot:\n", - "LLM University, offered by Cohere, is a set of comprehensive learning resources for anyone interested in Natural Language Processing (NLP), from beginners to advanced learners. The curriculum aims to provide a solid foundation in NLP and equips learners with the skills needed to develop their own AI applications.\n", - "The course covers various topics, including semantic search, generation, classification, embeddings, and other NLP techniques. Learners can explore these concepts through hands-on exercises and practical code examples.\n", - "Join the Discord community to connect with other learners and access the latest updates!\n", + "Cohere's LLM University (LLMU) is a set of comprehensive learning resources for anyone interested in natural language processing (NLP), from beginners to advanced learners. The curriculum covers everything from the basics of LLMs to the most advanced topics, including generative AI. The course is designed to give learners a solid foundation in NLP and help them develop their own applications.\n", "\n", "CITATIONS:\n", - "start=27 end=33 text='Cohere' document_ids=['web-search_0', 'web-search_1']\n", - "start=47 end=79 text='comprehensive learning resources' document_ids=['web-search_1']\n", - "start=105 end=132 text='Natural Language Processing' document_ids=['web-search_0', 'web-search_1']\n", - "start=133 end=138 text='(NLP)' document_ids=['web-search_0', 'web-search_1']\n", - "start=145 end=176 text='beginners to advanced learners.' 
document_ids=['web-search_0', 'web-search_1']\n", - "start=181 end=191 text='curriculum' document_ids=['web-search_0', 'web-search_1']\n", - "start=210 end=233 text='solid foundation in NLP' document_ids=['web-search_0', 'web-search_1']\n", - "start=263 end=314 text='skills needed to develop their own AI applications.' document_ids=['web-search_0', 'web-search_1']\n", - "start=359 end=374 text='semantic search' document_ids=['web-search_0', 'web-search_1']\n", - "start=376 end=386 text='generation' document_ids=['web-search_0', 'web-search_1']\n", - "start=388 end=402 text='classification' document_ids=['web-search_0', 'web-search_1']\n", - "start=404 end=414 text='embeddings' document_ids=['web-search_0', 'web-search_1']\n", - "start=420 end=441 text='other NLP techniques.' document_ids=['web-search_0', 'web-search_1']\n", - "start=486 end=504 text='hands-on exercises' document_ids=['web-search_0', 'web-search_1']\n", - "start=509 end=533 text='practical code examples.' document_ids=['web-search_0', 'web-search_1']\n", - "start=543 end=560 text='Discord community' document_ids=['web-search_0', 'web-search_1']\n", + "start=24 end=30 text='(LLMU)' document_ids=['web-search_0', 'web-search_1']\n", + "start=36 end=75 text='set of comprehensive learning resources' document_ids=['web-search_1']\n", + "start=101 end=134 text='natural language processing (NLP)' document_ids=['web-search_0', 'web-search_1']\n", + "start=141 end=172 text='beginners to advanced learners.' document_ids=['web-search_0', 'web-search_1']\n", + "start=177 end=187 text='curriculum' document_ids=['web-search_0', 'web-search_1']\n", + "start=215 end=229 text='basics of LLMs' document_ids=['web-search_0', 'web-search_1']\n", + "start=237 end=283 text='most advanced topics, including generative AI.' document_ids=['web-search_1']\n", + "start=326 end=349 text='solid foundation in NLP' document_ids=['web-search_0', 'web-search_1']\n", + "start=364 end=395 text='develop their own applications.' 
document_ids=['web-search_0', 'web-search_1']\n", "\n", "DOCUMENTS:\n", - "{'id': 'web-search_0', 'snippet': 'Guides and ConceptsAPI ReferenceRelease NotesAppli...', 'title': 'LLM University (LLMU) | Cohere', 'url': 'https://docs.cohere.com/docs/llmu'}\n", - "{'id': 'web-search_1', 'snippet': 'Introducing LLM University — Your Go-To Learning R...', 'title': 'Introducing LLM University — Your Go-To Learning Resource for NLP🎓', 'url': 'https://txt.cohere.com/llm-university/'}\n", - "{'id': 'web-search_2', 'snippet': 'Skip to main content\\n\\nMadras High Court Reads Down...', 'title': 'LawBeat | Madras High Court Reads Down University Admission Rule Mandating 2-Yr LLM for PhD Admission', 'url': 'https://lawbeat.in/news-updates/madras-high-court-reads-down-university-admission-rule-mandating-2-yr-llm-phd-admission'}\n", - "{'id': 'web-search_3', 'snippet': 'Take your legal expertise to the next level with a...', 'title': 'LLM Program', 'url': 'https://www.law.umaryland.edu/academics/llm-program/'}\n", - "{'id': 'web-search_4', 'snippet': \"The People's Network\\n\\nSign In with Facebook\\n\\nBy cl...\", 'title': 'Revolutionizing AI: University of Michigan and Apple Team Up to Boost LLM Efficiency', 'url': 'https://bnnbreaking.com/world/us/revolutionizing-ai-university-of-michigan-and-apple-team-up-to-boost-llm-efficiency'}\n", - "{'id': 'web-search_5', 'snippet': 'Ministers urged to tackle “damaging” trial delays ...', 'title': 'LLM Master of Laws (General) Degree | University of Law', 'url': 'https://www.law.ac.uk/study/postgraduate/law/llm-master-of-laws-general/'}\n", - "{'id': 'web-search_6', 'snippet': \"The People's Network\\n\\nSign In with Facebook\\n\\nBy cl...\", 'title': \"Tsinghua University's Ouroboros Framework: Revolutionizing LLM Inference Speed by 2.8x\", 'url': 'https://bnnbreaking.com/world/china/tsinghua-universitys-ouroboros-framework-revolutionizing-llm-inference-speed-by-28x'}\n", - "{'id': 'web-search_7', 'snippet': 'Skip to main 
content\\n\\nSupport the Law School\\n\\nCons...', 'title': 'LLM & Graduate Programs • Graduate Admissions • Penn Carey Law', 'url': 'https://www.law.upenn.edu/admissions/grad/'}\n", - "{'id': 'web-search_8', 'snippet': 'Skip to navigation | Skip to main content | Skip t...', 'title': 'LLM Law (2024 entry) | The University of Manchester', 'url': 'https://www.manchester.ac.uk/study/masters/courses/list/08446/llm-law/'}\n", + "{'id': 'web-search_0', 'snippet': 'Guides and ConceptsAPI ReferenceRelease NotesApplication ExamplesLLMU\\n\\nCoralDashboardDocumentationPlaygroundCommunityLog In\\n\\nCoralDashboardDocumentationPlaygroundCommunityLog In\\n\\nWelcome to LLM University!\\n\\nWelcome to LLM University by Cohere!\\n\\nWe’re so happy that you’ve chosen to learn Natural Language Processing and Large Language Models with us.\\n\\nOur comprehensive curriculum aims to give you a ...', 'title': 'LLM University (LLMU) | Cohere', 'url': 'https://docs.cohere.com/docs/llmu'}\n", + "{'id': 'web-search_1', 'snippet': \"Introducing LLM University — Your Go-To Learning Resource for NLP🎓\\n\\nDiscover our comprehensive NLP curriculum at LLM University. 
From the fundamentals of LLMs all the way to the most advanced topics, including generative AI\\n\\nWe're excited to announce the launch of LLM University (LLMU), a set of comprehensive learning resources for anyone interested in natural language processing (NLP), from begin...\", 'title': 'Introducing LLM University — Your Go-To Learning Resource for NLP🎓', 'url': 'https://txt.cohere.com/llm-university/'}\n", "\n", "----------------------------------------------------------------------------------------------------\n", "\n", - "User: quit\n", "Ending chat.\n" ] } @@ -378,16 +305,6 @@ "# Run the chatbot\n", "chatbot.run()" ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "7BOGEfHGPrCX", - "metadata": { - "id": "7BOGEfHGPrCX" - }, - "outputs": [], - "source": [] } ], "metadata": { @@ -409,7 +326,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.5" + "version": "3.11.4" } }, "nbformat": 4, diff --git a/notebooks/llmu/RAG_with_Quickstart_Connectors.ipynb b/notebooks/llmu/RAG_with_Quickstart_Connectors.ipynb index 99600d12..8e90548f 100644 --- a/notebooks/llmu/RAG_with_Quickstart_Connectors.ipynb +++ b/notebooks/llmu/RAG_with_Quickstart_Connectors.ipynb @@ -73,30 +73,6 @@ "# Setup" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "eTLUIobOgd_N" - }, - "outputs": [], - "source": [ - "#@title Enable text wrapping in Google Colab\n", - "\n", - "from IPython.display import HTML, display\n", - "\n", - "def set_css():\n", - " display(HTML('''\n", - " \n", - " '''))\n", - "get_ipython().events.register('pre_run_cell', set_css)" - ] - }, { "cell_type": "code", "execution_count": null, @@ -126,7 +102,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "metadata": { "id": "P1wBvi56Rnmv" }, @@ -134,13 +110,33 @@ "source": [ "import cohere\n", "from cohere import ChatConnector\n", - "import os\n", "import uuid\n", "from 
typing import List\n", "\n", "co = cohere.Client(\"COHERE_API_KEY\") # Get your API key here: https://dashboard.cohere.com/api-keys" ] }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "#@title Enable text wrapping in Google Colab\n", + "\n", + "from IPython.display import HTML, display\n", + "\n", + "def set_css():\n", + " display(HTML('''\n", + " \n", + " '''))\n", + "get_ipython().events.register('pre_run_cell', set_css)" + ] + }, { "cell_type": "markdown", "metadata": { @@ -258,7 +254,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -267,26 +263,7 @@ "id": "PB5kpanxRnmw", "outputId": "ee14bf68-0921-41c8-b6c3-bc652a107a4d" }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "class Chatbot:\n", " def __init__(self, connectors: List[str]):\n", @@ -310,13 +287,13 @@ " if message.lower() == \"quit\":\n", " print(\"Ending chat.\")\n", " break\n", - " # else: # Uncomment for Google Colab to avoid printing the same thing twice\n", - " # print(f\"User: {message}\") # Uncomment for Google Colab to avoid printing the same thing twice\n", + " else: # If using Google Colab, remove this line to avoid printing the same thing twice\n", + " print(f\"User: {message}\") # If using Google Colab, remove this line to avoid printing the same thing twice\n", "\n", " # Generate response\n", " response = co.chat_stream(\n", " message=message,\n", - " model=\"command-r\",\n", + " model=\"command-r-plus\",\n", " conversation_id=self.conversation_id,\n", " connectors=self.connectors,\n", " )\n", @@ -332,8 +309,8 @@ " print(event.text, end=\"\")\n", " elif event.event_type == \"citation-generation\":\n", " citations.extend(event.citations)\n", - " elif event.event_type == 
\"search-results\":\n", - " cited_documents = event.documents\n", + " elif event.event_type == \"stream-end\":\n", + " cited_documents = event.response.documents\n", "\n", " # Display citations and source documents\n", " if citations:\n", @@ -375,7 +352,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -385,24 +362,6 @@ "outputId": "4e157035-f0cc-4f68-cde2-2404d92b230a" }, "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " " - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, { "name": "stdout", "output_type": "stream", @@ -410,51 +369,32 @@ "User: What is prompt engineering\n", "\n", "Chatbot:\n", - "Prompt engineering pertains to the practice of constructing prompts to elicit desired responses from large language models (LLMs). Prompts can be constructed in various ways, such as adding specific details, providing instructions, or incorporating output format requirements. Different types of prompts are suited to different use cases. For instance, sequential prompting is common, especially when a task involves multiple subtasks.\n", - "\n", - "Various techniques can be applied when constructing prompts. One example is prompt chaining, which involves running several prompts in a sequence or parallel to accomplish a goal. Additionally, the structure of prompts can be engineered to comply with specific requirements, ensuring that LLM outputs are safe, ethical and privacy-preserving.\n", - "\n", - "Prompt engineering also covers the evaluation of LLM outputs. Evaluations are essential to ensure the quality and accuracy of the outputs, which can be probabilistic and vary for the same prompt. 
Techniques for evaluating LLMs include real user feedback, human evaluation, LLM-generated evaluation and word-level metrics.\n", + "Prompt engineering is a process of constructing prompts for a large language model (LLM) to get a desired response. It involves techniques such as adding specific details, context, output format requirements, and examples to a prompt to guide the LLM's response. Prompt engineering also involves chaining multiple prompts together to accomplish a complex task and validating and evaluating the outputs of LLMs to ensure the quality and accuracy of the responses.\n", "\n", "CITATIONS:\n", - "start=47 end=67 text='constructing prompts' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=71 end=95 text='elicit desired responses' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=101 end=122 text='large language models' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_3', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=123 end=129 text='(LLMs)' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_3', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=190 end=206 text='specific details' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_2']\n", - "start=208 end=230 text='providing instructions' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", - "start=249 end=276 text='output format requirements.' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_2']\n", - "start=277 end=296 text='Different use cases' document_ids=['demo-conn-gdrive-6bfrp6_2']\n", - "start=317 end=344 text='different types of prompts.' document_ids=['demo-conn-gdrive-6bfrp6_2']\n", - "start=359 end=379 text='sequential prompting' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=423 end=441 text='multiple subtasks.' 
document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=451 end=461 text='techniques' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_1', 'demo-conn-gdrive-6bfrp6_2', 'demo-conn-gdrive-6bfrp6_3', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=519 end=534 text='prompt chaining' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=580 end=588 text='sequence' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=592 end=600 text='parallel' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=604 end=622 text='accomplish a goal.' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", - "start=683 end=716 text='comply with specific requirements' document_ids=['demo-conn-gdrive-6bfrp6_4']\n", - "start=748 end=785 text='safe, ethical and privacy-preserving.' document_ids=['demo-conn-gdrive-6bfrp6_4']\n", - "start=822 end=832 text='evaluation' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=878 end=896 text='ensure the quality' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=901 end=909 text='accuracy' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", - "start=939 end=952 text='probabilistic' document_ids=['demo-conn-gdrive-6bfrp6_3', 'demo-conn-gdrive-6bfrp6_4']\n", - "start=957 end=982 text='vary for the same prompt.' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=1022 end=1040 text='real user feedback' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=1042 end=1058 text='human evaluation' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=1060 end=1084 text='LLM-generated evaluation' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", - "start=1089 end=1108 text='word-level metrics.' 
document_ids=['demo-conn-gdrive-6bfrp6_3']\n", + "start=35 end=55 text='constructing prompts' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_1', 'demo-conn-gdrive-6bfrp6_2', 'demo-conn-gdrive-6bfrp6_3', 'demo-conn-gdrive-6bfrp6_4']\n", + "start=62 end=88 text='large language model (LLM)' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_3']\n", + "start=98 end=115 text='desired response.' document_ids=['demo-conn-gdrive-6bfrp6_0', 'demo-conn-gdrive-6bfrp6_3']\n", + "start=154 end=170 text='specific details' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", + "start=172 end=179 text='context' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", + "start=181 end=207 text='output format requirements' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", + "start=213 end=221 text='examples' document_ids=['demo-conn-gdrive-6bfrp6_0']\n", + "start=237 end=262 text=\"guide the LLM's response.\" document_ids=['demo-conn-gdrive-6bfrp6_0']\n", + "start=296 end=330 text='chaining multiple prompts together' document_ids=['demo-conn-gdrive-6bfrp6_1']\n", + "start=364 end=374 text='validating' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", + "start=379 end=389 text='evaluating' document_ids=['demo-conn-gdrive-6bfrp6_4']\n", + "start=424 end=431 text='quality' document_ids=['demo-conn-gdrive-6bfrp6_4']\n", + "start=436 end=444 text='accuracy' document_ids=['demo-conn-gdrive-6bfrp6_3']\n", "\n", "DOCUMENTS:\n", "{'id': 'demo-conn-gdrive-6bfrp6_0', 'text': \"\\ufeffConstructing Prompts\\r\\nIn this chapter, you'll lea...\"}\n", "{'id': 'demo-conn-gdrive-6bfrp6_1', 'text': \"\\ufeffChaining Prompts\\r\\nIn this chapter, you'll learn a...\"}\n", "{'id': 'demo-conn-gdrive-6bfrp6_2', 'text': \"\\ufeffUse Case Patterns\\r\\nIn this chapter, you'll learn ...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_3', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", - "{'id': 'demo-conn-gdrive-6bfrp6_4', 'text': \"\\ufeffValidating Outputs\\r\\nIn 
this chapter, you'll learn...\"}\n", + "{'id': 'demo-conn-gdrive-6bfrp6_3', 'text': \"\\ufeffValidating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", + "{'id': 'demo-conn-gdrive-6bfrp6_4', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n", "\n", "----------------------------------------------------------------------------------------------------\n", "\n", - "User: quit\n", "Ending chat.\n" ] } @@ -473,9 +413,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "id": "Bb3CfkX1YHeW" - }, + "metadata": {}, "outputs": [], "source": [] } @@ -487,7 +425,7 @@ "kernelspec": { "display_name": "base", "language": "python", - "name": "base" + "name": "python3" }, "language_info": { "codemirror_mode": { diff --git a/notebooks/llmu/Validating_Large_Language_Model_Outputs.ipynb b/notebooks/llmu/Validating_Large_Language_Model_Outputs.ipynb index 43ab460a..a78cf359 100644 --- a/notebooks/llmu/Validating_Large_Language_Model_Outputs.ipynb +++ b/notebooks/llmu/Validating_Large_Language_Model_Outputs.ipynb @@ -2,7 +2,9 @@ "cells": [ { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "XtZlzMox9SLA" + }, "source": [ "\n", " \"Open\n", @@ -24,11 +26,11 @@ "id": "npA3DLlzq6f8" }, "source": [ - "This is the notebook companion for the blog post [Validating Large Language Model Outputs](https://txt.cohere.ai/validating-llm-outputs).\n", - "\n", "One key property of LLMs that’s different from traditional software is that the output is probabilistic in nature. The same input (i.e., the prompt) may not always produce the same response. While this property makes it possible to build entirely new classes of natural language applications, it also means that those applications require a mechanism for validating their outputs.\n", "\n", - "An output validation step ensures that an LLM application is robust and predictable. 
In this article, we looked at what output validation is and how to implement it using [Guardrails AI](http://getguardrails.ai/).\n"
+ "An output validation step ensures that an LLM application is robust and predictable. In this notebook, we'll look at what output validation is and how to implement it using [Guardrails AI](https://www.guardrailsai.com/).\n",
+ "\n",
+ "Read the accompanying [article here](https://docs.cohere.com/docs/validating-outputs).\n"
 ]
 },
 {
@@ -42,34 +44,175 @@
 },
 {
 "cell_type": "code",
- "execution_count": 24,
+ "execution_count": null,
 "metadata": {
- "id": "9_ivCnkrgHIx"
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9_ivCnkrgHIx",
+ "outputId": "95ef1aec-1ba2-49cd-e8f1-2cb10a824729"
 },
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Collecting git+https://github.com/guardrails-ai/guardrails.git@main\n",
+ " Cloning https://github.com/guardrails-ai/guardrails.git (to revision main) to /tmp/pip-req-build-vcruxxjc\n",
+ " Running command git clone --filter=blob:none --quiet https://github.com/guardrails-ai/guardrails.git /tmp/pip-req-build-vcruxxjc\n",
+ " Resolved https://github.com/guardrails-ai/guardrails.git to commit 6de5641b8f269164cd57cd95f32dacb9e7d83537\n",
+ " Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
+ " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
+ " Preparing metadata (pyproject.toml) ... 
\u001b[?25l\u001b[?25hdone\n", + "Requirement already satisfied: cohere in /usr/local/lib/python3.10/dist-packages (5.2.6)\n", + "Requirement already satisfied: fastavro<2.0.0,>=1.9.4 in /usr/local/lib/python3.10/dist-packages (from cohere) (1.9.4)\n", + "Requirement already satisfied: httpx>=0.21.2 in /usr/local/lib/python3.10/dist-packages (from cohere) (0.27.0)\n", + "Requirement already satisfied: pydantic>=1.9.2 in /usr/local/lib/python3.10/dist-packages (from cohere) (2.6.4)\n", + "Requirement already satisfied: requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from cohere) (2.31.0)\n", + "Requirement already satisfied: tokenizers<0.16.0,>=0.15.2 in /usr/local/lib/python3.10/dist-packages (from cohere) (0.15.2)\n", + "Requirement already satisfied: types-requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from cohere) (2.31.0.20240406)\n", + "Requirement already satisfied: typing_extensions>=4.0.0 in /usr/local/lib/python3.10/dist-packages (from cohere) (4.11.0)\n", + "Requirement already satisfied: coloredlogs<16.0.0,>=15.0.1 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (15.0.1)\n", + "Requirement already satisfied: griffe<0.37.0,>=0.36.9 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (0.36.9)\n", + "Requirement already satisfied: guardrails-api-client<0.2.0,>=0.1.1 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (0.1.1)\n", + "Requirement already satisfied: jwt<2.0.0,>=1.3.1 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (1.3.1)\n", + "Requirement already satisfied: langchain-core<0.2.0,>=0.1.18 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (0.1.42)\n", + "Requirement already satisfied: lxml<5.0.0,>=4.9.3 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (4.9.4)\n", + "Requirement already satisfied: openai<2 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) 
(1.17.1)\n", + "Requirement already satisfied: opentelemetry-exporter-otlp-proto-grpc==1.20.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: opentelemetry-exporter-otlp-proto-http==1.20.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: opentelemetry-sdk==1.20.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: pydash<8.0.0,>=7.0.6 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (7.0.7)\n", + "Requirement already satisfied: python-dateutil<3.0.0,>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (2.8.2)\n", + "Requirement already satisfied: regex<2024.0.0,>=2023.10.3 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (2023.12.25)\n", + "Requirement already satisfied: rich<14.0.0,>=13.6.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (13.7.1)\n", + "Requirement already satisfied: rstr<4.0.0,>=3.2.2 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (3.2.2)\n", + "Requirement already satisfied: tenacity>=8.1.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (8.2.3)\n", + "Requirement already satisfied: tiktoken<0.6.0,>=0.5.1 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (0.5.2)\n", + "Requirement already satisfied: typer[all]<0.10.0,>=0.9.0 in /usr/local/lib/python3.10/dist-packages (from guardrails-ai==0.4.3) (0.9.4)\n", + "Requirement already satisfied: backoff<3.0.0,>=1.10.0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (2.2.1)\n", + "Requirement already satisfied: deprecated>=1.2.6 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.2.14)\n", + "Requirement 
already satisfied: googleapis-common-protos~=1.52 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.63.0)\n", + "Requirement already satisfied: grpcio<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.62.1)\n", + "Requirement already satisfied: opentelemetry-api~=1.15 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: opentelemetry-exporter-otlp-proto-common==1.20.0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: opentelemetry-proto==1.20.0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.20.0)\n", + "Requirement already satisfied: opentelemetry-semantic-conventions==0.41b0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-sdk==1.20.0->guardrails-ai==0.4.3) (0.41b0)\n", + "Requirement already satisfied: importlib-metadata<7.0,>=6.0 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-api~=1.15->opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (6.11.0)\n", + "Requirement already satisfied: protobuf<5.0,>=3.19 in /usr/local/lib/python3.10/dist-packages (from opentelemetry-proto==1.20.0->opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (3.20.3)\n", + "Requirement already satisfied: humanfriendly>=9.1 in /usr/local/lib/python3.10/dist-packages (from coloredlogs<16.0.0,>=15.0.1->guardrails-ai==0.4.3) (10.0)\n", + "Requirement already satisfied: colorama>=0.4 in /usr/local/lib/python3.10/dist-packages (from griffe<0.37.0,>=0.36.9->guardrails-ai==0.4.3) (0.4.6)\n", + "Requirement already satisfied: attrs>=21.3.0 in 
/usr/local/lib/python3.10/dist-packages (from guardrails-api-client<0.2.0,>=0.1.1->guardrails-ai==0.4.3) (23.2.0)\n", + "Requirement already satisfied: anyio in /usr/local/lib/python3.10/dist-packages (from httpx>=0.21.2->cohere) (3.7.1)\n", + "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx>=0.21.2->cohere) (2024.2.2)\n", + "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.10/dist-packages (from httpx>=0.21.2->cohere) (1.0.5)\n", + "Requirement already satisfied: idna in /usr/local/lib/python3.10/dist-packages (from httpx>=0.21.2->cohere) (3.6)\n", + "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from httpx>=0.21.2->cohere) (1.3.1)\n", + "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.10/dist-packages (from httpcore==1.*->httpx>=0.21.2->cohere) (0.14.0)\n", + "Requirement already satisfied: cryptography!=3.4.0,>=3.1 in /usr/local/lib/python3.10/dist-packages (from jwt<2.0.0,>=1.3.1->guardrails-ai==0.4.3) (42.0.5)\n", + "Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (6.0.1)\n", + "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (1.33)\n", + "Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (0.1.47)\n", + "Requirement already satisfied: packaging<24.0,>=23.2 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (23.2)\n", + "Requirement already satisfied: distro<2,>=1.7.0 in /usr/lib/python3/dist-packages (from openai<2->guardrails-ai==0.4.3) (1.7.0)\n", + "Requirement already satisfied: tqdm>4 in /usr/local/lib/python3.10/dist-packages (from openai<2->guardrails-ai==0.4.3) 
(4.66.2)\n", + "Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic>=1.9.2->cohere) (0.6.0)\n", + "Requirement already satisfied: pydantic-core==2.16.3 in /usr/local/lib/python3.10/dist-packages (from pydantic>=1.9.2->cohere) (2.16.3)\n", + "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil<3.0.0,>=2.8.2->guardrails-ai==0.4.3) (1.16.0)\n", + "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3.0.0,>=2.0.0->cohere) (3.3.2)\n", + "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests<3.0.0,>=2.0.0->cohere) (2.0.7)\n", + "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from rich<14.0.0,>=13.6.0->guardrails-ai==0.4.3) (3.0.0)\n", + "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.10/dist-packages (from rich<14.0.0,>=13.6.0->guardrails-ai==0.4.3) (2.16.1)\n", + "Requirement already satisfied: huggingface_hub<1.0,>=0.16.4 in /usr/local/lib/python3.10/dist-packages (from tokenizers<0.16.0,>=0.15.2->cohere) (0.20.3)\n", + "Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.10/dist-packages (from typer[all]<0.10.0,>=0.9.0->guardrails-ai==0.4.3) (8.1.7)\n", + "Requirement already satisfied: shellingham<2.0.0,>=1.3.0 in /usr/local/lib/python3.10/dist-packages (from typer[all]<0.10.0,>=0.9.0->guardrails-ai==0.4.3) (1.5.4)\n", + "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio->httpx>=0.21.2->cohere) (1.2.0)\n", + "Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography!=3.4.0,>=3.1->jwt<2.0.0,>=1.3.1->guardrails-ai==0.4.3) (1.16.0)\n", + "Requirement already satisfied: wrapt<2,>=1.10 in /usr/local/lib/python3.10/dist-packages (from 
deprecated>=1.2.6->opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (1.14.1)\n", + "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers<0.16.0,>=0.15.2->cohere) (3.13.4)\n", + "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers<0.16.0,>=0.15.2->cohere) (2023.6.0)\n", + "Requirement already satisfied: jsonpointer>=1.9 in /usr/local/lib/python3.10/dist-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (2.4)\n", + "Requirement already satisfied: orjson<4.0.0,>=3.9.14 in /usr/local/lib/python3.10/dist-packages (from langsmith<0.2.0,>=0.1.0->langchain-core<0.2.0,>=0.1.18->guardrails-ai==0.4.3) (3.10.0)\n", + "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py>=2.2.0->rich<14.0.0,>=13.6.0->guardrails-ai==0.4.3) (0.1.2)\n", + "Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography!=3.4.0,>=3.1->jwt<2.0.0,>=1.3.1->guardrails-ai==0.4.3) (2.22)\n", + "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.10/dist-packages (from importlib-metadata<7.0,>=6.0->opentelemetry-api~=1.15->opentelemetry-exporter-otlp-proto-grpc==1.20.0->guardrails-ai==0.4.3) (3.18.1)\n" + ] + } + ], "source": [ - "# TODO: upgrade to \"cohere>5\"\n", - "! pip install \"cohere<5\" guardrails-ai -q" + "! 
pip install cohere git+https://github.com/guardrails-ai/guardrails.git@main" ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, - "id": "-6sn0QIqftCE", - "outputId": "055338d2-3695-4cf0-e809-8e63fe542566" + "id": "6RTHgP7Y9YT9", + "outputId": "0ba8c0c5-a987-40e3-d6ad-6479b09296d2" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[nltk_data] Downloading package punkt to /root/nltk_data...\n", + "[nltk_data] Unzipping tokenizers/punkt.zip.\n", + "\n", + "Installing hub:\u001b[35m/\u001b[0m\u001b[35m/guardrails/\u001b[0m\u001b[95mvalid_range...\u001b[0m\n", + "\n", + "\u001b[2K\u001b[32m[ ==]\u001b[0m Fetching manifest\n", + "\u001b[2K\u001b[32m[ ]\u001b[0m Downloading dependencies\n", + "\u001b[1A\u001b[2K\u001b[?25l\u001b[32m[ ]\u001b[0m Running post-install setup\n", + "\u001b[1A\u001b[2K✅Successfully installed hub:\u001b[35m/\u001b[0m\u001b[35m/guardrails/\u001b[0m\u001b[95mvalid_range\u001b[0m!\n", + "\n", + "\u001b[1mImport validator:\u001b[0m\n", + "from guardrails.hub import ValidRange\n", + "\n", + "\u001b[1mGet more info:\u001b[0m\n", + "\u001b[4;94mhttps://hub.guardrailsai.com/validator/guardrails/valid_range\u001b[0m\n", + "\n", + "\n", + "Installing hub:\u001b[35m/\u001b[0m\u001b[35m/guardrails/\u001b[0m\u001b[95mvalid_choices...\u001b[0m\n", + "\n", + "\u001b[2K\u001b[32m[= ]\u001b[0m Fetching manifest\n", + "\u001b[2K\u001b[32m[====]\u001b[0m Downloading dependencies\n", + "\u001b[1A\u001b[2K\u001b[?25l\u001b[32m[ ]\u001b[0m Running post-install setup\n", + "\u001b[1A\u001b[2K✅Successfully installed hub:\u001b[35m/\u001b[0m\u001b[35m/guardrails/\u001b[0m\u001b[95mvalid_choices\u001b[0m!\n", + "\n", + "\u001b[1mImport validator:\u001b[0m\n", + "from guardrails.hub import ValidChoices\n", + "\n", + "\u001b[1mGet more info:\u001b[0m\n", + "\u001b[4;94mhttps://hub.guardrailsai.com/validator/guardrails/valid_choices\u001b[0m\n", 
+ "\n" + ] + } + ], + "source": [ + "!guardrails hub install hub://guardrails/valid_range\n", + "!guardrails hub install hub://guardrails/valid_choices" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-6sn0QIqftCE" }, "outputs": [], "source": [ "import cohere\n", "import guardrails as gd\n", - "from guardrails.validators import ValidRange, ValidChoices\n", + "from guardrails.hub import ValidRange, ValidChoices\n", "from pydantic import BaseModel, Field\n", "from rich import print\n", - "from typing import List" + "from typing import List\n", + "\n", + "# Create a Cohere client\n", + "co = cohere.Client(api_key=\"COHERE_API_KEY\")" ] }, { @@ -93,7 +236,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": { "id": "JCXuT3sNftCF" }, @@ -121,7 +264,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "metadata": { "id": "WQZn8JQNftCF" }, @@ -158,7 +301,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": null, "metadata": { "id": "VnajfL1FftCG" }, @@ -175,16 +318,32 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", - "height": 661 + "height": 839 }, "id": "paqXhvhjftCG", - "outputId": "95b9fa39-3641-43f2-de02-8d4d8959323f" + "outputId": "a4ee28d3-c1eb-4218-9c7a-23f55f2b3e90" }, "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/usr/local/lib/python3.10/dist-packages/guardrails/validators/__init__.py:50: FutureWarning: \n", + " Importing validators from `guardrails.validators` is deprecated.\n", + " All validators are now available in the Guardrails Hub. Please install\n", + " and import them from the hub instead. 
All validators will be\n", + " removed from this module in the next major release.\n", + "\n", + " Install with: `guardrails hub install hub:///`\n", + " Import as: from guardrails.hub import `ValidatorName`\n", + " \n", + " warn(\n" + ] + }, { "data": { "text/html": [ @@ -198,12 +357,12 @@ "\n", "<output>\n", " <string name=\"gender\" description=\"Patient's gender\"/>\n", - " <integer name=\"age\" format=\"valid-range: min=0 max=100\" description=\"Patient's age\"/>\n", + " <integer name=\"age\" description=\"Patient's age\" format=\"guardrails/valid_range: min=0 max=100\"/>\n", " <list name=\"symptoms\" description=\"Symptoms that the patient is experiencing\">\n", " <object>\n", " <string name=\"symptom\" description=\"Symptom that a patient is experiencing\"/>\n", - " <string name=\"affected_area\" format=\"valid-choices: choices=['Head', 'Face', 'Neck', 'Chest']\" \n", - "description=\"What part of the body the symptom is affecting\"/>\n", + " <string name=\"affected_area\" description=\"What part of the body the symptom is affecting\" \n", + "format=\"guardrails/valid_choices: choices=['Head', 'Face', 'Neck', 'Chest']\"/>\n", " </object>\n", " </list>\n", " <list name=\"current_meds\" description=\"Medications that the patient is currently taking\">\n", @@ -240,12 +399,12 @@ "\n", "\u001b[1m<\u001b[0m\u001b[1;95moutput\u001b[0m\u001b[39m>\u001b[0m\n", "\u001b[39m \u001b[0m\n", - "\u001b[39m \u001b[0m\n", + "\u001b[39m \u001b[0m\n", "\u001b[39m \u001b[0m\n", "\u001b[39m \u001b[0m\n", "\u001b[39m \u001b[0m\n", - "\u001b[39m \u001b[0m\n", + "\u001b[39m \u001b[0m\n", "\u001b[39m <\u001b[0m\u001b[35m/\u001b[0m\u001b[95mobject\u001b[0m\u001b[39m>\u001b[0m\n", "\u001b[39m <\u001b[0m\u001b[35m/\u001b[0m\u001b[95mlist\u001b[0m\u001b[39m>\u001b[0m\n", "\u001b[39m \u001b[0m\n", @@ -292,44 +451,24 @@ }, { "cell_type": "code", - "execution_count": 7, - "metadata": { - "id": "PtIoV2DVkF-3" - }, - "outputs": [], - "source": [ - "# Create a Cohere client\n", - "co = 
cohere.Client(api_key='COHERE_API_KEY')" - ] - }, - { - "cell_type": "code", - "execution_count": 21, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", - "height": 172 + "height": 116 }, "id": "4gH6LEoTMuER", - "outputId": "76cde338-d979-4230-b777-5b658c5754ff" + "outputId": "fd1161d6-b76d-4313-87fc-3cb61b1d4bb3" }, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING:guardrails.validator_service:Async event loop found, but guard was invoked synchronously.For validator parallelization, please call `validate_async` instead.\n", - "WARNING:guardrails.validator_service:Async event loop found, but guard was invoked synchronously.For validator parallelization, please call `validate_async` instead.\n" - ] - }, { "data": { "text/html": [ "
{\n",
               "    'gender': 'Male',\n",
               "    'age': 49,\n",
-              "    'symptoms': [{'symptom': 'Chronic macular rash. Itchy, flaky, slightly scaly', 'affected_area': 'Head'}],\n",
-              "    'current_meds': [{'medication': 'OTC steroid cream', 'response': 'Moderate'}]\n",
+              "    'symptoms': [{'symptom': 'Chronic macular rash, itchy, flaky, slightly scaly', 'affected_area': 'Face'}],\n",
+              "    'current_meds': [{'medication': 'OTC steroid cream', 'response': 'Moderate response'}]\n",
               "}\n",
               "
\n" ], @@ -337,8 +476,8 @@ "\u001b[1m{\u001b[0m\n", " \u001b[32m'gender'\u001b[0m: \u001b[32m'Male'\u001b[0m,\n", " \u001b[32m'age'\u001b[0m: \u001b[1;36m49\u001b[0m,\n", - " \u001b[32m'symptoms'\u001b[0m: \u001b[1m[\u001b[0m\u001b[1m{\u001b[0m\u001b[32m'symptom'\u001b[0m: \u001b[32m'Chronic macular rash. Itchy, flaky, slightly scaly'\u001b[0m, \u001b[32m'affected_area'\u001b[0m: \u001b[32m'Head'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m]\u001b[0m,\n", - " \u001b[32m'current_meds'\u001b[0m: \u001b[1m[\u001b[0m\u001b[1m{\u001b[0m\u001b[32m'medication'\u001b[0m: \u001b[32m'OTC steroid cream'\u001b[0m, \u001b[32m'response'\u001b[0m: \u001b[32m'Moderate'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m]\u001b[0m\n", + " \u001b[32m'symptoms'\u001b[0m: \u001b[1m[\u001b[0m\u001b[1m{\u001b[0m\u001b[32m'symptom'\u001b[0m: \u001b[32m'Chronic macular rash, itchy, flaky, slightly scaly'\u001b[0m, \u001b[32m'affected_area'\u001b[0m: \u001b[32m'Face'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m]\u001b[0m,\n", + " \u001b[32m'current_meds'\u001b[0m: \u001b[1m[\u001b[0m\u001b[1m{\u001b[0m\u001b[32m'medication'\u001b[0m: \u001b[32m'OTC steroid cream'\u001b[0m, \u001b[32m'response'\u001b[0m: \u001b[32m'Moderate response'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m]\u001b[0m\n", "\u001b[1m}\u001b[0m\n" ] }, @@ -348,28 +487,27 @@ ], "source": [ "# Wrap the Cohere API call with the `guard` object\n", - "raw_llm_output, validated_output = guard(\n", - " co.generate,\n", + "response = guard(\n", + " co.chat,\n", " prompt_params={\"doctors_notes\": doctors_notes},\n", - " model='command',\n", - " max_tokens=1024,\n", - " temperature=0.3,\n", + " model='command-r',\n", + " temperature=0,\n", ")\n", "\n", "# Print the validated output from the LLM\n", - "print(validated_output)" + "print(response.validated_output)" ] }, { "cell_type": "code", - "execution_count": 23, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "0lM_-rvjNN_k", - "outputId": 
"d1989b43-3650-4d94-f8ab-a6bfdcc185ba" + "outputId": "5218e56d-2a7c-422a-9e5e-258d40a95036" }, "outputs": [ { @@ -390,12 +528,12 @@ "│ │ │ │ │\n", "│ │ │ <output> │ │\n", "│ │ │ <string name=\"gender\" description=\"Patient's gender\"/> │ │\n", - "│ │ │ <integer name=\"age\" format=\"valid-range: min=0 max=100\" description=\"Patient's age\"/> │ │\n", + "│ │ │ <integer name=\"age\" description=\"Patient's age\" format=\"guardrails/valid_range: min=0 max=100\"/> │ │\n", "│ │ │ <list name=\"symptoms\" description=\"Symptoms that the patient is experiencing\"> │ │\n", "│ │ │ <object> │ │\n", "│ │ │ <string name=\"symptom\" description=\"Symptom that a patient is experiencing\"/> │ │\n", - "│ │ │ <string name=\"affected_area\" format=\"valid-choices: choices=['Head', 'Face', 'Neck', │ │\n", - "│ │ │ 'Chest']\" description=\"What part of the body the symptom is affecting\"/> │ │\n", + "│ │ │ <string name=\"affected_area\" description=\"What part of the body the symptom is affecting\" │ │\n", + "│ │ │ format=\"guardrails/valid_choices: choices=['Head', 'Face', 'Neck', 'Chest']\"/> │ │\n", "│ │ │ </object> │ │\n", "│ │ │ </list> │ │\n", "│ │ │ <list name=\"current_meds\" description=\"Medications that the patient is currently taking\"> │ │\n", @@ -422,25 +560,22 @@ "│ │ │ │ │\n", "│ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", "│ │ ╭──────────────────────────────────────────── Message History ────────────────────────────────────────────╮ │\n", - "│ │ │ ┏━━━━━━┳━━━━━━━━━┓ │ │\n", - "│ │ │ ┃ Role Content ┃ │ │\n", - "│ │ │ ┡━━━━━━╇━━━━━━━━━┩ │ │\n", - "│ │ │ └──────┴─────────┘ │ │\n", + "│ │ │ No message history. 
│ │\n", "│ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", "│ │ ╭──────────────────────────────────────────── Raw LLM Output ─────────────────────────────────────────────╮ │\n", - "│ │ │ { │ │\n", + "│ │ │ { │ │\n", "│ │ │ \"gender\": \"Male\", │ │\n", "│ │ │ \"age\": 49, │ │\n", "│ │ │ \"symptoms\": [ │ │\n", "│ │ │ { │ │\n", - "│ │ │ \"symptom\": \"Chronic macular rash. Itchy, flaky, slightly scaly\", │ │\n", - "│ │ │ \"affected_area\": \"Face, Head, Hair, Beard, Eyebrows, Nares\" │ │\n", + "│ │ │ \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\", │ │\n", + "│ │ │ \"affected_area\": \"Face & Head\" │ │\n", "│ │ │ } │ │\n", "│ │ │ ], │ │\n", "│ │ │ \"current_meds\": [ │ │\n", "│ │ │ { │ │\n", "│ │ │ \"medication\": \"OTC steroid cream\", │ │\n", - "│ │ │ \"response\": \"Moderate\" │ │\n", + "│ │ │ \"response\": \"Moderate response\" │ │\n", "│ │ │ } │ │\n", "│ │ │ ] │ │\n", "│ │ │ } │ │\n", @@ -451,15 +586,15 @@ "│ │ │ 'age': 49, │ │\n", "│ │ │ 'symptoms': [ │ │\n", "│ │ │ { │ │\n", - "│ │ │ 'symptom': 'Chronic macular rash. 
Itchy, flaky, slightly scaly', │ │\n", + "│ │ │ 'symptom': 'Chronic macular rash, itchy, flaky, slightly scaly', │ │\n", "│ │ │ 'affected_area': FieldReAsk( │ │\n", - "│ │ │ incorrect_value='Face, Head, Hair, Beard, Eyebrows, Nares', │ │\n", + "│ │ │ incorrect_value='Face & Head', │ │\n", "│ │ │ fail_results=[ │ │\n", "│ │ │ FailResult( │ │\n", "│ │ │ outcome='fail', │ │\n", "│ │ │ metadata=None, │ │\n", - "│ │ │ error_message=\"Value Face, Head, Hair, Beard, Eyebrows, Nares is not in choices │ │\n", - "│ │ │ ['Head', 'Face', 'Neck', 'Chest'].\", │ │\n", + "│ │ │ error_message=\"Value Face & Head is not in choices ['Head', 'Face', 'Neck', │ │\n", + "│ │ │ 'Chest'].\", │ │\n", "│ │ │ fix_value=None │ │\n", "│ │ │ ) │ │\n", "│ │ │ ], │ │\n", @@ -468,7 +603,7 @@ "│ │ │ } │ │\n", "│ │ │ ], │ │\n", "│ │ │ 'current_meds': [ │ │\n", - "│ │ │ {'medication': 'OTC steroid cream', 'response': 'Moderate'} │ │\n", + "│ │ │ {'medication': 'OTC steroid cream', 'response': 'Moderate response'} │ │\n", "│ │ │ ] │ │\n", "│ │ │ } │ │\n", "│ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", @@ -483,12 +618,11 @@ " │ │ \"age\": 49, │ │\n", " │ │ \"symptoms\": [ │ │\n", " │ │ { │ │\n", - " │ │ \"symptom\": \"Chronic macular rash. 
Itchy, flaky, slightly scaly\", │ │\n", + " │ │ \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\", │ │\n", " │ │ \"affected_area\": { │ │\n", - " │ │ \"incorrect_value\": \"Face, Head, Hair, Beard, Eyebrows, Nares\", │ │\n", + " │ │ \"incorrect_value\": \"Face & Head\", │ │\n", " │ │ \"error_messages\": [ │ │\n", - " │ │ \"Value Face, Head, Hair, Beard, Eyebrows, Nares is not in choices ['Head', 'Face', 'Neck', │ │\n", - " │ │ 'Chest'].\" │ │\n", + " │ │ \"Value Face & Head is not in choices ['Head', 'Face', 'Neck', 'Chest'].\" │ │\n", " │ │ ] │ │\n", " │ │ } │ │\n", " │ │ } │ │\n", @@ -496,7 +630,7 @@ " │ │ \"current_meds\": [ │ │\n", " │ │ { │ │\n", " │ │ \"medication\": \"OTC steroid cream\", │ │\n", - " │ │ \"response\": \"Moderate\" │ │\n", + " │ │ \"response\": \"Moderate response\" │ │\n", " │ │ } │ │\n", " │ │ ] │ │\n", " │ │ } │ │\n", @@ -508,12 +642,12 @@ " │ │ │ │\n", " │ │ <output> │ │\n", " │ │ <string name=\"gender\" description=\"Patient's gender\"/> │ │\n", - " │ │ <integer name=\"age\" format=\"valid-range: min=0 max=100\" description=\"Patient's age\"/> │ │\n", + " │ │ <integer name=\"age\" description=\"Patient's age\" format=\"guardrails/valid_range: min=0 max=100\"/> │ │\n", " │ │ <list name=\"symptoms\" description=\"Symptoms that the patient is experiencing\"> │ │\n", " │ │ <object> │ │\n", " │ │ <string name=\"symptom\" description=\"Symptom that a patient is experiencing\"/> │ │\n", - " │ │ <string name=\"affected_area\" format=\"valid-choices: choices=['Head', 'Face', 'Neck', │ │\n", - " │ │ 'Chest']\" description=\"What part of the body the symptom is affecting\"/> │ │\n", + " │ │ <string name=\"affected_area\" description=\"What part of the body the symptom is affecting\" │ │\n", + " │ │ format=\"guardrails/valid_choices: choices=['Head', 'Face', 'Neck', 'Chest']\"/> │ │\n", " │ │ </object> │ │\n", " │ │ </list> │ │\n", " │ │ <list name=\"current_meds\" description=\"Medications that the patient is currently taking\"> │ 
│\n", @@ -532,42 +666,23 @@ " │ │ enter `null`. │ │\n", " │ │ │ │\n", " │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", - " │ ╭───────────────────────────────────────────── Instructions ──────────────────────────────────────────────╮ │\n", - " │ │ │ │\n", - " │ │ You are a helpful assistant only capable of communicating with valid JSON, and no other text. │ │\n", - " │ │ │ │\n", - " │ │ ONLY return a valid JSON object (no other text is necessary), where the key of the field in JSON is the │ │\n", - " │ │ `name` attribute of the corresponding XML, and the value is of the type specified by the corresponding │ │\n", - " │ │ XML's tag. The JSON MUST conform to the XML format, including any types and format requests e.g. │ │\n", - " │ │ requests for lists, objects and specific types. Be correct and concise. If you are unsure anywhere, │ │\n", - " │ │ enter `null`. │ │\n", - " │ │ │ │\n", - " │ │ Here are examples of simple (XML, JSON) pairs that show the expected behavior: │ │\n", - " │ │ - `<string name='foo' format='two-words lower-case' />` => `{'foo': 'example one'}` │ │\n", - " │ │ - `<list name='bar'><string format='upper-case' /></list>` => `{\"bar\": ['STRING ONE', 'STRING TWO', │ │\n", - " │ │ etc.]}` │ │\n", - " │ │ - `<object name='baz'><string name=\"foo\" format=\"capitalize two-words\" /><integer name=\"index\" │ │\n", - " │ │ format=\"1-indexed\" /></object>` => `{'baz': {'foo': 'Some String', 'index': 1}}` │ │\n", - " │ │ │ │\n", - " │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", " │ ╭──────────────────────────────────────────── Message History ────────────────────────────────────────────╮ │\n", " │ │ No message history. 
│ │\n", " │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", " │ ╭──────────────────────────────────────────── Raw LLM Output ─────────────────────────────────────────────╮ │\n", - " │ │ { │ │\n", + " │ │ { │ │\n", " │ │ \"gender\": \"Male\", │ │\n", " │ │ \"age\": 49, │ │\n", " │ │ \"symptoms\": [ │ │\n", " │ │ { │ │\n", - " │ │ \"symptom\": \"Chronic macular rash. Itchy, flaky, slightly scaly\", │ │\n", - " │ │ \"affected_area\": \"Head\", │ │\n", - " │ │ \"error_messages\": [] │ │\n", + " │ │ \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\", │ │\n", + " │ │ \"affected_area\": \"Face\" │ │\n", " │ │ } │ │\n", " │ │ ], │ │\n", " │ │ \"current_meds\": [ │ │\n", " │ │ { │ │\n", " │ │ \"medication\": \"OTC steroid cream\", │ │\n", - " │ │ \"response\": \"Moderate\" │ │\n", + " │ │ \"response\": \"Moderate response\" │ │\n", " │ │ } │ │\n", " │ │ ] │ │\n", " │ │ } │ │\n", @@ -578,12 +693,12 @@ " │ │ 'age': 49, │ │\n", " │ │ 'symptoms': [ │ │\n", " │ │ { │ │\n", - " │ │ 'symptom': 'Chronic macular rash. 
Itchy, flaky, slightly scaly', │ │\n", - " │ │ 'affected_area': 'Head' │ │\n", + " │ │ 'symptom': 'Chronic macular rash, itchy, flaky, slightly scaly', │ │\n", + " │ │ 'affected_area': 'Face' │ │\n", " │ │ } │ │\n", " │ │ ], │ │\n", " │ │ 'current_meds': [ │ │\n", - " │ │ {'medication': 'OTC steroid cream', 'response': 'Moderate'} │ │\n", + " │ │ {'medication': 'OTC steroid cream', 'response': 'Moderate response'} │ │\n", " │ │ ] │ │\n", " │ │ } │ │\n", " │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │\n", @@ -606,12 +721,12 @@ "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m 
\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", @@ -638,25 +753,22 @@ "│ │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", "│ │ \u001b[48;2;240;248;255m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", "│ │ \u001b[48;2;231;223;235m╭─\u001b[0m\u001b[48;2;231;223;235m───────────────────────────────────────────\u001b[0m\u001b[48;2;231;223;235m Message History 
\u001b[0m\u001b[48;2;231;223;235m───────────────────────────────────────────\u001b[0m\u001b[48;2;231;223;235m─╮\u001b[0m │\n", - "│ │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m┏━━━━━━┳━━━━━━━━━┓\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", - "│ │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m┃\u001b[0m\u001b[1;48;2;231;223;235m \u001b[0m\u001b[1;48;2;231;223;235mRole\u001b[0m\u001b[1;48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m┃\u001b[0m\u001b[1;48;2;231;223;235m \u001b[0m\u001b[1;48;2;231;223;235mContent\u001b[0m\u001b[1;48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m┃\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", - "│ │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m┡━━━━━━╇━━━━━━━━━┩\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", - "│ │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m└──────┴─────────┘\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", + "│ │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235mNo message history.\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", "│ │ \u001b[48;2;231;223;235m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m╭─\u001b[0m\u001b[48;2;245;245;220m───────────────────────────────────────────\u001b[0m\u001b[48;2;245;245;220m Raw LLM Output 
\u001b[0m\u001b[48;2;245;245;220m────────────────────────────────────────────\u001b[0m\u001b[48;2;245;245;220m─╮\u001b[0m │\n", - "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m{\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"gender\": \"Male\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"age\": 49,\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptoms\": [\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptom\": \"Chronic macular rash. 
Itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"affected_area\": \"Face, Head, Hair, Beard, Eyebrows, Nares\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"affected_area\": \"Face & Head\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m }\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m ],\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"current_meds\": [\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"medication\": \"OTC steroid 
cream\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"response\": \"Moderate\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"response\": \"Moderate response\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m }\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m ]\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", "│ │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m}\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", @@ -667,15 +779,15 @@ "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'age': 49,\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptoms': [\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m {\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - "│ │ 
\u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptom': 'Chronic macular rash. Itchy, flaky, slightly scaly',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptom': 'Chronic macular rash, itchy, flaky, slightly scaly',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'affected_area': FieldReAsk(\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m incorrect_value='Face, Head, Hair, Beard, Eyebrows, Nares',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m incorrect_value='Face & Head',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m fail_results=[\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m FailResult(\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m outcome='fail',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 
\u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m metadata=None,\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m error_message=\"Value Face, Head, Hair, Beard, Eyebrows, Nares is not in choices\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m['Head', 'Face', 'Neck', 'Chest'].\",\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m error_message=\"Value Face & Head is not in choices ['Head', 'Face', 'Neck', \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m'Chest'].\",\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m fix_value=None\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m )\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m ],\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", @@ -684,7 
+796,7 @@ "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m }\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m ],\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'current_meds': [\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m {'medication': 'OTC steroid cream', 'response': 'Moderate'}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m {'medication': 'OTC steroid cream', 'response': 'Moderate response'}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m ]\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", "│ │ \u001b[48;2;240;255;240m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", @@ -699,12 +811,11 @@ " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"age\": 49,\u001b[0m\u001b[48;2;240;248;255m 
\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"symptoms\": [\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m {\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"symptom\": \"Chronic macular rash. Itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"affected_area\": {\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"incorrect_value\": \"Face, Head, Hair, Beard, Eyebrows, Nares\",\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"incorrect_value\": \"Face & Head\",\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"error_messages\": 
[\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"Value Face, Head, Hair, Beard, Eyebrows, Nares is not in choices ['Head', 'Face', 'Neck', \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m'Chest'].\"\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"Value Face & Head is not in choices ['Head', 'Face', 'Neck', 'Chest'].\"\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m ]\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m }\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m }\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", @@ -712,7 +823,7 @@ " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"current_meds\": [\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m {\u001b[0m\u001b[48;2;240;248;255m 
\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"medication\": \"OTC steroid cream\",\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"response\": \"Moderate\"\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \"response\": \"Moderate response\"\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m }\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m ]\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m}\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", @@ -724,12 +835,12 @@ " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m 
\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", - " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", + " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m 
\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", @@ -748,42 +859,23 @@ " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255menter `null`.\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m│\u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m \u001b[0m\u001b[48;2;240;248;255m│\u001b[0m │\n", " │ \u001b[48;2;240;248;255m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m╭─\u001b[0m\u001b[48;2;255;240;242m────────────────────────────────────────────\u001b[0m\u001b[48;2;255;240;242m Instructions \u001b[0m\u001b[48;2;255;240;242m─────────────────────────────────────────────\u001b[0m\u001b[48;2;255;240;242m─╮\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242mYou are a helpful assistant only capable of communicating with valid JSON, and no other text.\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242mONLY return a valid JSON object (no other text is necessary), 
where the key of the field in JSON is the\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m`name` attribute of the corresponding XML, and the value is of the type specified by the corresponding \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242mXML's tag. The JSON MUST conform to the XML format, including any types and format requests e.g. \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242mrequests for lists, objects and specific types. Be correct and concise. If you are unsure anywhere, \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242menter `null`.\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242mHere are examples of simple (XML, JSON) pairs that show the expected behavior:\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m- `` => `{'foo': 'example one'}`\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ 
\u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m- `` => `{\"bar\": ['STRING ONE', 'STRING TWO', \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242metc.]}`\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m- `` => `{'baz': {'foo': 'Some String', 'index': 1}}`\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m│\u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m \u001b[0m\u001b[48;2;255;240;242m│\u001b[0m │\n", - " │ \u001b[48;2;255;240;242m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", " │ \u001b[48;2;231;223;235m╭─\u001b[0m\u001b[48;2;231;223;235m───────────────────────────────────────────\u001b[0m\u001b[48;2;231;223;235m Message History \u001b[0m\u001b[48;2;231;223;235m───────────────────────────────────────────\u001b[0m\u001b[48;2;231;223;235m─╮\u001b[0m │\n", " │ \u001b[48;2;231;223;235m│\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235mNo message history.\u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m \u001b[0m\u001b[48;2;231;223;235m│\u001b[0m │\n", " │ \u001b[48;2;231;223;235m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", " │ \u001b[48;2;245;245;220m╭─\u001b[0m\u001b[48;2;245;245;220m───────────────────────────────────────────\u001b[0m\u001b[48;2;245;245;220m Raw LLM Output 
\u001b[0m\u001b[48;2;245;245;220m────────────────────────────────────────────\u001b[0m\u001b[48;2;245;245;220m─╮\u001b[0m │\n", - " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m{\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"gender\": \"Male\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"age\": 49,\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptoms\": [\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptom\": \"Chronic macular rash. 
Itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"affected_area\": \"Head\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"error_messages\": []\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"symptom\": \"Chronic macular rash, itchy, flaky, slightly scaly\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"affected_area\": \"Face\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m }\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m ],\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"current_meds\": [\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m {\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m 
\u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"medication\": \"OTC steroid cream\",\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", - " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"response\": \"Moderate\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", + " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \"response\": \"Moderate response\"\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m }\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m ]\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", " │ \u001b[48;2;245;245;220m│\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m}\u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m \u001b[0m\u001b[48;2;245;245;220m│\u001b[0m │\n", @@ -794,32 +886,32 @@ " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'age': 49,\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptoms': [\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m 
\u001b[0m\u001b[48;2;240;255;240m {\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptom': 'Chronic macular rash. Itchy, flaky, slightly scaly',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'affected_area': 'Head'\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'symptom': 'Chronic macular rash, itchy, flaky, slightly scaly',\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'affected_area': 'Face'\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m }\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m ],\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m 'current_meds': [\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", - " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m {'medication': 'OTC steroid cream', 
'response': 'Moderate'}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", + " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m {'medication': 'OTC steroid cream', 'response': 'Moderate response'}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m ]\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m│\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m}\u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m \u001b[0m\u001b[48;2;240;255;240m│\u001b[0m │\n", " │ \u001b[48;2;240;255;240m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m │\n", " ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n" ] }, - "execution_count": 23, + "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "guard.guard_state.most_recent_call.tree" + "guard.history.last.tree" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "BdtU-Q5tlXUs" + "id": "mk5r8xYSaXaC" }, "outputs": [], "source": []