
[AIConfig Cookbook] Updated Readme for Basic Prompt Routing Demo #179

Merged: 3 commits, Nov 12, 2023 (showing changes from 2 commits).
18 changes: 11 additions & 7 deletions aiconfig-docs/docs/introduction/getting-started.md
@@ -8,9 +8,7 @@ import constants from '@site/core/tabConstants';

# Getting Started

:::tip
Please read [AIConfig Basics](/docs/introduction/basics) to understand the motivation behind storing prompts and model parameters as configs.
:::

## Installation

@@ -62,11 +60,13 @@ Make sure to specify the API keys (such as `OPENAI_API_KEY`) in your environment
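For example, in a bash-style shell (the key value below is a placeholder; substitute your own):

```shell
# Make the OpenAI API key available to aiconfig in the current shell.
# Replace the placeholder value with your actual key.
export OPENAI_API_KEY="sk-your-key-here"
```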

## Quickstart

In this quickstart, you will create a customizable NYC travel itinerary using `aiconfig`.
In this quickstart, you will create a customizable NYC travel itinerary using `aiconfig`. We will start with a pre-built AIConfig that we generated from this [AI Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9).

### 1. Download the AIConfig.

### 1. Download the AIConfig - `travel.aiconfig.json`.
This AIConfig, `travel.aiconfig.json`, contains a prompt chain that gets a list of travel activities from an LLM and then customizes the activities based on user preferences (defined as parameters of the prompt). It also specifies the models and model parameters for the LLMs.

This AIConfig contains a prompt chain to get a list of travel activities from an LLM and then customize the activities based on user preferences (defined as parameters of the prompt). It also contains the specific models and model parameters for the LLMs.
Download AIConfig [here](https://github.com/lastmile-ai/aiconfig/blob/main/cookbook/Getting-Started/travel.aiconfig.json).
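Prompts in the AIConfig reference user preferences through handlebars-style `{{placeholders}}`. As an illustration of what "parameters of the prompt" means, here is a minimal pure-Python stand-in for that substitution step; the parameter names are hypothetical, and AIConfig performs this resolution internally when you pass `params` to `run`:

```python
import re

def resolve(template: str, params: dict) -> str:
    """Substitute {{param}} placeholders with values, mimicking how
    prompt parameters are filled in at run time (simplified stand-in)."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(params[m.group(1)]), template)

# Hypothetical prompt template with two parameters.
prompt = "Create a {{city}} itinerary from these activities: {{activities}}"
print(resolve(prompt, {"city": "New York", "activities": "museums, parks"}))
```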

<details>
<summary>`travel.aiconfig.json`</summary>
@@ -147,7 +147,7 @@ from aiconfig import AIConfigRuntime, InferenceOptions
config = AIConfigRuntime.load('travel.aiconfig.json')

# Run a single prompt
await config.run("get_activities", params=None)
await config.run("get_activities")
```

</TabItem>
@@ -195,7 +195,7 @@ config = AIConfigRuntime.load('travel.aiconfig.json')

# Run a single prompt (with streaming)
inference_options = InferenceOptions(stream=True)
await config.run("get_activities", params=None, options=inference_options)
await config.run("get_activities", options=inference_options)
```

</TabItem>
@@ -326,3 +326,7 @@ We are working on a local editor that you can run yourself. For now, please use
```

```

### Code for Getting Started

Python and TypeScript implementations are available [here](https://github.com/lastmile-ai/aiconfig/tree/ad38040ec3d9f0273e006464e01e02b06f2809e9/cookbook/Getting-Started).
15 changes: 11 additions & 4 deletions cookbook/Basic-Prompt-Routing/README.md
@@ -1,12 +1,19 @@
# Basic Prompting Routing Demo - AI Teaching Assistant
# Basic Prompt Routing - AI Teaching Assistant

This demo shows how a simple use case of prompt routing with AI config.
This demo shows a simple use case of prompt routing with AIConfig. We use Streamlit to host the app so you can interact with the assistant!

![image](https://github-production-user-asset-6210df.s3.amazonaws.com/81494782/282203473-f779feb6-339d-4331-9aed-1761b845100a.png)

### How does it work?

The user asks a question. The LLM classifies the topic as math, physics, or general. Based on the topic, the LLM selects a different "assistant" to respond. These assistants have different system prompts and respond with varying introductions and styles of response.
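In the demo the classification step is itself an LLM call; the pure-Python sketch below substitutes a keyword check so the routing control flow is visible on its own. The assistant system prompts and keyword lists are illustrative, not the demo's actual text:

```python
# Map each topic to a topic-specific system prompt (illustrative text).
ASSISTANTS = {
    "math": "You are a math teaching assistant. Walk through each step.",
    "physics": "You are a physics teaching assistant. Use real-world examples.",
    "general": "You are a friendly general teaching assistant.",
}

def route(question: str) -> str:
    """Pick an assistant topic for the question.

    Stand-in for the LLM router: a keyword check instead of a model call.
    """
    q = question.lower()
    if any(w in q for w in ("integral", "derivative", "equation", "algebra")):
        return "math"
    if any(w in q for w in ("force", "gravity", "momentum", "energy")):
        return "physics"
    return "general"

def system_prompt_for(question: str) -> str:
    """Dispatch to the system prompt of the routed assistant."""
    return ASSISTANTS[route(question)]
```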

### Setup with AIConfig

1. Create an AIConfig for the prompts, models, and model parameters to be used for the different assistants: `create_config.py`.
2. Build assistant app to handle prompt routing logic among the prompts (uses AIConfig): `assistant_app.py`.
- `create_config.py` - creates an AIConfig with the prompts, models, and model parameters used by the different assistants.
- `assistant_aiconfig.json` - generated automatically by running `create_config.py`.
- `assistant_app.py` - builds the app that handles prompt routing among the prompts (uses AIConfig); Streamlit provides the frontend.
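To make the file relationships concrete, here is an illustrative sketch of the shape of config that `create_config.py` might produce: one prompt per assistant topic, each with its own system prompt and model settings. The field names loosely follow the AIConfig JSON format but are assumptions, not the exact schema, and the model name and prompt texts are placeholders:

```python
import json

def build_assistant_config() -> dict:
    """Build a dict shaped like assistant_aiconfig.json (illustrative)."""
    topics = {
        "math": "You are a math teaching assistant. Show your work step by step.",
        "physics": "You are a physics teaching assistant. Use real-world examples.",
        "general": "You are a friendly general teaching assistant.",
    }
    return {
        "name": "assistant_config",
        "prompts": [
            {
                "name": f"{topic}_assistant",
                "input": "{{question}}",
                "metadata": {
                    "model": {
                        "name": "gpt-3.5-turbo",
                        "settings": {"system_prompt": system},
                    }
                },
            }
            for topic, system in topics.items()
        ],
    }

# Serialize the config the way create_config.py would before
# writing assistant_aiconfig.json to disk.
config_json = json.dumps(build_assistant_config(), indent=2)
```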

### Run with Streamlit

`streamlit run assistant_app.py`