Engines (#120)
* new engines with litellm
vinid authored Oct 6, 2024
1 parent 495801f commit 5061dbe
Showing 12 changed files with 975 additions and 5 deletions.
README.md: 50 changes (47 additions, 3 deletions)
@@ -29,6 +29,51 @@ This API is similar to the Pytorch API, making it simple to adapt to your use cases.

![Analogy with Torch](assets/analogy.png)


### Updates:

**29th Sept 2024**:

We are introducing a new engine based on [litellm](https://github.com/BerriAI/litellm). It should let
you use any model you like, as long as it is supported by litellm. This means that
**Bedrock, Together, Gemini, and even more** are now all supported by TextGrad!

This should be seen as experimental, but we plan to deprecate the old engines in the future.

In addition, the new engines make it easy to enable and disable caching.
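
For example, here is a minimal sketch of turning the cache on or off; it only uses the `LiteLLMEngine` class and `cache` flag that appear in the examples below:

```python
from textgrad.engine_experimental.litellm import LiteLLMEngine

# cache=True: repeated identical requests should be served from the cache
cached_engine = LiteLLMEngine("gpt-4o", cache=True)

# cache=False: every call goes to the provider
uncached_engine = LiteLLMEngine("gpt-4o", cache=False)
```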

We are still in the process of testing these new engines. If you run into any issues, please let us know!

The new litellm engines can be loaded with the following code:

```python
from textgrad import set_backward_engine
from textgrad.engine import get_engine

engine = get_engine("experimental:gpt-4o", cache=False)

# the same "experimental:" model string works for setting the backward engine
set_backward_engine("experimental:gpt-4o", cache=False)
```
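
Because these engines route requests through litellm, models from other providers should in principle be loadable with their litellm model strings. The strings below are illustrative assumptions (following litellm's `provider/model` naming and assuming the part after `experimental:` is passed to litellm unchanged), not names documented by this commit:

```python
from textgrad.engine import get_engine

# hypothetical examples; check litellm's docs for the exact model names your account supports
gemini_engine = get_engine("experimental:gemini/gemini-1.5-flash", cache=False)
together_engine = get_engine("experimental:together_ai/meta-llama/Llama-3-70b-chat-hf", cache=False)
```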

Be sure to set the relevant environment variables for the new engines!
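
For example, `OPENAI_API_KEY` is the variable used in the new example notebooks; the commented-out names below are the usual conventions for other providers and are assumptions here:

```python
import os

# OpenAI key, as used in the example notebooks in this commit
os.environ["OPENAI_API_KEY"] = "sk-..."

# other providers use their own variables, for example:
# os.environ["ANTHROPIC_API_KEY"] = "..."
# os.environ["GEMINI_API_KEY"] = "..."
```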

An example of a forward pass:
```python
import httpx
from textgrad.engine_experimental.litellm import LiteLLMEngine

# text-only forward pass
LiteLLMEngine("gpt-4o", cache=True).generate(content="hello, what's 3+4", system_prompt="you are an assistant")

# multimodal forward pass: pass raw image bytes alongside the text prompt
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_data = httpx.get(image_url).content

LiteLLMEngine("gpt-4o", cache=True).generate(content=[image_data, "what is this my boy"], system_prompt="you are an assistant")
```

In the examples folder you will find two new notebooks that show how to use the new engines.


## QuickStart
If you know PyTorch, you know 80% of TextGrad.
Let's walk through the key components with a simple example. Say we want to use GPT-4o to solve a simple
@@ -96,9 +141,6 @@ answer
> :white_check_mark: **answer: It will still take 1 hour to dry 30 shirts under the sun,**
> **assuming they are all laid out properly to receive equal sunlight.**



We have many more examples of how TextGrad can optimize all kinds of variables -- code, solutions to problems, molecules, prompts, and all that!

### Tutorials
@@ -119,6 +161,8 @@ you need an OpenAI/Anthropic key to run the LLMs).

</div>



### Installation

You can install TextGrad using any of the following methods.
@@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "2661032c-2d9b-43f5-b074-28c119b57a14",
"metadata": {},
"outputs": [],
"source": [
"import textgrad\n",
"import os\n",
"from textgrad.engine_experimental.openai import OpenAIEngine"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e74c489-c47e-413c-adae-cd1201f6f94f",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"SOMETHING\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50c02746-dd33-4cb0-896e-4771e4b76ed7",
"metadata": {},
"outputs": [],
"source": [
"OpenAIEngine(\"gpt-4o-mini\", cache=True).generate(content=\"hello, what's 3+4\", system_prompt=\"you are an assistant\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0ac186e-5c34-4115-aeda-bbd301be2667",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "b09f7e36-ae4f-4746-8548-c2c189827435",
"metadata": {},
"outputs": [],
"source": [
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ccce563-3d11-4d20-9c72-05d43cce4f6c",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "3053630a-dbf3-4d3e-b553-5c8ea73e2ccd",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "58db7bb4-7f0f-4517-bba2-60a51b85908b",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ee586ac-473e-4807-b66d-8524d08dc236",
"metadata": {},
"outputs": [],
"source": [
"import httpx\n",
"from textgrad.engine_experimental.litellm import LiteLLMEngine\n",
"\n",
"LiteLLMEngine(\"gpt-4o\", cache=True).generate(content=\"hello, what's 3+4\", system_prompt=\"you are an assistant\")\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg\"\n",
"image_data = httpx.get(image_url).content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5b1f1d5-8971-4dee-9958-c708ba807921",
"metadata": {},
"outputs": [],
"source": [
"LiteLLMEngine(\"gpt-4o\", cache=True).generate(content=[image_data, \"what is this my boy\"], system_prompt=\"you are an assistant\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43d9b703-4488-4222-a7fb-773293c13514",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}