
Commit

write_to_file, add readme and banner
lalalune committed Jul 22, 2023
1 parent ab40f8f commit e589689
Showing 4 changed files with 76 additions and 191 deletions.
249 changes: 67 additions & 182 deletions README.md
@@ -1,6 +1,6 @@
# agentlogger <a href="https://discord.gg/qetWd7J9De"><img style="float: right" src="https://dcbadge.vercel.app/api/server/qetWd7J9De" alt=""></a>

Easy text completion and function calling using the OpenAI API. Also includes useful utilities for counting tokens, composing prompts and trimming them to fit within the token limit.
Simple, colorful terminal logs and logfiles.

<img src="resources/image.jpg">

@@ -10,227 +10,112 @@
pip install agentlogger
```

# Quickstart
## Quickstart

```python
from agentlogger import openai_function_call, openai_text_call, compose_function, compose_prompt

# Compose a function object
test_function = compose_function(
    name="write_song",
    description="Write a song about AI",
    properties={
        "lyrics": {
            "type": "string",
            "description": "The lyrics for the song",
        }
    },
    required_properties=["lyrics"],
)

# Call the function
response = openai_function_call(text="Write a song about AI", functions=[test_function], function_call="write_song")

# Print the response
print(response["arguments"]["lyrics"])
```

# Basic Usage

## Compose Prompt

You can compose a prompt using {{handlebars}} syntax
Here is a quick overview of how you can use Agent Logger in your project:

```python
test_prompt = "Don't forget your {{object}}"
test_dict = {"object": "towel"}
prompt = compose_prompt(test_prompt, test_dict)
# prompt = "Don't forget your towel"
```

## Text Completion

Send text, get a response as a text string

```python
from agentlogger import openai_text_call
response = openai_text_call("Hello, how are you?")
# response["text"] = "As an AI language model, I don't have feelings, but..."
```
from agentlogger import log, print_header, write_to_file

## Compose a Function
# Print a styled log message to the console
log('Test message', type='info')
# ╭─ (info) agentlogger ─╮
# │ Test message │
# ╰──────────────────────╯

Compose a function to pass into the function calling API
# Display a big styled header in the console
print_header('Test header', font='slant', color='blue')
# ______ __ __ __
# /_ __/__ _____/ /_ / /_ ___ ____ _____/ /__ _____
# / / / _ \/ ___/ __/ / __ \/ _ \/ __ `/ __ / _ \/ ___/
# / / / __(__ ) /_ / / / / __/ /_/ / /_/ / __/ /
# /_/ \___/____/\__/ /_/ /_/\___/\__,_/\__,_/\___/_/

```python
from agentlogger import compose_function

test_function = compose_function(
    name="write_song",
    description="Write a song about AI",
    properties={
        "lyrics": {
            "type": "string",
            "description": "The lyrics for the song",
        }
    },
    required_properties=["lyrics"],
)
```

## Function Completion

Send text and a list of functions and get a response as a function call
# Write a log message to a file
write_to_file('More log content', source='tests.py', type='test_write_to_file')
# ======================== tests.py: test_write_to_file ========================

```python
from agentlogger import openai_function_call, compose_function
# More log content

# NOTE: test_function is a function object created using compose_function in the example above...
# ================================================================================

response = openai_function_call(text="Write a song about AI", functions=[test_function], function_call="write_song")
# Response structure is { "text": string, "function_name": string, "arguments": dict }
print(response["arguments"]["lyrics"])
```

# Advanced Usage
## Documentation

### `compose_function(name, description, properties, required_properties)`
Here is an overview of the available functions:

Composes a function object for the OpenAI API.
### `log(content, source=None, title="agentlogger", type="info", color="blue", type_colors=DEFAULT_TYPE_COLORS, expand=False, panel=True, log=True)`

```python
summarization_function = compose_function(
    name="summarize_text",
    description="Summarize the text. Include the topic, subtopics.",
    properties={
        "summary": {
            "type": "string",
            "description": "Detailed summary of the text.",
        },
    },
    required_properties=["summary"],
)
```
This function creates an event with the provided metadata and saves it to the event log file.

### `openai_text_call(text, model_failure_retries=5, model=None, chunk_length=DEFAULT_CHUNK_LENGTH, api_key=None)`
Arguments:

Sends text to the OpenAI API and returns a text response.
- `content`: Content of the event.
- `source`: Source of the event, e.g. a function name. Defaults to None.
- `title`: Title of the event. Defaults to "agentlogger".
- `type`: Type of the event. Defaults to "info".
- `type_colors`: Dictionary with event types as keys and colors as values. Defaults to a predefined dictionary.
- `expand`: Determines if the output should be within a Panel. Defaults to False.
- `panel`: Determines if the output should be displayed inside a bordered box panel. Defaults to True.
- `log`: Determines if the output should be logged. Defaults to True.
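
A minimal usage sketch based on the signature above (the message text and `source` value are illustrative):

```python
from agentlogger import log

# Log a warning from a named source without the bordered panel
log(
    "Cache size is close to the configured limit",
    source="cache.py",
    type="warning",
    panel=False,
)
```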

```python
response = openai_text_call(
    "Hello, how are you?",
    model_failure_retries=3,
    model='gpt-3.5-turbo',
    chunk_length=1024,
    api_key='your_openai_api_key'
)
```
### `print_header(text="agentlogger", font="slant", color="yellow", width=console.width, justify="left")`

The response object looks like this:

```json
{
    "text": "string",
    "usage": {
        "prompt_tokens": "number",
        "completion_tokens": "number",
        "total_tokens": "number"
    },
    "error": "string|None",
    "finish_reason": "string"
}
```
This function displays a header with the provided text and color.

### `openai_function_call(text, functions=None, model_failure_retries=5, function_call=None, function_failure_retries=10, chunk_length=DEFAULT_CHUNK_LENGTH, model=None, api_key=None)`
Arguments:

Sends text and a list of functions to the OpenAI API and returns optional text and a function call. The function call is validated against the functions array.
- `text`: Text to be displayed in the header. Defaults to "agentlogger".
- `font`: Font to be used in the header. Defaults to "slant".
- `color`: Color to be used in the header. Defaults to "yellow".
- `width`: Width of the console. Defaults to the console width.
- `justify`: Justification of the text in the header. Defaults to "left".
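
A quick sketch using font and color values that also appear in this repo's tests (the header text itself is just an example):

```python
from agentlogger import print_header

# Render a banner-style ASCII header in green
print_header("My Project", font="banner", color="green")
```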

```python
function = {
    'name': 'function1',
    'parameters': {'param1': 'value1'}
}
### `write_to_file(content, source=None, type=None, filename="events.log", separator_width=80)`

response = openai_function_call("Call the function.", functions=[function])
```
This function writes content to the event log file.

The response object looks like this:

```json
{
    "text": "string",
    "function_name": "string",
    "arguments": "dict",
    "usage": {
        "prompt_tokens": "number",
        "completion_tokens": "number",
        "total_tokens": "number"
    },
    "finish_reason": "string",
    "error": "string|None"
}
```

### `trim_prompt(text, max_tokens=DEFAULT_CHUNK_LENGTH, model=DEFAULT_TEXT_MODEL, preserve_top=True)`

Trim the given text to a maximum number of tokens.
Arguments:

```python
trimmed_text = trim_prompt("This is a test.", 3, preserve_top=True)
```
- `content`: Content to be written in the log file.
- `source`: Source of the event, e.g. a function name. Defaults to None.
- `type`: Type of the event. Defaults to None.
- `filename`: Name of the file where the content will be written. Defaults to "events.log".
- `separator_width`: Width of the separator. Defaults to 80.
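
A short sketch showing a write to a custom log file (the filename and message are illustrative):

```python
from agentlogger import write_to_file

# Append an entry to jobs.log instead of the default events.log
write_to_file(
    "Nightly job finished",
    source="scheduler.py",
    type="info",
    filename="jobs.log",
)
```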

### `chunk_prompt(prompt, chunk_length=DEFAULT_CHUNK_LENGTH)`
## Examples

Split the given prompt into chunks where each chunk has a maximum number of tokens.
Here are a few examples of how you can use this library:

```python
prompt_chunks = chunk_prompt("This is a test. I am writing a function.", 4)
```
# Log an info message to the console
log('Application started', type='info')

### `count_tokens(prompt, model=DEFAULT_TEXT_MODEL)`
# Log a warning message to the console
log('Low on disk space', type='warning')

Count the number of tokens in a string.
# Log an error message to the console without a panel
log('Failed to connect to the database', type='error', panel=False)

```python
num_tokens = count_tokens("This is a test.")
```

### `get_tokens(prompt, model=DEFAULT_TEXT_MODEL)`
# Display a big styled header
print_header('Welcome to My Application')

Returns a list of tokens in a string.

```python
tokens = get_tokens("This is a test.")
# Write a log message to a file
write_to_file('User logged in', source='auth.py', type='info')
```

### `compose_prompt(prompt_template, parameters)`
## Tests

Composes a prompt using a template and parameters. Parameter keys are enclosed in double curly brackets and replaced with parameter values.

```python
prompt = compose_prompt("Hello {{name}}!", {"name": "John"})
```

## A note about models

You can pass in a model using the `model` parameter of either `openai_function_call` or `openai_text_call`. If you do not pass in a model, the default model will be used. You can also override the default by setting the `OPENAI_MODEL` environment variable.

The default model is gpt-3.5-turbo-0613.
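
For example, a model can be selected per call (assuming you have access to the model you name):

```python
from agentlogger import openai_text_call

# Pass the model explicitly instead of relying on the default or the OPENAI_MODEL variable
response = openai_text_call("Summarize this paragraph.", model="gpt-3.5-turbo-0613")
```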

## A note about API keys

You can pass in an API key using the `api_key` parameter of either openai_function_call or openai_text_call. If you do not pass in an API key, the `OPENAI_API_KEY` environment variable will be checked.
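
A sketch of both options (the key value is a placeholder, and the environment variable is assumed to be read at call time):

```python
import os
from agentlogger import openai_text_call

# Option 1: pass the key directly
response = openai_text_call("Hello!", api_key="sk-...")

# Option 2: set the environment variable before calling
os.environ["OPENAI_API_KEY"] = "sk-..."
response = openai_text_call("Hello!")
```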

# Publishing
You can run tests using pytest:

```bash
bash publish.sh --version=<version> --username=<pypi_username> --password=<pypi_password>
pytest test.py
```

# Contributions Welcome

If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.

# Questions, Comments, Concerns
If you like this library and want to contribute in any way, please feel free to submit a PR and it will be reviewed. The goal of this project is simplicity and accessibility using plain language and sane defaults, so please keep that in mind when submitting a PR.

If you have any questions, please feel free to reach out to me on [Twitter](https://twitter.com/spatialweeb) or Discord @new.moon
<img src="resources/youcreatethefuture.jpg">
4 changes: 2 additions & 2 deletions agentlogger/main.py
@@ -86,12 +86,12 @@ def print_header(
print(colored(ascii_logo, color))


def write_to_log(
def write_to_file(
content, source=None, type=None, filename="events.log", separator_width=80
):
"""
Writes content to the event log file.
Arguments:
- content: String to be written in the log file
- source: Source of the event, e.g. a function name or file
14 changes: 7 additions & 7 deletions agentlogger/tests.py
@@ -2,7 +2,7 @@
from agentlogger import (
log,
print_header,
write_to_log,
write_to_file,
) # replace 'your_module' with the actual module name


@@ -22,9 +22,9 @@ def test_print_header():
print_header("Test header", font="banner", color="green")


def test_write_to_log():
def test_write_to_file():
filename = "test_events.log"
write_to_log(
write_to_file(
"Test log content", source="test_source", type="info", filename=filename
)

@@ -36,9 +36,9 @@ def test_write_to_log():
assert not os.path.exists(filename)


def test_write_to_log_no_type_no_source():
def test_write_to_file_no_type_no_source():
filename = "test_events.log"
write_to_log("Test log content", type="log", filename=filename)
write_to_file("Test log content", type="log", filename=filename)

# assert that the file now exists
assert os.path.exists(filename)
@@ -48,10 +48,10 @@ def test_write_to_log_no_type_no_source():
print()
print(f.read())

write_to_log(
write_to_file(
"More log content",
source="tests.py",
type="test_write_to_log_no_type_no_source",
type="test_write_to_file_no_type_no_source",
filename=filename,
)

Binary file added resources/youcreatethefuture.jpg
