Update documentation and small refactor of exceptions.
blaney83 committed May 22, 2024
1 parent 6fc5b86 commit de893cb
Showing 11 changed files with 129 additions and 41 deletions.
34 changes: 23 additions & 11 deletions README.md
@@ -4,7 +4,7 @@ An enterprise-grade, Generative AI framework and utilities for AI-enabled applic
Primary objectives include:
* A login utility for API-based LLM services that integrates with Enterprise Auth Service (EAS). Simplified generation of the JWT token, which is then passed to the modeling service provider.
* Support for configuration-based outbound proxy support at the model level to allow integration with enterprise-level security requirements.
* A set of tools to provide greater control over generated prompts. This is done by adding hooks to the existing langchain packages.

## Installation
```bash
@@ -73,7 +73,7 @@ logger = PrintLogger()
chain = prompt | logger.log() | model() | logger.log()
```
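For context (an illustrative sketch, not shown in this diff): the composed chain is a standard LCEL runnable. Assuming the elided `prompt` was built with an input variable named `topic` (a hypothetical name; the diff does not show the prompt definition), it can be invoked directly:

```python
# `topic` is an assumed input variable; the prompt definition is elided from this diff.
result = chain.invoke({"topic": "science fiction"})
print(result)
```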

There is also a portable solution for "regular" prompt template-based requests: it requires no direct import of a model provider package (e.g. `openai`), and prompts can be validated before being sent to the LLM.

```python
from connectchain.orchestrators import PortableOrchestrator
@@ -114,16 +114,18 @@ llm = AzureOpenAI(
chain = LLMChain(llm=llm, prompt=prompt)
```

### `connectchain.prompts`: A package that provides greater control over generated prompts before they are passed to the LLM by exposing an entrypoint for sanitizer implementations.

```python
from connectchain.prompts import ValidPromptTemplate

from connectchain.utils.exceptions import OperationNotPermittedException

def my_sanitizer(query: str) -> str:
"""IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk."""
pattern = r'BADWORD'

if re.search(pattern, query):
@@ -147,14 +149,19 @@ print(output)

```
### `connectchain.chains`: An extension of the langchain chains.
We add hooks that provide an entrypoint for sanitizer implementations, giving greater control over the code that is executed.

```python
from connectchain.chains import ValidLLMChain

def my_sanitizer(query: str) -> str:
"""IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk."""
# define your own logic here.
# for example, can call an API to verify the content of the code
pass

chain = ValidLLMChain(llm=llm, prompt=prompt, output_sanitizer=my_sanitizer)
@@ -170,14 +177,19 @@ except OperationNotPermittedException as e:
```

### `connectchain.tools`: An extension of the langchain tools.
We add hooks that provide an entrypoint for sanitizer implementations, giving greater control over the code that is executed.

```python
from connectchain.tools import ValidPythonREPLTool

def my_sanitizer(query: str) -> str:
"""IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk."""
# define your own logic here.
# for example, can call an API to verify the content of the code
pass

agent_executor = create_python_agent(
18 changes: 18 additions & 0 deletions connectchain/__init__.py
@@ -10,3 +10,21 @@
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""Init file for the connectchain package"""
from .utils.exceptions import ConnectChainNoAccessException
from langchain.chains.api.base import APIChain

# Disable langchain APIChain
def override(self, *args, **kwargs):
raise ConnectChainNoAccessException("Operation not permitted")

APIChain.__init__ = override
APIChain.from_llm_and_api_docs = override
APIChain.run = override
APIChain.arun = override
APIChain.invoke = override
APIChain.ainvoke = override
APIChain.apply = override
APIChain.batch = override
APIChain.abatch = override
APIChain._call = override
APIChain._acall = override
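A minimal sketch (illustrative, not part of the diff above) of what this override means for downstream code: once `connectchain` is imported, any attempt to construct or invoke an `APIChain` raises `ConnectChainNoAccessException`.

```python
import connectchain  # importing the package applies the APIChain overrides above
from connectchain.utils.exceptions import ConnectChainNoAccessException
from langchain.chains.api.base import APIChain

try:
    APIChain()  # construction itself is blocked by the overridden __init__
except ConnectChainNoAccessException as exc:
    print(exc)  # "Operation not permitted"
```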
19 changes: 15 additions & 4 deletions connectchain/examples/langchain_chains.py
@@ -11,19 +11,30 @@
# the License.
"""
Example usage for the ValidLLMChain class.
IMPORTANT: This is a simplified example designed to showcase concepts and should not be used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""

from dotenv import load_dotenv, find_dotenv
from langchain.prompts import PromptTemplate
from connectchain.chains import ValidLLMChain
from connectchain.lcel import model

from connectchain.utils.exceptions import OperationNotPermittedException


def my_sanitizer(query: str) -> str:
"""Sample sanitizer"""
"""Sample sanitizer
IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
if query == "BADWORD":
raise OperationNotPermittedException(f"Illegal execution detected: {query}")
return query
24 changes: 18 additions & 6 deletions connectchain/examples/langchain_prompts.py
@@ -9,23 +9,35 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""Example for using a custom sanitizer"""
"""Example for using a custom sanitizer.
IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
import re

from dotenv import load_dotenv, find_dotenv
from langchain.chains import LLMChain
from connectchain.prompts import ValidPromptTemplate
from connectchain.lcel import model
from connectchain.utils.exceptions import OperationNotPermittedException


if __name__ == '__main__':
load_dotenv(find_dotenv())

    def example_sanitizer(query: str) -> str:
        """Sample sanitizer
        IMPORTANT: This is a simplified example designed to showcase concepts and should not be used
        as a reference for production code. The features are experimental and may not be suitable for
        use in sensitive environments or without additional safeguards and testing.
        Any use of this code is at your own risk.
        """
pattern = r'BADWORD'

if re.search(pattern, query):
@@ -36,7 +48,7 @@ def my_sanitizer(query: str) -> str:

prompt_template = "Tell me about {adjective} books"
prompt = ValidPromptTemplate(
        output_sanitizer=example_sanitizer,
input_variables=["adjective"],
template=prompt_template
)
27 changes: 17 additions & 10 deletions connectchain/examples/langchain_tools.py
@@ -9,7 +9,14 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""Example for using a code generation tool"""
"""Example for using a code generation tool.
IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
#pylint: disable=no-name-in-module
import re
from dotenv import load_dotenv, find_dotenv
@@ -18,7 +25,7 @@
from langchain.chat_models import ChatOpenAI as AzureOpenAI
from connectchain.tools import ValidPythonREPLTool
from connectchain.utils import get_token_from_env, Config

from connectchain.utils.exceptions import OperationNotPermittedException

if __name__ == '__main__':
load_dotenv(find_dotenv())
@@ -35,15 +42,15 @@
"api_type": "azure"
})

"""Example for using a custom sanitizer"""


class OperationNotPermittedException(Exception):
"""Operation Not Permitted Exception"""


def simple_sanitizer(query: str) -> str:
"""Sample sanitizer"""
"""Sample sanitizer
IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
query = re.sub(r"^(\s|`)*(?i:python)?\s*", "", query)
query = re.sub(r"(\s|`)*$", "", query)

9 changes: 8 additions & 1 deletion connectchain/examples/langchain_utils.py
@@ -9,7 +9,14 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""Example of using the langchain package to create a language chain."""
"""Example of using the langchain package to create a language chain.
IMPORTANT: This example is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
# pylint: disable=no-name-in-module
from dotenv import load_dotenv, find_dotenv
from langchain.agents import AgentType
2 changes: 1 addition & 1 deletion connectchain/test/test_valid_llm_chain.py
@@ -16,7 +16,7 @@
from langchain.llms.openai import OpenAIChat
from langchain.prompts import PromptTemplate
from connectchain.chains import ValidLLMChain
from connectchain.utils.exceptions import OperationNotPermittedException


def my_sanitizer(query: str) -> str:
6 changes: 1 addition & 5 deletions connectchain/test/test_valid_prompt_template.py
@@ -15,11 +15,7 @@
from unittest import TestCase
import re
from connectchain.prompts import ValidPromptTemplate


from connectchain.utils.exceptions import OperationNotPermittedException

class TestValidPromptTemplate(TestCase):
"""Test Class for ValidPromptTemplate"""
11 changes: 9 additions & 2 deletions connectchain/tools/validated_python_repl.py
@@ -9,7 +9,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""A version of PythonREPLTool that sanitizes the input before running it in the REPL"""
"""A version of PythonREPLTool that sanitizes the input before running it in the REPL."""
#pylint: disable=no-name-in-module too-few-public-methods unused-argument
import re
from typing import Any, Callable, Optional
@@ -18,7 +18,7 @@


def default_sanitize_input(query: str) -> str:
"""Sanitize the input by removing leading and trailing spaces and `python` keyword"""
"""Example sanitizer; modifies the input by removing leading and trailing spaces and `python` keyword
IMPORTANT: This is a simplified example designed to showcase concepts and should not used
as a reference for production code. The features are experimental and may not be suitable for
use in sensitive environments or without additional safeguards and testing.
Any use of this code is at your own risk.
"""
query = re.sub(r"^(\s|`)*(?i:python)?\s*", "", query)
query = re.sub(r"(\s|`)*$", "", query)
return query
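For illustration (not part of the diff), the default sanitizer turns a fenced, markdown-style LLM response into plain code before it reaches the REPL:

```python
from connectchain.tools.validated_python_repl import default_sanitize_input

generated = "```python\nprint('hello')\n```"
print(default_sanitize_input(generated))  # -> print('hello')
```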
18 changes: 18 additions & 0 deletions connectchain/utils/exceptions.py
@@ -0,0 +1,18 @@
# Copyright 2024 American Express Travel Related Services Company, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""ConnectChain exceptions."""

class OperationNotPermittedException(Exception):
"""Operation Not Permitted Exception"""

class ConnectChainNoAccessException(BaseException):
"ConnectChain does not allow access to this class or method."
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
aiohttp>=3.9.2
openai==0.28.0
pyyaml==6.0.1
SQLAlchemy==2.0.22
