Alexa-ChatGPT

This repository contains the serverless backend for an Alexa skill that prompts generative AI LLMs.


Logic

  • A user prompts the Alexa skill.
  • The Alexa skill invokes the assigned Lambda with an 'AutoComplete' intent.
  • The Lambda pushes the user's prompt onto an SQS queue.
  • The request Lambda is invoked with the SQS message, processes the prompt via the chosen chat model [OpenAI ChatGPT, Google Gemini], and puts the response onto a separate SQS queue.
  • Meanwhile, the Alexa skill Lambda polls the response queue to return the answer to the prompt.

Caution

Due to the Alexa skill idle constraint of ~8 seconds, the following logic has been applied:

  • If the Alexa skill does not receive a message from the response queue within ~7 seconds, users are given the interim response 'your response will be available shortly!'; this avoids the Alexa skill session expiring.

  • When the user asks for the 'Last Response', the Lambda immediately polls the response queue to retrieve the delayed answer and speaks it along with the timestamp of when it arrived. If the queue is empty but a previous answer was cached, the cached answer is returned instead.

Supported Models

Find available models by asking 'model available'

Check the model in-use by asking 'model which'

Note

Users are able to change which chat model is in use

OpenAI ChatGPT

  • users can select this by prompting 'model gpt'

Google's GenerativeAI Gemini

  • users can select this by prompting 'model gemini'

Cloudflare AI Workers

https://developers.cloudflare.com/workers-ai/models/

Use the 'alias' to select one of the models below:

  • llama-2-7b-chat-int8
    • alias "meta"
  • sqlcoder-7b-2
    • alias "sql"
  • llama-2-13b-chat-awq
    • alias "awq"
  • openchat-3.5-0106
    • alias "open chat"

Image models

Image models are used to generate images with AI. Images are generated by prompting a model, base64-decoding the response into a PNG, compressing the image(s) further, and finally uploading the results to S3.

  • Stable Diffusion

    • alias "stable"
  • DALL_E_3

    • alias "dallas"

Alexa Intents

The Alexa intents, or phrases, used to interact with the Alexa skill:

  • AutoComplete

    the intent used to prompt the LLM models

  • Image

    the intent to generate images

  • Model

    Allows users to select LLM model to use

  • Last Response

    Fetch delayed LLM response to user's prompt

  • Cancel

    Force Alexa to await the next intent

  • Stop

    Terminate Alexa skill session

  • Help

    List all available interactions or intents

Infrastructure

Examples

SETUP

How to configure your Alexa Skill

Environment

We use the HANDLER environment variable to name the Go binary: either 'main' or 'bootstrap' (the 'provided' Amazon Linux 2 runtimes require 'bootstrap'); devs should use 'main'.

  HANDLER=main
  OPENAI_API_KEY=xxx
  GEMINI_API_KEY={base64 service account json}
  CLOUDFLARE_ACCOUNT_ID=xxx
  CLOUDFLARE_API_KEY=xxxx

Prerequisites

Make sure you configure the AWS CLI with:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region 'us-east-1'

  aws configure

Requirements

  • OPENAI API KEY

    • please set environment variables for your OPENAI API key

      export OPENAI_API_KEY=123456

  • Cloudflare AI Workers API KEY

    • fetch your Cloudflare account ID and generate a Cloudflare Workers AI API key

      export CLOUDFLARE_ACCOUNT_ID=xxxx

      export CLOUDFLARE_API_KEY=xxxx

  • Google Service Account Credentials

    • create Google service account JSON credentials with access to Vertex/Generative AI, then encode the JSON as a base64 string

      export GEMINI_API_KEY=xxxx

  • Create a S3 Bucket on your AWS Account

    • Set an environment variable with the name of the S3 bucket you created [this is where AWS SAM uploads deployment artifacts]

      export S3_BUCKET_NAME=bucket_name

Deployment Steps

  1. Create a new Alexa skill with a name of your choice

  2. Set the Alexa skill invocation with a phrase, e.g. 'My question'

  3. Set built-in intent invocations to their relevant phrases, e.g. 'help', 'stop', 'cancel', etc.

  4. Create a new Intent named 'AutoCompleteIntent'

  5. Add a new Alexa slot to this intent and name it 'prompt' with type 'AMAZON.SearchQuery'

  6. Add invocation phrase for the 'AutoCompleteIntent' with value 'question {prompt}'

  7. Deploy the stack to your AWS account.

  export ARCH=GOARCH=arm64
  export LAMBDA_RUNTIME=provided.al2023
  export LAMBDA_HANDLER=bootstrap
  export LAMBDA_ARCH=arm64
  sam build --parameter-overrides Runtime=$LAMBDA_RUNTIME Handler=$LAMBDA_HANDLER Architecture=$LAMBDA_ARCH
  sam deploy --stack-name chat-gpt --s3-bucket $S3_BUCKET_NAME --parameter-overrides Runtime=$LAMBDA_RUNTIME Handler=$LAMBDA_HANDLER Architecture=$LAMBDA_ARCH OpenAIApiKey=$OPENAI_API_KEY GeminiApiKey=$GEMINI_API_KEY --capabilities CAPABILITY_IAM

  1. Once the stack has deployed, note the Lambda ARN from the 'ChatGPTLambdaArn' field in the output of

    sam list stack-outputs --stack-name chat-gpt

  2. Apply this lambda ARN to your 'Default Endpoint' configuration within your Alexa skill, i.e. 'arn:aws:lambda:us-east-1:123456789:function:chatGPT'

  3. Begin testing your Alexa skill by querying 'My question' (or your chosen invocation phrase); Alexa should respond with "Hi, let's begin our conversation!"

  4. Query Alexa 'question {your sentence here}'

    Note: the OpenAI API may take longer than 8 seconds to respond. In this scenario Alexa will tell you your answer will be ready momentarily; simply then ask Alexa 'last response'

  5. Tell Alexa to 'stop'

  6. Testing complete!

Contributors

This project exists thanks to all the people who contribute.

Donations

All donations are appreciated!

Donate
