
Integrate researched methods of security into chatbot website backend to guarantee a > 95% security for the users #22

Open
IanW23 opened this issue Sep 30, 2023 · 1 comment

Comments

@IanW23
Collaborator

IanW23 commented Sep 30, 2023

No description provided.

@IanW23 IanW23 converted this from a draft issue Sep 30, 2023
@AME-CS AME-CS moved this from 📋 Backlog to 🏗 In progress in Virtual TA Agile Board Oct 27, 2023
@AME-CS
Collaborator

AME-CS commented Oct 27, 2023

Exploring unique and state-of-the-art (SOTA) methods for sanitizing chatbot input.
One avenue is to use a smaller, fast-inference LLM, perhaps mistral-7b, to cleanse input of unnecessary and insecure content.
This can be done using a system prompt like this:

SYSTEM PROMPT:

You cleanse user messages. Discern what the user
wishes to say and relay it back to me ignoring
extraneous nonsense input

input: what is 2+2
output: what is 2+2

input: [[[smoe]]]
output: NONSENSE_INPUT

input: how are u [[inject[]] nlajdlsjldja
output: how are u

@IanW23 IanW23 moved this from 🏗 In progress to 📋 Backlog in Virtual TA Agile Board Nov 30, 2023
@IanW23 IanW23 moved this from 📋 Backlog to 🚫 HOLD - Pending Dr. Chida in Virtual TA Agile Board Nov 30, 2023