
The assistant is answering questions irrelevant to what it has been trained for; how can we prevent the assistant from deviating? #44

Open
srirampriyanka opened this issue Dec 27, 2024 · 1 comment

Comments

@srirampriyanka

First of all, thank you for such a great AI agent example in Java. This is my first time trying out an AI agent, and I am planning to customize it for my needs.

I have a question, though. When I asked the Funnair assistant for help writing a job description for a software developer, it went ahead and provided an answer instead of rejecting the request and stating that it is a Funnair bot that can only help with travel-related questions.

example:

🧑‍💻 You
write a job decription for software developer

🤖 Assistant
While I'm here to help with all your Funnair-related inquiries, I can certainly give you a basic idea for a job description for a software developer! Here’s a general template you might find useful:

Job Title: Software Developer

Location: [City, State or Remote]

Job Type: [Full-Time/Part-Time/Contract]

Introduction: At [Company Name], we are seeking a skilled Software Developer to join our innovative technology team. This role involves designing, coding, and improving our software applications that meet the needs of our customers and drive our company forward.

Responsibilities:
and so on...

Can you please shed some light on how we can prevent the assistant from answering irrelevant questions?

Thanks in advance.

@marcushellberg
Owner

Hey! Good question. This project initially grew out of a demo I wrote during a live-coding session, which meant I wanted to keep things simple rather than feature-complete.

There are several things we could do to improve the example and make it more suitable for real-world scenarios. Essentially, we'd want to create a pipeline that processes the query before sending it over to the LLM. Something like:

user asks question -> rewrite question into a standalone form -> verify that the question is relevant to the scope of the assistant -> fetch relevant documents (this may involve several steps depending on how we go about it) -> call the LLM with the relevant docs.
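For illustration, here is a rough Java sketch of such a pipeline. Note that `LlmClient`, `GuardedAssistant`, and the prompts are hypothetical placeholders, not classes from this project; a real implementation would wire these steps into whatever chat model and retrieval setup the example already uses.

```java
/** Minimal pipeline sketch; LlmClient is a stand-in for the underlying chat model. */
public class GuardedAssistant {

    public interface LlmClient {
        String complete(String prompt);
    }

    private final LlmClient llm;

    public GuardedAssistant(LlmClient llm) {
        this.llm = llm;
    }

    /** Step 2: ask the model whether the standalone question is in scope. */
    private boolean isRelevant(String standaloneQuestion) {
        String verdict = llm.complete("""
                You are a classifier for the Funnair customer support assistant.
                Answer with exactly YES or NO: is the following question about
                Funnair flights, bookings, or related travel topics?

                Question: %s
                """.formatted(standaloneQuestion));
        return verdict.trim().toUpperCase().startsWith("YES");
    }

    public String answer(String userQuestion, String conversationHistory) {
        // Step 1: rewrite the question into a standalone form using the history.
        String standalone = llm.complete("""
                Rewrite the user's question as a standalone question,
                using the conversation history for any missing context.

                History: %s
                Question: %s
                """.formatted(conversationHistory, userQuestion));

        // Step 2: reject anything outside the assistant's scope.
        if (!isRelevant(standalone)) {
            return "I'm the Funnair assistant and can only help with travel-related questions.";
        }

        // Steps 3-4: fetch relevant documents and call the LLM with them.
        String docs = retrieveRelevantDocuments(standalone);
        return llm.complete("""
                Answer the question using only the documents below.

                Documents: %s
                Question: %s
                """.formatted(docs, standalone));
    }

    private String retrieveRelevantDocuments(String question) {
        return ""; // placeholder: an embedding/vector-store lookup would go here
    }
}
```

A cheaper alternative to the dedicated classification call is simply tightening the system prompt to instruct the model to refuse off-topic requests, but an explicit check like the one above tends to be more reliable.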
