I'm using Mixtral 8x7b Instruct with PaperQA, but I noticed that the quality of the answers is fairly unpredictable (sometimes good, sometimes very bad).

I have read that instruct models may have some limitations, such as:

- context usage
- JSON formatting
- global understanding of complex questions

and that they need specific prompt formatting.

So I wonder whether an Instruct model is a good fit for PaperQA and, if so, whether there is a recommended way to use it (especially regarding prompt formatting).
The use of Instruct models like Mixtral 8x7b with PaperQA can indeed present challenges, particularly in areas such as context usage, JSON formatting, and understanding complex questions. While I couldn't find specific details on the role of Mixtral 8x7b within PaperQA, it's important to note that PaperQA processes JSON outputs from Instruct models using a function designed to handle JSON formatting issues [1].
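To illustrate what such JSON handling typically looks like, here is a minimal sketch of a lenient parser. Note this is a hypothetical helper written for this answer, not PaperQA's actual function: instruct models often wrap JSON in code fences or surround it with prose, so the sketch strips fences and extracts the outermost object before parsing.

```python
import json
import re


def parse_llm_json(text: str) -> dict:
    """Leniently parse a JSON object from raw instruct-model output.

    Hypothetical helper (not PaperQA's implementation): strips markdown
    code fences and surrounding prose, then parses the outermost object.
    """
    # Remove ``` or ```json fence markers the model may have added.
    text = re.sub(r"```(?:json)?", "", text)
    # Grab everything between the first "{" and the last "}".
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start : end + 1])
```

A wrapper like this makes a pipeline tolerant of the fenced or chatty JSON that Mixtral-class instruct models frequently emit, at the cost of silently discarding any text outside the braces.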
As for prompt formatting, no PaperQA-specific guidelines were found, but it generally helps to keep prompts clear and well structured so the model can use the supplied context effectively. If answer quality is inconsistent, experimenting with different prompt styles and checking that the model's capabilities match your needs may improve results.
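One concrete, model-specific point: Mixtral 8x7b Instruct was trained on the `[INST] ... [/INST]` chat template, and quality can degrade noticeably when that template is missing. As a sketch (the function name and the convention of folding system text into the first `[INST]` block are assumptions for illustration, not PaperQA behavior):

```python
def format_mixtral_prompt(user_message: str, system_message: str = "") -> str:
    """Wrap a message in Mixtral 8x7b Instruct's expected chat template.

    Illustrative sketch: Mixtral Instruct has no dedicated system role,
    so any system text is conventionally prepended inside the first
    [INST] block.
    """
    system_part = f"{system_message}\n\n" if system_message else ""
    return f"<s>[INST] {system_part}{user_message} [/INST]"
```

If you call the model through a serving layer (e.g. one that applies the tokenizer's chat template for you), this wrapping is usually handled automatically; sending raw, untemplated text to the model is a common cause of the erratic answers described above.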