- In the README, under `datasets` (where you specify the data you wish to train on), you can set a prompt type for each dataset individually. This gives you the flexibility to mix and match your datasets.
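A minimal sketch of what that part of the config could look like, assuming an axolotl-style layout (the paths and `type` values below are placeholders; the README lists the keys and prompt types that are actually supported):

```yaml
# Illustrative only: dataset paths and prompt types are placeholders.
datasets:
  - path: tatsu-lab/alpaca           # hypothetical Hugging Face dataset
    type: alpaca                     # instruction / input / response prompts
  - path: data/chat_samples.jsonl    # hypothetical local JSONL file
    type: sharegpt                   # multi-turn conversation prompts
```

Each entry carries its own `type`, so differently formatted datasets can be combined in a single training run.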
- What does the prompt format look like in inference mode? For example, if I fine-tune Mistral-7B-v0.1 using the Alpaca format, what prompt format should I use at inference time?
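For reference, the usual convention is to prompt the fine-tuned model with the same template it saw during training, leaving the response section empty for the model to complete. A sketch of the standard Alpaca template (the exact wording depends on the prompter used during training, so check the training-side code):

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
```

For examples without an input, the standard Alpaca template drops the `### Input:` block and opens with "Below is an instruction that describes a task. Write a response that appropriately completes the request."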
- Looking at the .yaml configuration file, it doesn't ask for the expected prompt format anywhere.