I was wondering whether we could fine-tune LLaMA on our own training data and then apply the Alpaca instruction-tuning on top to turn it into Alpaca, or whether it would be better to fine-tune Alpaca directly. Is that possible at all?
Alpaca gives much better performance than raw LLaMA, so unless you have a very good dataset, it makes more sense to further fine-tune Alpaca on your data.
In other words, if you only have a few JSON files, it definitely doesn't make sense to tune LLaMA on them; the result will probably be worse than the base model.
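In case it helps, here is a minimal sketch of what "further fine-tune Alpaca on your data" could look like with Hugging Face `transformers` and `datasets`. This is not the official Alpaca training script; the checkpoint path, data file name, and hyperparameters are placeholders you would replace with your own, and a small dataset like this would typically also call for LoRA/PEFT rather than full fine-tuning.

```python
# Hedged sketch: continue fine-tuning an Alpaca-style checkpoint on a small
# custom JSON dataset in the instruction/input/output format.
# MODEL and DATA below are hypothetical placeholders.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "path/to/alpaca-checkpoint"   # placeholder: your Alpaca weights
DATA = "my_data.json"                 # placeholder: list of {"instruction", "input", "output"}

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)

def to_features(example):
    # Build an Alpaca-style instruction prompt and tokenize it.
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
    )
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

dataset = load_dataset("json", data_files=DATA, split="train").map(
    to_features, remove_columns=["instruction", "input", "output"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-finetuned",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        fp16=True,
    ),
    train_dataset=dataset,
    # Causal LM collator (mlm=False) so labels are the input tokens themselves.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```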
sswam added a commit to sswam/barbarella that referenced this issue on Sep 22, 2024:
- Create new file with improved task pointnetwork#1 prompt
- Add new task pointnetwork#2 prompt for structured technical overview
- Include additional task ideas based on git log analysis