
Problem with training LoRA for Model "TheBloke/Pygmalion-2-13B-GPTQ" #5200

Closed · Answered by araleza
DasBinNichtIch asked this question in Q&A

You can perform LoRA training on 4-bit GPTQ models, but you have to load them with the Transformers model loader, not any of the other loaders. If you load the model with (e.g.) ExLlamav2_HF, you'll get the error message you've shown here.

The docs say you should tick the 'auto-devices' and 'disable_exllama' options when loading the model with the Transformers loader in order to perform LoRA training.
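
If you'd rather do this from a script instead of the webui, here's a minimal sketch of the equivalent transformers + peft calls. This is an assumption about what those two checkboxes map to, not the webui's actual code: `device_map="auto"` stands in for 'auto-devices', and disabling the ExLlama kernels in `GPTQConfig` stands in for 'disable_exllama'. Note the flag name changed across transformers versions (newer releases use `use_exllama=False`, older ones used `disable_exllama=True`), and the LoRA hyperparameters below are just illustrative defaults.

```python
# Minimal sketch: load a pre-quantized 4-bit GPTQ model with the Transformers
# backend and attach LoRA adapters. Assumes recent `transformers` and `peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/Pygmalion-2-13B-GPTQ"

# device_map="auto" ~ the 'auto-devices' checkbox;
# use_exllama=False ~ the 'disable_exllama' checkbox
# (older transformers versions spelled this disable_exllama=True).
quant_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quant_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for training, then wrap it with LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # typical Llama-family projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The key point is the same as in the webui: the ExLlama kernels are inference-only, so training only works when the model is loaded through the Transformers/AutoGPTQ path with those kernels disabled.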
