
How to recover stage quantization from finetuning stage after an error #957

Open
jiangjiadi opened this issue Dec 5, 2024 · 4 comments

@jiangjiadi

After executing examples/quantization_2of4_sparse_w4a16/llama7b_sparse_w4a16.py, I obtained three models at different stages: stage_sparsity, stage_finetuning, and stage_quantization. My question is, if an error occurs while running the stage_quantization phase, how can I resume the process from the stage_finetuning model? Given that both the stage_sparsity and stage_finetuning phases are resource-intensive, restarting from the beginning would be inefficient and time-consuming.

@jiangjiadi jiangjiadi added the bug Something isn't working label Dec 5, 2024
@jiangjiadi jiangjiadi changed the title How to Recover Stage Quantization from Finetuning Stage After an Error How to recover stage quantization from finetuning stage after an error Dec 5, 2024
@dsikka dsikka self-assigned this Dec 5, 2024
@dsikka
Collaborator

dsikka commented Dec 10, 2024

Hi @jiangjiadi, if you update your recipe to contain only the remaining quantization stage and start from the finetuned model, you should be able to run quantization without repeating the other stages. Did you run into an issue while loading the finetuned model?
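
For reference, a minimal sketch of what that resume step could look like (the checkpoint path, dataset, and exact import location below are assumptions and may need adjusting for your llm-compressor version):

from transformers import AutoModelForCausalLM
from llmcompressor.transformers import oneshot  # import path may differ across llm-compressor versions

# Checkpoint written by the earlier sparsity + finetuning stages (illustrative path)
finetuned_ckpt = "output/stage_finetuning"

model = AutoModelForCausalLM.from_pretrained(
    finetuned_ckpt,
    device_map="auto",
    torch_dtype="auto",
)

# Re-run only the quantization stage with the trimmed-down recipe
oneshot(
    model=model,
    dataset="open_platypus",                # calibration data for GPTQ; substitute your own
    recipe="quantization_recipe.yaml",      # recipe containing only the quantization stage
    output_dir="output/stage_quantization",
    max_seq_length=512,
    num_calibration_samples=512,
)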

@jiangjiadi
Author

jiangjiadi commented Dec 11, 2024

@dsikka When I start from the stage_finetuning model, I get this error:
[screenshot of the error traceback]

The recipe I used contains only the quantization_stage:

quantization_stage:
  run_type: oneshot
  quantization_modifiers:
    GPTQModifier:
      ignore: ["lm_head"]
      dampening_frac: 0.01
      config_groups:
        group_0:
          weights:
            num_bits: 4
            type: "int"
            symmetric: true
            strategy: "channel"
          targets: ["Linear"]

@dsikka
Collaborator

dsikka commented Dec 11, 2024

Hi @jiangjiadi, if you update to transformers main or the latest release, this should fix the issue.
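
As a quick sanity check after upgrading, you can confirm which version your environment actually picked up:

import transformers
print(transformers.__version__)  # should report the latest release (or a dev build from main)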

@jiangjiadi
Author

jiangjiadi commented Dec 12, 2024

@dsikka I upgraded transformers to 4.47.0 and found that the model now fails to load the weights.
[screenshot of the weight-loading error]
