After executing `examples/quantization_2of4_sparse_w4a16/llama7b_sparse_w4a16.py`, I obtained three models at different stages: `stage_sparsity`, `stage_finetuning`, and `stage_quantization`. My question is: if an error occurs while running the `stage_quantization` phase, how can I resume the process from the `stage_finetuning` model? Given that both the `stage_sparsity` and `stage_finetuning` phases are resource-intensive, restarting from the beginning would be inefficient and time-consuming.
Hi @jiangjiadi, if you update your recipe to contain only the remaining quantization stage and start from the finetuned model, you should be able to run quantization without repeating the other stages. Did you run into an issue while loading the finetuned model?
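
For concreteness, here is a minimal sketch of how that could look. It assumes the finetuning stage left a regular Hugging Face checkpoint under the example's output directory, and that you copy just the quantization stage of the original recipe into a separate file (`quant_recipe.yaml` is a hypothetical name); the `oneshot` call mirrors the other llm-compressor examples, so adjust paths and arguments to match your run:

```python
# Sketch: resume from the finetuned checkpoint and run only the
# quantization stage. The paths and the trimmed recipe file name are
# assumptions; adapt them to your setup.
from transformers import AutoModelForCausalLM

from llmcompressor.transformers import oneshot

# Directory written by the finetuning stage of the example script
# (assumed layout: <output_dir>/stage_finetuning).
finetuned_path = "output_llama7b_2of4_w4a16_channel/stage_finetuning"

model = AutoModelForCausalLM.from_pretrained(
    finetuned_path, torch_dtype="auto", device_map="auto"
)

# quant_recipe.yaml is a hypothetical copy of the original recipe that
# keeps only the quantization stage (the GPTQ modifier section).
oneshot(
    model=model,
    dataset="open_platypus",  # same calibration dataset as the example
    recipe="quant_recipe.yaml",
    output_dir="output_llama7b_2of4_w4a16_channel/stage_quantization",
    max_seq_length=512,
    num_calibration_samples=512,
)
```

Because each stage's output is saved as an ordinary checkpoint, loading it this way avoids re-running the sparsity and finetuning stages entirely.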