[Question]: ernie-3.0 model compression with PaddleSlim (following the docs) fails with "data_loader cannot be None." #9497
Labels: question (further information is requested)
Comments
Could you share your compression script?
Here is the compression script:
Please describe your question
My versions:
paddlenlp 3.0.0.beta2
paddlepaddle-gpu 3.0.0.beta2
paddleslim 2.6.0
The problem: the first time I run model compression, the following error is raised:
Traceback (most recent call last):
File "/root/workspace/uie_compress/compress_token_cls.py", line 102, in
main()
File "/root/workspace/uie_compress/compress_token_cls.py", line 98, in main
trainer.compress()
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/trainer/trainer_compress.py", line 73, in compress
_dynabert(self, self.model)
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/trainer/trainer_compress.py", line 158, in _dynabert
ofa_model, teacher_model = _dynabert_init(self, model, eval_dataloader)
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/trainer/trainer_compress.py", line 300, in _dynabert_init
head_importance, neuron_importance = compute_neuron_head_importance(
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/transformers/ofa_utils.py", line 307, in compute_neuron_head_importance
logits = model(**batch)
File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/transformers/ernie/modeling.py", line 709, in forward
outputs = self.ernie(
File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/trainer/trainer_compress.py", line 907, in auto_model_dynabert_forward
embedding_output = self.embeddings(**embedding_kwargs)
File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/paddlenlp/transformers/ernie/modeling.py", line 127, in forward
embeddings = self.layer_norm(embeddings)
File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/paddleslim/nas/ofa/layers.py", line 1301, in forward
out, _, _ = paddle._C_ops.layer_norm(
ValueError: too many values to unpack (expected 3)
Based on this error, I then found this solution in PaddleSlim:
PaddlePaddle/PaddleSlim#1893
Following that solution, I modified paddleslim/nas/ofa/layers.py at line 1301, inside the if paddle.in_dynamic_mode(): branch,
from: out, _, _ = paddle._C_ops.layer_norm(
to: out = paddle._C_ops.layer_norm(
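For reference, a slightly more defensive variant of that local patch is sketched below. This is only a sketch: the "..." stands for the original argument list already present in layers.py (not reproduced here), and the return value of paddle._C_ops.layer_norm differs between Paddle releases; the unpacking error suggests that on Paddle 3.0 the op returns a single output tensor rather than the (out, mean, variance) tuple PaddleSlim expects.

# Sketch of a version-tolerant patch around paddleslim/nas/ofa/layers.py:1301.
# "..." is a placeholder for the original argument list in layers.py.
if paddle.in_dynamic_mode():
    result = paddle._C_ops.layer_norm(...)
    if isinstance(result, (tuple, list)):
        out = result[0]  # older Paddle builds: (out, mean, variance)
    else:
        out = result     # Paddle 3.0: a single output tensor

This keeps the patched file working on older Paddle builds where the tuple is still returned, instead of hard-coding one return convention.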
After making that one-line change, compression continues: the Teacher's evaluation starts normally, but once the evaluation batches finish it fails with data_loader cannot be None.
Here is the log:
[2024-11-26 16:04:30,599] [ INFO] - acc: 0.7120018535681186,
[2024-11-26 16:04:30,601] [ INFO] - eval done total: 5.014528751373291 s
[2024-11-26 16:04:35,270] [ INFO] - global step 2000, epoch: 0, batch: 1999, lr: 2.267e-06, loss: 0.556222, speed: 10.34 step/s
[2024-11-26 16:04:35,270] [ INFO] - width_mult 0.75:
[2024-11-26 16:04:40,192] [ INFO] - acc: 0.7133920296570899,
[2024-11-26 16:04:40,193] [ INFO] - eval done total: 4.922893762588501 s
[2024-11-26 16:04:44,840] [ INFO] - global step 2100, epoch: 0, batch: 2099, lr: 7.143e-07, loss: 0.581420, speed: 10.47 step/s
[2024-11-26 16:04:44,840] [ INFO] - width_mult 0.75:
[2024-11-26 16:04:49,754] [ INFO] - acc: 0.7154772937905468,
[2024-11-26 16:04:49,756] [ INFO] - eval done total: 4.916363000869751 s
[2024-11-26 16:04:51,828] [ INFO] - Best result of width_mult 0.75: 0.7196
F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddle\jit\dy2static\program_translator.py:747: UserWarning: full_graph=False don't support input_spec arguments. It will not produce any effect.
You can set full_graph=True, then you can assign input spec.
[2024-11-26 16:04:52,071] [ INFO] - Exporting inference model to ./best_models/afqmc\width_mult_0.75\model
[2024-11-26 16:04:53,422] [ INFO] - Inference model exported.
[2024-11-26 16:04:53,422] [ INFO] - Pruned models have been exported.
[2024-11-26 16:04:53,424] [ INFO] - Pruning is finished using DynaBERT strategy.
[2024-11-26 16:04:53,424] [ INFO] - Post training quantization starts.
Traceback (most recent call last):
File "F:\aiWorkplace\PaddleNLP-develop\slm\model_zoo\ernie-3.0\compress_seq_cls.py", line 81, in
main()
File "F:\aiWorkplace\PaddleNLP-develop\slm\model_zoo\ernie-3.0\compress_seq_cls.py", line 77, in main
trainer.compress()
File "F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddlenlp\trainer\trainer_compress.py", line 81, in compress
output_dir_list += self.quant(output_dir_width, "ptq")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddlenlp\trainer\trainer_compress.py", line 129, in quant
return _post_training_quantization_grid_search(self, model_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddlenlp\trainer\trainer_compress.py", line 686, in _post_training_quantization_grid_search
_post_training_quantization(algo, batch_size, batch_nums)
File "F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddlenlp\trainer\trainer_compress.py", line 649, in _post_training_quantization
post_training_quantization = PostTrainingQuantization(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\anaconda\envs\PaddleNLP\Lib\site-packages\paddle\static\quantization\post_training_quantization.py", line 312, in init
assert data_loader is not None, "data_loader cannot be None."
^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: data_loader cannot be None.
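The assertion is raised by paddle.static.quantization.PostTrainingQuantization, which refuses to start calibration without a data_loader; in this run the loader that trainer_compress.py passes to the PTQ stage ends up as None. If post-training quantization is not actually required, one possible workaround is to restrict the compression strategy to DynaBERT pruning so the PTQ stage is never entered. The sketch below assumes the PaddleNLP CompressionArguments interface (a strategy field whose default is "dynabert+ptq"); field names may differ between versions.

# Sketch: skip the PTQ stage by limiting compression to DynaBERT pruning.
# Assumption: CompressionArguments exposes a `strategy` field defaulting to
# "dynabert+ptq", as described in the PaddleNLP compression docs.
from paddlenlp.trainer import CompressionArguments

compression_args = CompressionArguments(
    output_dir="./best_models/afqmc",
    strategy="dynabert",  # drop "+ptq" so PostTrainingQuantization is never constructed
)
# compression_args is then passed to the Trainer before calling trainer.compress().

This only sidesteps the failure; the underlying question of why the calibration data_loader is None during PTQ remains open.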