
Regarding the calculation of FLOPs after model compression #93

Open
mjw123bs opened this issue Feb 27, 2024 · 1 comment

Comments

@mjw123bs

Hello, when I used the FLOPs calculation method from the link you shared (https://github.com/Eric-mingjie/rethinking-network-pruning/blob/master/imagenet/l1-norm-pruning/compute_flops.py) to calculate the FLOPs of the compressed model, the following error was reported:
```
Traceback (most recent call last):
  File "denseprune.py", line 287, in <module>
    print(str(count_model_param_flops(newmodel))+"\n")
  File "E:\PycharmProjects\network-slimming-master(modified)\compute_flops.py", line 112, in count_model_param_flops
    out = model(input)
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\PycharmProjects\network-slimming-master(modified)\models\densenet.py", line 130, in forward
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\PycharmProjects\network-slimming-master(modified)\models\densenet.py", line 30, in forward
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\module.py", line 1071, in _call_impl
    result = forward_call(*input, **kwargs)
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\Application\anaconda3\envs\learn_pytorch\lib\site-packages\torch\nn\modules\conv.py", line 439, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [12, 15, 3, 3], expected input[1, 24, 227, 227] to have 15 channels, but got 24 channels instead
```
Could you please tell me how to solve this? @Eric-mingjie
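The error itself can be reproduced in isolation. A minimal sketch (not from the repository; the shapes are taken from the traceback above): a pruned `Conv2d` rebuilt with 15 input channels is fed a tensor that still has the original 24 channels, which raises exactly this `RuntimeError`.

```python
import torch
import torch.nn as nn

# Hypothetical pruned layer: rebuilt with 15 input channels after pruning,
# matching the weight shape [12, 15, 3, 3] reported in the traceback.
pruned_conv = nn.Conv2d(15, 12, kernel_size=3, padding=1)

# The preceding (unpruned) layers still emit 24 channels, as in the traceback.
x = torch.randn(1, 24, 227, 227)

try:
    pruned_conv(x)
except RuntimeError as e:
    # "Given groups=1, weight of size [12, 15, 3, 3], expected input ...
    #  to have 15 channels, but got 24 channels instead"
    print("channel mismatch:", e)
```

So the FLOPs counter is not the culprit: it merely runs a forward pass with a dummy input, and any layer whose expected input channels disagree with what the previous layer actually produces will fail this way.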

@mjw123bs
Author

The above concerns only the resnet and densenet models. After the model is compressed, only the number of input channels of Conv2d is changed, while the size of BatchNorm2d is left unchanged, so the sizes no longer correspond and an error is reported, as shown in the figure. But I saw that in the pruning scripts for these two models you wrote "If the next layer is the channel selection layer, then the current batch normalization layer won't be pruned." Could you advise how to solve this problem?
[attached screenshots: 微信图片_20240227160703, 微信图片_20240227160857]
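For context, the quoted comment describes the idea behind network-slimming's channel-selection layer. A sketch of that idea (illustrative names, not the repository's exact code): the BatchNorm2d is kept at its full size, and a selection layer placed after it drops the pruned channels, so the following pruned Conv2d receives the reduced channel count it expects.

```python
import torch
import torch.nn as nn

class ChannelSelection(nn.Module):
    """Keep the BN full-size; select a subset of its output channels.

    `indexes` is a 0/1 mask over channels, set by the pruning script
    (illustrative; the repository's channel_selection layer differs in detail).
    """
    def __init__(self, num_channels):
        super().__init__()
        self.indexes = nn.Parameter(torch.ones(num_channels),
                                    requires_grad=False)

    def forward(self, x):
        keep = self.indexes.nonzero().squeeze(1)  # indices of kept channels
        return x.index_select(1, keep)            # select along channel dim

# Full-size BN (24 channels) followed by selection down to the 15 channels
# the pruned conv expects -- the shapes from the traceback above.
bn = nn.BatchNorm2d(24)
select = ChannelSelection(24)
select.indexes.data[15:] = 0  # keep only the first 15 channels (example mask)
conv = nn.Conv2d(15, 12, kernel_size=3, padding=1)

out = conv(select(bn(torch.randn(1, 24, 32, 32))))
print(out.shape)  # torch.Size([1, 12, 32, 32])
```

This is why the pruning script skips BN layers that feed a channel-selection layer: the mask does the pruning at the BN output, so the BN parameters themselves must stay full-size.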
