[tracer] Add config for PyTorch2.4 (#356)
* [tracer] add PyTorch 2.4 config

* [quantizer] skip quantization for RMSNorm

* [quantizer] update quantization_support doc
zk1998 authored Sep 12, 2024
1 parent 202aca8 commit ce644ec
Showing 5 changed files with 1,599 additions and 1 deletion.
2 changes: 2 additions & 0 deletions docs/quantization_support.md
@@ -22,6 +22,7 @@ Quantized OPs that are natively not supported by PyTorch (and possibly TFLite).
| `log_softmax` | / |
| `matmul` | / |
| `mm` | / |
+| `norm` | / |
| `pad` | 1.7.0 |
| `pow` | / |
| `prelu` | / |
Expand All @@ -46,6 +47,7 @@ Quantized OPs that are natively not supported by PyTorch (and possibly TFLite).
| `torch.nn.LayerNorm` | / |
| `torch.nn.LogSoftmax` | / |
| `torch.nn.PReLU` | / |
+| `torch.nn.RMSNorm` | / |
| `torch.nn.RNN` | / |
| `torch.nn.SiLU` | / |
| `torch.nn.Softmax` | / |
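For context on the newly listed op: RMSNorm scales its input by the reciprocal of the root mean square of the elements, which is why the quantizer skips quantizing it. A minimal pure-Python sketch of the math (illustrative only — not TinyNeuralNetwork's implementation; `eps=1e-6` is an assumed default, while `torch.nn.RMSNorm` derives `eps` from the dtype when unset):

```python
import math

def rms_norm(x, weight=None, eps=1e-6):
    """Illustrative RMSNorm over a 1-D list of floats:
    y_i = x_i / sqrt(mean(x^2) + eps), optionally scaled by weight."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    y = [v / rms for v in x]
    if weight is not None:
        y = [w * v for w, v in zip(weight, y)]
    return y

print(rms_norm([1.0, 2.0, 3.0]))  # values divided by sqrt(14/3 + eps)
```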
2 changes: 1 addition & 1 deletion tinynn/graph/configs/gen_funcs_yml.py
@@ -160,7 +160,7 @@ def get_scope(ns):
ver = '_'.join(ver.split('.')[:2])

# Stage 7: Functions in new versions may exist in current version
-latest = '2_0'
+latest = '2_4'
if ver != latest:
with open(f'torch_func_override_{latest}.yml', 'r') as f:
d = yaml.load(f, yaml.SafeLoader)
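The hunk above bumps the `latest` config used by "Stage 7", which merges in functions recorded for the newest PyTorch release that also exist in the version currently being generated. A hypothetical sketch of that merge step (names like `merge_latest` and the `exists` predicate are made up for illustration; the real script loads the `torch_func_override_*.yml` files with PyYAML):

```python
# Hypothetical sketch: fold functions from the latest version's config
# into the current version's config, keeping only those that exist.
def merge_latest(current: dict, latest: dict, exists) -> dict:
    merged = {ns: list(funcs) for ns, funcs in current.items()}
    for ns, funcs in latest.items():
        known = set(merged.get(ns, []))
        for fn in funcs:
            if fn not in known and exists(ns, fn):
                merged.setdefault(ns, []).append(fn)
    return merged

# Toy check: pretend every function also exists in the current version.
cur = {'torch': ['add']}
new = {'torch': ['add', 'rms_norm']}
print(merge_latest(cur, new, lambda ns, fn: True))
# → {'torch': ['add', 'rms_norm']}
```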
