[Feat] Add torch.compile support #1791
base: main
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1791      +/-  ##
==========================================
+ Coverage   96.57%   96.59%   +0.01%
==========================================
  Files         165      165
  Lines        7892     7929      +37
==========================================
+ Hits         7622     7659      +37
  Misses        270      270
Flags with carried forward coverage won't be shown.
@odulcy-mindee This slows down our PyTorch test CI extremely (before ~9 min, now ~20 min) because the compilation takes some time :/
@@ -195,11 +195,16 @@ def forward(
        out["out_map"] = prob_map

        if target is None or return_preds:
            # Disable for torch.compile compatibility
            @torch.compiler.disable  # type: ignore[attr-defined]
The post-processing can't be compiled, so the model runs fully compiled and all the other parts are kept as is.
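For illustration, here is a minimal, self-contained sketch of that pattern (not doctr's actual model code; `TinyDetector` and its `postprocess` method are hypothetical): the numeric forward pass is compiled, while the data-dependent post-processing is excluded via `torch.compiler.disable` so dynamo does not try to trace it.

```python
import torch
import torch.nn as nn


class TinyDetector(nn.Module):
    """Stand-in for a detection model whose post-processing is not traceable."""

    def __init__(self) -> None:
        super().__init__()
        self.backbone = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    @torch.compiler.disable  # post-processing always runs in eager mode
    def postprocess(self, prob_map: torch.Tensor) -> list[float]:
        # Data-dependent Python logic that would otherwise break compilation
        return [float(p.sigmoid().max()) for p in prob_map]

    def forward(self, x: torch.Tensor, return_preds: bool = True) -> dict:
        out = {"out_map": self.backbone(x)}
        if return_preds:
            out["preds"] = self.postprocess(out["out_map"])
        return out


model = torch.compile(TinyDetector().eval())  # default inductor backend
_ = model(torch.rand(1, 3, 32, 32))
```

The decorator simply introduces a graph break around the decorated function, so everything before and after it still benefits from compilation.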
Then I think it's ready to review :) I would say we shouldn't add a benchmark to the docs here because this really depends on the user's hardware, or how do you see it? In the end the ONNX route is still preferable ... compile brings a ~5-10% boost, while ONNX boosts by ~50% (on CPU) ^^
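If someone wants to reproduce such numbers locally, a rough timing sketch could look like the following (the model here is a stand-in, not a doctr one, and the results are heavily hardware-dependent, which is the reason for not putting them in the docs):

```python
import time

import torch
import torch.nn as nn

# Stand-in model; swap in any eval-mode doctr PyTorch model to benchmark it.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
).eval()
compiled = torch.compile(model)
x = torch.rand(1, 3, 256, 256)


def bench(fn, warmup: int = 5, iters: int = 50) -> float:
    """Average latency per forward pass in seconds (warmup covers compilation)."""
    with torch.inference_mode():
        for _ in range(warmup):
            fn(x)
        start = time.perf_counter()
        for _ in range(iters):
            fn(x)
    return (time.perf_counter() - start) / iters


print(f"eager:    {bench(model) * 1e3:.2f} ms/iter")
print(f"compiled: {bench(compiled) * 1e3:.2f} ms/iter")
```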
Force-pushed from f4e8545 to b16b9ad
@odulcy-mindee Anything to add? Otherwise ready to review ^^
This PR adds basic torch.compile support (choosing a different backend or fullgraph mode is up to the user; we only guarantee basic compile compatibility with the default inductor backend).

Any feedback is welcome 🤗

Closes: #1684 #1690
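As a hedged usage sketch of what this enables for a user (db_resnet50 and the input size are just one plausible choice, not the PR's documented example):

```python
import torch
from doctr.models import db_resnet50

# Build a doctr detection model and compile it with the default inductor backend.
model = db_resnet50(pretrained=True).eval()
compiled_model = torch.compile(model)  # backend="inductor" by default

# The first call triggers compilation; subsequent calls reuse the compiled graph.
dummy_input = torch.rand(1, 3, 1024, 1024)
with torch.inference_mode():
    out = compiled_model(dummy_input)
```

Other backends, fullgraph=True, and similar options may or may not work and are left to the user, as stated above.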