Change compile for pipeline module torch.compile #6478
base: master
Conversation
Hi @NirSonnenschein, thank you for the great catch! Can you also add a small test case? We want to make sure that this change works for various settings.
Hi @NirSonnenschein - following up on this ask?
Hi @loadams,
No problem, thanks for the update
Hi @loadams,
We have encountered an issue with torch.compile and the pipeline module: modifying a member of the module during the run causes torch.compile to restart its analysis and treat the module as dynamic. This happens because the forward function modifies the micro_offset attribute of the pipeline module. To bypass this issue without significantly changing the way the pipeline module works, we propose to compile only the layers in the pipeline module instead of the pipeline module itself. This avoids the issue and should still give most of the benefit of torch.compiling the pipeline module.
Force-pushed from 808aa45 to 441a328
Running torch.compile in a daemonic process causes an error because the inductor implementation can spawn processes.
Added a fix for tests: tests that use torch.compile and run in a daemonic process error out on GPU because inductor tries to spawn a process.
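A minimal sketch of a test-side guard under these assumptions; `torch._inductor.config.compile_threads` is a real inductor setting, but using it as the mitigation here is an assumption, not necessarily the exact fix taken in this PR:

```python
import multiprocessing

import torch._inductor.config as inductor_config

# Daemonic processes may not spawn children, but inductor's parallel
# compile pool forks worker processes. Forcing a single compile thread
# keeps compilation in-process when the test runs inside a daemon.
if multiprocessing.current_process().daemon:
    inductor_config.compile_threads = 1
```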
We have encountered an issue with torch.compile and the pipeline module.
Modifying a member of the module (micro_offset) during the forward function causes torch.compile to restart its analysis and treat the module as dynamic.
To bypass this issue without significantly changing the way the pipeline module works, we propose to compile only the layers in the pipeline module instead of the forward function of the pipeline module. This avoids the issue and should still give most of the benefit of torch.compiling the pipeline module, as sketched below.
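A minimal sketch of the layer-wise approach, assuming a DeepSpeed-style PipelineModule that stores the layers of the current stage in `self.forward_funcs`; the helper name `compile_layers` and its keyword arguments are illustrative, not the exact diff in this PR:

```python
import torch
import torch.nn as nn

def compile_layers(module, **compile_kwargs):
    # Compile each pipeline layer individually instead of the whole
    # module, so torch.compile never traces the pipeline forward()
    # that mutates self.micro_offset.
    for idx, layer in enumerate(module.forward_funcs):
        if isinstance(layer, nn.Module):
            # nn.Module.compile wraps the module's forward in place.
            layer.compile(**compile_kwargs)
        else:
            # Plain callables (e.g. lambda layers) are swapped for a
            # compiled wrapper.
            module.forward_funcs[idx] = torch.compile(layer, **compile_kwargs)
```

This keeps the micro-batch bookkeeping (the micro_offset update) in eager mode while the per-layer compute still benefits from compilation.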