[dynamo] Mark a vt unspecialized nn module variable source earlier #154780
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154780
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit 3285033 with merge base af9f18e. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
… earlier" I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
result = UnspecializedBuiltinNNModuleVariable(value, source=self.source)
if (
    value.__module__.startswith(("torch.nn.modules", "torch.ao."))
    and not value.__module__.startswith("torch.nn.modules.container")
`container` is deliberately left out of the builtin set because a container, like `Sequential`, can consist of user-defined modules.
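As a hedged illustration of the `__module__`-prefix check in the diff above (the helper name and the module paths below are invented for demonstration; the real code inspects `value.__module__` on the class), the logic treats a module as builtin unless it lives under the container module:

```python
# Sketch of the __module__-prefix check from the diff above, applied to
# plain strings for illustration.

def treated_as_builtin(module_path: str) -> bool:
    # Builtin torch.nn / torch.ao module classes are treated as builtin,
    # but containers (e.g. torch.nn.modules.container.Sequential) are
    # excluded because they may hold user-defined submodules.
    return (
        module_path.startswith(("torch.nn.modules", "torch.ao."))
        and not module_path.startswith("torch.nn.modules.container")
    )

print(treated_as_builtin("torch.nn.modules.linear"))     # True
print(treated_as_builtin("torch.nn.modules.container"))  # False: Sequential lives here
print(treated_as_builtin("my_models.blocks"))            # False: user-defined
```

This mirrors the review point: `Sequential` and friends are excluded from the builtin path precisely because their children can be arbitrary user modules.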
hf_BigBird,pass,18
hf_BigBird,pass,24
Is this an increase in # of graph breaks?
It's actually a recompilation. I added #154867 so that users know how to avoid the recompilation.
It happens for something like this:

class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c = 0

    def forward(self, x):
        self.c += 1
        return x * self.c
You could say this is somewhat BC-breaking. But there was an inconsistency earlier where we would sometimes consider nn.Module int attributes static and sometimes symint. This PR makes it precise: we always consider them static.
Why not always make them symint? I tried this, and even a simple symint change here brings a lot of dynamism into the model, causing crashes in a couple of examples I saw. In most cases, when users think of dynamism, they want their input tensors to be dynamic. So a general heuristic that nn module integer attributes are static by default makes more sense, with an option to override it: allow_unspec_int_on_nn_module. From code comments, this seems to have been the original intent as well, but there were a few bugs/inconsistencies.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…ytorch#154780) I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that. Pull Request resolved: pytorch#154780 Approved by: https://github.com/StrongerXi, https://github.com/jansel
…#154867) For program like this ``` class Mod(torch.nn.Module): def __init__(self): super().__init__() self.c = 0 def forward(self, x): self.c += 1 return x * self.c ``` You can check the recompile reasons at https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpzv9z6Q/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000  Pull Request resolved: #154867 Approved by: https://github.com/zou3519 ghstack dependencies: #154780
@pytorchbot revert -c ghfirst -m "This fails internal testing see, https://fburl.com/diff/b0yuxk4w"
@pytorchbot successfully started a revert job. Check the current status here.
…tributes (#154867)" This reverts commit 6c2f941. Reverted #154867 on behalf of https://github.com/seemethere due to This fails internal testing see, https://fburl.com/diff/b0yuxk4w ([comment](#154780 (comment)))
…rlier (#154780)" This reverts commit cc96feb. Reverted #154780 on behalf of https://github.com/seemethere due to This fails internal testing see, https://fburl.com/diff/b0yuxk4w ([comment](#154780 (comment)))
@anijain2305 your PR has been successfully reverted.
Relanding at #155099
Summary: Reland of pytorch/pytorch#154780 Fixes #ISSUE_NUMBER cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames X-link: pytorch/pytorch#155099 Reviewed By: williamwen42 Differential Revision: D75928791 Pulled By: anijain2305 fbshipit-source-id: 55f8a214e3b0ff573dd1bdd1fb5fb34a220a620d
…rlier (#155099) Reland of #154780 Fixes #ISSUE_NUMBER Pull Request resolved: #155099 Approved by: https://github.com/williamwen42
Stack from ghstack (oldest at bottom):
I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames