[dynamo] Mark a vt unspecialized nn module variable source earlier #154780


Closed

wants to merge 15 commits

Conversation

anijain2305
Contributor

@anijain2305 anijain2305 commented May 31, 2025


pytorch-bot bot commented May 31, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154780

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 3285033 with merge base af9f18e:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

… earlier"


I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames

[ghstack-poisoned]
```python
result = UnspecializedBuiltinNNModuleVariable(value, source=self.source)
if (
    value.__module__.startswith(("torch.nn.modules", "torch.ao."))
    and not value.__module__.startswith("torch.nn.modules.container")
```

container is deliberately left out of the builtin set because a container, like Sequential, can contain user-defined modules.
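The excerpt above boils down to a simple module-path predicate. A minimal, torch-free sketch (this mirrors the condition quoted from the diff, not the full builder logic; the function name is made up for illustration):

```python
# Hypothetical helper mirroring the PR's module-path check.
def is_torch_builtin_module_class(module_path: str) -> bool:
    """True for torch-builtin nn module classes, except containers
    (e.g. Sequential), whose children may be user-defined modules."""
    return (
        module_path.startswith(("torch.nn.modules", "torch.ao."))
        and not module_path.startswith("torch.nn.modules.container")
    )

print(is_torch_builtin_module_class("torch.nn.modules.linear"))     # True
print(is_torch_builtin_module_class("torch.nn.modules.container"))  # False
print(is_torch_builtin_module_class("my_pkg.models"))               # False
```

Sequential lives in `torch.nn.modules.container`, so it falls through to the unspecialized path even though it is torch-builtin, precisely because its submodules may be user-defined.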

@anijain2305 anijain2305 requested a review from zou3519 as a code owner June 1, 2025 17:10
@anijain2305 anijain2305 added the ci-no-td Do not run TD on this PR label Jun 2, 2025
anijain2305 added a commit that referenced this pull request Jun 2, 2025
ghstack-source-id: f229e45
Pull Request resolved: #154780

[dynamo] Wrap unspec module with a unspec source earlier

ghstack-source-id: f229e45
Pull Request resolved: #154789
```diff
-hf_BigBird,pass,18
+hf_BigBird,pass,24
```
Contributor


Is this an increase in # of graph breaks?

Contributor Author


It's actually a recompilation. I added #154867 so that users would know how to avoid the recompilation.

It happens for something like this:

```python
class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c = 0

    def forward(self, x):
        self.c += 1
        return x * self.c
```

You could call this somewhat BC-breaking. But there was an inconsistency earlier, where sometimes we would consider nn.Module int attributes static and sometimes symint. This PR makes it precise: we always consider them static.

Why not always make them symint? I tried that, and even a simple symint change here introduces a lot of dynamism into the model, causing crashes in a couple of examples I saw. In most cases, when users think of dynamism, they want their input tensors to be dynamic. So a general heuristic that nn module integer attributes are static by default makes more sense, with an option to override it: allow_unspec_int_on_nn_module. From the code comments, this seems to have been the original intent as well, but there were a few bugs/inconsistencies.
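The static-by-default behavior can be illustrated with a toy guard cache. This is a hypothetical sketch, not Dynamo's actual guard machinery; only the flag name `allow_unspec_int_on_nn_module` comes from the discussion above:

```python
# Toy model of guard specialization (NOT Dynamo's real implementation).
class Mod:
    def __init__(self):
        self.c = 0


def guard_key(mod, allow_unspec_int_on_nn_module=False):
    if allow_unspec_int_on_nn_module:
        # Dynamic (symint-like): one graph covers every value of c.
        return ("nn_module_int",)
    # Static default: specialize on the exact value, so any change recompiles.
    return ("nn_module_int", mod.c)


static_graphs, dynamic_graphs = set(), set()
mod = Mod()
for _ in range(3):
    mod.c += 1  # mutation like self.c += 1 in forward()
    static_graphs.add(guard_key(mod))
    dynamic_graphs.add(guard_key(mod, allow_unspec_int_on_nn_module=True))

print(len(static_graphs), len(dynamic_graphs))  # 3 1
```

Under the static default, every mutation of `self.c` produces a new specialization (three "graphs" for three calls), while the dynamic override reuses one. That recompile-per-call pattern is exactly what the `Mod` example above triggers.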

cc @bobrenjc93 @zou3519

torch/_dynamo/variables/builder.py (resolved)
torch/_dynamo/variables/builder.py (resolved)
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


pytorchmergebot pushed a commit to Eliasj42/pytorch that referenced this pull request Jun 3, 2025
…ytorch#154780)

I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that.

Pull Request resolved: pytorch#154780
Approved by: https://github.com/StrongerXi, https://github.com/jansel
pytorchmergebot pushed a commit that referenced this pull request Jun 4, 2025
…#154867)

For a program like this

```
class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c = 0

    def forward(self, x):
        self.c += 1
        return x * self.c
```

You can check the recompile reasons at https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpzv9z6Q/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

![image](https://github.com/user-attachments/assets/856a95fd-0533-4abc-a213-1f73ae2cb766)

Pull Request resolved: #154867
Approved by: https://github.com/zou3519
ghstack dependencies: #154780
anijain2305 added a commit that referenced this pull request Jun 4, 2025
anijain2305 added a commit that referenced this pull request Jun 4, 2025
iupaikov-amd pushed a commit to ROCm/pytorch that referenced this pull request Jun 4, 2025
iupaikov-amd pushed a commit to ROCm/pytorch that referenced this pull request Jun 4, 2025
@seemethere
Member

@pytorchbot revert -c ghfirst -m "This fails internal testing see, https://fburl.com/diff/b0yuxk4w"

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Jun 4, 2025
…tributes (#154867)"

This reverts commit 6c2f941.

Reverted #154867 on behalf of https://github.com/seemethere due to This fails internal testing see, https://fburl.com/diff/b0yuxk4w ([comment](#154780 (comment)))
pytorchmergebot added a commit that referenced this pull request Jun 4, 2025
…rlier (#154780)"

This reverts commit cc96feb.

Reverted #154780 on behalf of https://github.com/seemethere due to This fails internal testing see, https://fburl.com/diff/b0yuxk4w ([comment](#154780 (comment)))
@pytorchmergebot
Collaborator

@anijain2305 your PR has been successfully reverted.

@anijain2305
Contributor Author

Relanding at #155099

@anijain2305 anijain2305 closed this Jun 4, 2025
facebook-github-bot pushed a commit to pytorch/benchmark that referenced this pull request Jun 4, 2025
Summary:
Reland of pytorch/pytorch#154780


cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames

X-link: pytorch/pytorch#155099

Reviewed By: williamwen42

Differential Revision: D75928791

Pulled By: anijain2305

fbshipit-source-id: 55f8a214e3b0ff573dd1bdd1fb5fb34a220a620d
pytorchmergebot pushed a commit that referenced this pull request Jun 4, 2025
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
Labels
ci-no-td (Do not run TD on this PR) · ciflow/inductor · ciflow/trunk (Trigger trunk jobs on your pull request) · Merged · module: dynamo · Reverted · topic: not user facing
6 participants