[dynamic shapes] unbacked safe conv1d #154089


Open

pianpwk wants to merge 6 commits into main
Conversation

@pianpwk pianpwk commented May 22, 2025


pytorch-bot bot commented May 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154089

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 3fe085e with merge base ab6cb85:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: fx release notes category label May 22, 2025
```python
        return torch.contiguous_format
    elif input_tensor.is_contiguous(memory_format=torch.preserve_format):
        return torch.preserve_format
```
@pianpwk pianpwk commented May 28, 2025:
I might be wrong, but it seems:

  • the result is used in a .to() call here:

    ```python
    out = out.to(memory_format=pick_memory_format())  # type: ignore[call-overload]
    ```

  • memory_format=None is the same as memory_format=torch.preserve_format
  • I'm not sure "contiguous according to torch.preserve_format" makes sense. preserve_format is mostly used when copying a tensor, to say "keep the original format" - how can a tensor be contiguous with respect to that? There are some indications this isn't supported:

    ```cpp
    Tensor contiguous(const Tensor& self, MemoryFormat memory_format) {
      if (self.is_contiguous(memory_format)) {
        return self;
      }
      TORCH_CHECK(
          memory_format != MemoryFormat::Preserve,
          "preserve memory format is unsupported by the contiguous operator");
      return self.clone(memory_format);
    }
    ```
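To make the last two points concrete, here is a small runnable sketch (my own illustration, not code from the PR), assuming current PyTorch eager behavior:

```python
import torch

# A non-contiguous tensor: transpose permutes strides without copying.
x = torch.randn(2, 3, 4).transpose(1, 2)

# .to() defaults to torch.preserve_format, and memory_format=None behaves
# the same way: the original strides are kept.
assert x.to(memory_format=None).stride() == x.stride()
assert x.to(memory_format=torch.preserve_format).stride() == x.stride()

# .contiguous(), by contrast, rejects preserve_format for a non-contiguous
# input, hitting the TORCH_CHECK quoted above.
try:
    x.contiguous(memory_format=torch.preserve_format)
except RuntimeError as e:
    print(e)  # "preserve memory format is unsupported by the contiguous operator"
```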

A contributor replied:
@bdhirsh is that something you are familiar with?

@pianpwk pianpwk changed the title [WIP][dynamic shapes] unbacked safe conv1d [WIP][dynamic shapes] guard_or_false for conv1d May 28, 2025
@pianpwk pianpwk changed the title [WIP][dynamic shapes] guard_or_false for conv1d [dynamic shapes] unbacked safe conv1d May 28, 2025
@pianpwk pianpwk marked this pull request as ready for review May 28, 2025 17:10
@pianpwk pianpwk requested review from bobrenjc93 and laithsakka May 28, 2025 17:11
```diff
@@ -2447,10 +2450,8 @@ def pick_memory_format():
     else:
         if is_channels_last(input_tensor):
             return torch.channels_last
-    if input_tensor.is_contiguous(memory_format=torch.contiguous_format):
+    if utils.definitely_contiguous_for_memory_format(input_tensor, memory_format=torch.contiguous_format):
```
A contributor commented on this hunk:
We are just changing the meta function. Is there a decomposition for conv that asserts matching behaviour with the meta for unbacked?
Or does the conv kernel generate the correct expected memory format anyway? If the latter, I wonder whether it's safe to make the change.
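For readers without the PR checked out, here is a minimal sketch of what a guard-free contiguity check in the spirit of `definitely_contiguous_for_memory_format` can look like. This is an illustration built on `guard_or_false` from `torch.fx.experimental.symbolic_shapes`, not the PR's actual helper:

```python
from torch.fx.experimental.symbolic_shapes import guard_or_false

def definitely_contiguous(sizes, strides) -> bool:
    # Walk dims right-to-left, checking strides against a C-contiguous
    # layout. guard_or_false returns False when a comparison over unbacked
    # symbols cannot be decided, instead of inserting a guard, so an
    # undecidable layout is reported as "not definitely contiguous".
    expected = 1
    for size, stride in zip(reversed(sizes), reversed(strides)):
        if guard_or_false(size == 1):
            continue  # size-1 dims impose no constraint on the stride
        if not guard_or_false(stride == expected):
            return False
        expected = expected * size
    return True
```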

A second review comment on the same hunk:

Do you want to return None explicitly?

@laithsakka laithsakka requested a review from eellison May 29, 2025 17:42
@eellison eellison left a comment:
We need the striding of convolution to be correct because we fall back to aten convolution for it. If we don't guard and we get the striding wrong, inductor's output code will be incorrect.

We would need a mechanism to force the contiguous/channels last path for the convolution when we're not able to infer the meta striding, but that's not implemented here; we're just changing the meta.
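A purely hypothetical sketch of the mechanism described above, which the review notes is not implemented in this PR: when the layout is undecidable under unbacked shapes, pin a known format before the fallback kernel so meta and runtime striding agree. `definitely_contiguous` is the illustrative helper sketched earlier in this thread, not a PyTorch API:

```python
def conv_input_with_known_layout(x):
    # If we cannot prove the input is already contiguous, materialize a
    # contiguous copy so the aten convolution fallback produces exactly
    # the striding the meta function assumed.
    if definitely_contiguous(x.size(), x.stride()):
        return x
    return x.contiguous()  # pin a known memory format explicitly
```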
