
Enable VBE support on CPU #3174

Closed · wants to merge 2 commits

Conversation

@spcyppt (Contributor) commented Sep 25, 2024

Summary:
Previously, VBE on CPU was enabled in lookup_{{ optimizer }}.py.

To support MTIA ops, VBE should be done after torch.ops.fbgemm.{{ mdesc }}_embedding_codegen_lookup_{{ optimizer }}_function_pt2.

This diff follows the same implementation but enables it in C++ so that the call goes through the same PT2 pipeline (i.e., lookup -> VBE autograd -> CPU wrapper (*do VBE here*) -> CPU kernel).

Differential Revision: D63410944
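
For context on the VBE step in that pipeline: with variable batch size embedding (VBE), each feature can have a different batch size, so the lookup returns a flat 1D tensor instead of a fixed [B, total_D] matrix. On CPU, the fixed-batch kernel still produces a 2D output, and the wrapper re-packs it into the VBE layout. The sketch below is a minimal Python illustration of that re-packing under simplified assumptions (one rank, per-feature offsets); the function and argument names (apply_vbe_on_cpu_output, b_offsets, d_offsets) are hypothetical, not the actual FBGEMM API.

```python
import torch

def apply_vbe_on_cpu_output(
    dense_output: torch.Tensor,  # [max_B, total_D] fixed-batch lookup output
    b_offsets: torch.Tensor,     # [T + 1] cumulative batch size per feature
    d_offsets: torch.Tensor,     # [T + 1] cumulative embedding dim per feature
) -> torch.Tensor:
    """Re-pack a fixed-batch 2D lookup output into the flat 1D VBE layout:
    for each feature t, keep only its first B_t rows and append them."""
    chunks = []
    T = b_offsets.numel() - 1
    for t in range(T):
        B_t = int(b_offsets[t + 1] - b_offsets[t])    # batch size of feature t
        lo, hi = int(d_offsets[t]), int(d_offsets[t + 1])
        chunks.append(dense_output[:B_t, lo:hi].reshape(-1))
    return torch.cat(chunks)

# Example: two features with dims 4 and 2 and batch sizes 3 and 1 (max_B = 3)
out = torch.arange(18, dtype=torch.float32).reshape(3, 6)
vbe = apply_vbe_on_cpu_output(
    out,
    b_offsets=torch.tensor([0, 3, 4]),
    d_offsets=torch.tensor([0, 4, 6]),
)
print(vbe.shape)  # torch.Size([14])  = 3*4 + 1*2 elements
```

Doing this re-packing in the C++ wrapper (rather than in lookup_{{ optimizer }}.py) is what lets MTIA and other backends share the same PT2 lookup entry point.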

netlify bot commented Sep 25, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

Name | Link
🔨 Latest commit | 4b88735
🔍 Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/66fc4baaaa2901000864d4e7
😎 Deploy Preview | https://deploy-preview-3174--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D63410944

spcyppt added commits to spcyppt/FBGEMM that referenced this pull request (Sep 25-26, 2024), each with the summary above plus "Pull Request resolved: pytorch#3174"; each export was followed by the same @facebook-github-bot comment.

spcyppt added a commit referencing this pull request Sep 27, 2024, adding "Reviewed By: q10" to the summary.

@spcyppt force-pushed the export-D63410944 branch 2 times, most recently from 8c4f8f5 to b2c45b5 on September 27, 2024 at 22:49.

spcyppt added a commit referencing this pull request Sep 28, 2024, with the same summary.

spcyppt added a commit referencing this pull request Sep 30, 2024, adding "X-link: facebookresearch/FBGEMM#286" and "Differential Revision: D63711688" to the summary.

spcyppt added a commit referencing this pull request Oct 1, 2024, with the same summary and X-link.

@facebook-github-bot (Contributor)

This pull request has been merged in f9de209.
