
Enable VBE support on CPU #3174

Closed · wants to merge 2 commits
Commits on Oct 1, 2024

  1. Add split_embeddings_utils_cpu

    Differential Revision: D63711688
    Supadchaya Puangpontip authored and facebook-github-bot committed Oct 1, 2024
    Commit 925d514
  2. Enable VBE support on CPU (pytorch#3174)

    Summary:
    X-link: facebookresearch/FBGEMM#286
    
    Pull Request resolved: pytorch#3174
    
    Previously, VBE on CPU was enabled in lookup_{{ optimizer }}.py.
    
    To support MTIA ops, VBE should be done after torch.ops.fbgemm.{{ mdesc }}_embedding_codegen_lookup_{{ optimizer }}_function_pt2.
    
    This diff follows the same implementation but enables it in C++ so that it goes through the same PT2 pipeline (i.e., lookup -> VBE autograd -> CPU wrapper (*do VBE here*) -> CPU kernel).
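    The VBE step the CPU wrapper performs amounts to reassembling a fixed-batch lookup result into a flat output in which each feature contributes only its own (variable) number of rows. The sketch below is a minimal illustration of that reassembly under assumed inputs; `reassemble_vbe_output`, its arguments, and the layout shown are not FBGEMM's actual code or API.

```python
import torch

def reassemble_vbe_output(
    fixed_output: torch.Tensor,  # [B_max, sum(D_f)] dense lookup output
    batch_sizes: list[int],      # per-feature batch size b_f <= B_max
    dims: list[int],             # per-feature embedding dim D_f
) -> torch.Tensor:
    """Flatten a fixed-batch lookup result into a 1-D VBE-style layout:
    feature 0's b_0 rows, then feature 1's b_1 rows, and so on."""
    chunks = []
    col = 0
    for b, d in zip(batch_sizes, dims):
        # Keep only the first b rows of this feature's D-wide column slice.
        chunks.append(fixed_output[:b, col:col + d].reshape(-1))
        col += d
    return torch.cat(chunks)
```

    Doing this inside the CPU wrapper (rather than in the Python lookup module) means the autograd and PT2 tracing machinery see a single op, which is what the commit message's pipeline describes.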
    
    Reviewed By: q10
    
    Differential Revision: D63410944
    spcyppt authored and facebook-github-bot committed Oct 1, 2024
    Commit 4b88735