
does M2 work with ONNX? #16

Open
andersonbcdefg opened this issue Jan 12, 2024 · 3 comments

Comments

@andersonbcdefg

I know there are special long convolutions; I'm not sure whether ONNX supports them, or what would happen if I tried to export to ONNX.

@DanFu09
Collaborator

DanFu09 commented Jan 12, 2024

I'm not very familiar with ONNX. What you would need to run the long convolution efficiently is an FFT operation.
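(For context, a minimal sketch of the FFT-based long-convolution pattern referred to above; shapes and names are illustrative, not the repo's actual implementation. Note the torch.fft.rfft call, which turns out to be the operator that blocks ONNX export later in this thread.)

```python
import torch

def fft_long_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Causal convolution of input u (B, L, D) with a long kernel k (L, D) via FFT."""
    L = u.shape[1]
    # Zero-pad to 2L so the circular convolution matches a linear (causal) one.
    fft_size = 2 * L
    u_f = torch.fft.rfft(u, n=fft_size, dim=1)          # (B, L + 1, D), complex
    k_f = torch.fft.rfft(k, n=fft_size, dim=0)          # (L + 1, D), complex
    y_f = u_f * k_f.unsqueeze(0)                        # pointwise multiply in frequency domain
    y = torch.fft.irfft(y_f, n=fft_size, dim=1)[:, :L]  # back to time domain, truncate padding
    return y
```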

Out of curiosity, can you describe the use cases that need ONNX?

@andersonbcdefg
Author

andersonbcdefg commented Jan 15, 2024

Sorry Dan, I just saw this! The typical use case for ONNX (although it does support GPUs) is edge, on-device CPU inference, which is especially desirable for embeddings, since most embedding models are small enough to run locally or even in the browser. A great example of this is Transformers.js, which I believe is the main reason people started putting ONNX checkpoints on the HuggingFace Hub.

ONNX is nice because you can run models exported from many frameworks (PyTorch, TensorFlow, etc.) on a single runtime, and it makes graph optimizations and quantization really easy. Of all the frameworks for compiling or exporting models to run fast locally (llama.cpp, MLC, etc.), ONNX is the most mature and easiest to use, especially for embeddings, which haven't been much of a focus for some of the newer libraries. It's been around since long before the rise of LLMs. :) (Well... "long before" == 2017)
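(As an illustration of that flow, a minimal sketch of exporting a small PyTorch model and running it under ONNX Runtime; TinyModel is a made-up placeholder, not an M2 checkpoint.)

```python
import torch
import onnxruntime as ort

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
dummy = torch.randn(1, 16)

# Export the traced graph to ONNX; opset 19 matches the versions discussed below.
torch.onnx.export(model, dummy, "tiny.onnx", opset_version=19,
                  input_names=["x"], output_names=["y"])

# Run the exported graph on the ONNX Runtime CPU provider.
sess = ort.InferenceSession("tiny.onnx", providers=["CPUExecutionProvider"])
out = sess.run(["y"], {"x": dummy.numpy()})[0]
```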

@CoGian

CoGian commented Jan 23, 2024

> I know there are special long convolutions; I'm not sure whether ONNX supports them, or what would happen if I tried to export to ONNX.

I have tried to convert M2-BERT to ONNX, but unfortunately an issue with an unsupported operator came up:

torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::fft_rfft' to ONNX opset version 19 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.

I think it is related to this issue

I used onnxruntime 1.16.3, ONNX opset version 19, and torch 2.1.0.
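(For anyone picking this up, a minimal sketch that isolates the failure under roughly those versions; the module and file name are illustrative, not M2-BERT itself.)

```python
import torch

class FFTModule(torch.nn.Module):
    def forward(self, x):
        # aten::fft_rfft is the operator the exporter cannot lower.
        return torch.fft.rfft(x, dim=-1).abs()

x = torch.randn(1, 128)
# Raises torch.onnx.errors.UnsupportedOperatorError with torch 2.1 / opset 19.
torch.onnx.export(FFTModule(), x, "fft_repro.onnx", opset_version=19)
```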

Maybe I will have some time to dig into this in the future, so if anyone else wants to try, I can share my code.
