Support scenario: Input Bitmap + Training = ONNX model #7209

Open
vpenades opened this issue Aug 4, 2024 · 1 comment
Labels: enhancement (New feature or request), onnx (Exporting ONNX models or loading ONNX models), untriaged (New issue has not been triaged)

Comments


vpenades commented Aug 4, 2024

Is your feature request related to a problem? Please describe.

Right now, ML.NET supports several scenarios, such as image classification and object detection, that take a Bitmap as input and, after training, produce an ML.NET ZIP model file.

ML.NET also supports training over plain data that, after training, can be exported to an ONNX model.

But there is no scenario, or example, that covers both: training a model using bitmaps as input and outputting an ONNX model.

Apparently, the main roadblock is that the ONNX converter toolchain is limited to a few data types, which do not include MLImage.
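
For illustration, here is a minimal sketch of the scenario being described, assuming the Microsoft.ML, Microsoft.ML.ImageAnalytics and Microsoft.ML.OnnxConverter packages; ImageData, the column names and file paths are placeholders I made up, not an existing API:

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.ML;

var mlContext = new MLContext();

// A couple of placeholder samples; in practice this would be the real training set.
var images = new List<ImageData>
{
    new ImageData { ImagePath = "cat.jpg", Label = "cat" },
    new ImageData { ImagePath = "dog.jpg", Label = "dog" },
};
IDataView trainingData = mlContext.Data.LoadFromEnumerable(images);

// Image-based pipeline: load the bitmap, resize it, extract the pixels into a tensor,
// then train a classifier on the resulting feature vector.
var pipeline = mlContext.Transforms.LoadImages("Image", "imageFolder", nameof(ImageData.ImagePath))
    .Append(mlContext.Transforms.ResizeImages("Image", 224, 224, "Image"))
    .Append(mlContext.Transforms.ExtractPixels("Features", "Image"))
    .Append(mlContext.Transforms.Conversion.MapValueToKey("Label"))
    .Append(mlContext.MulticlassClassification.Trainers.LbfgsMaximumEntropy("Label", "Features"));

ITransformer model = pipeline.Fit(trainingData);

// Saving to the ML.NET ZIP format works:
mlContext.Model.Save(model, trainingData.Schema, "model.zip");

// ...but exporting the same pipeline to ONNX does not, because the image transforms
// (LoadImages/ResizeImages/ExtractPixels) work on an image type the converter cannot handle.
using (var stream = File.Create("model.onnx"))
    mlContext.Model.ConvertToOnnx(model, trainingData, stream);

// Hypothetical input record: a path to an image file plus its label.
public class ImageData
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
}
```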

Describe the solution you'd like

There are some solutions already proposed. For example, #5271 proposes excluding the input data pre-processing part of the training pipeline, which happens to be the part that cannot be exported to ONNX. Ideally, the export process would begin at the point in the pipeline where the input image has already been converted to a tensor.
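
A rough sketch of what that split could look like today, assuming the images are pre-processed into fixed-size float tensors before they ever reach the pipeline; TensorData and the tensor size are made-up placeholders:

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext();

// Placeholder samples; in practice these tensors would come from external code that
// resizes the bitmaps and flattens them into float arrays.
var preprocessedImages = new List<TensorData>
{
    new TensorData { Pixels = new float[3 * 224 * 224], Label = "cat" },
    new TensorData { Pixels = new float[3 * 224 * 224], Label = "dog" },
};
IDataView tensorData = mlContext.Data.LoadFromEnumerable(preprocessedImages);

// Only ONNX-convertible transforms from here on; the image-specific steps happened beforehand.
var trainerPipeline = mlContext.Transforms.Conversion.MapValueToKey("Label")
    .Append(mlContext.MulticlassClassification.Trainers.LbfgsMaximumEntropy("Label", nameof(TensorData.Pixels)));

ITransformer model = trainerPipeline.Fit(tensorData);

// With no image transforms in the exported pipeline, the ONNX conversion can succeed.
using (var stream = File.Create("classifier.onnx"))
    mlContext.Model.ConvertToOnnx(model, tensorData, stream);

// Hypothetical record holding an already pre-processed image as a flat pixel tensor.
public class TensorData
{
    [VectorType(3 * 224 * 224)]
    public float[] Pixels { get; set; }

    public string Label { get; set; }
}
```

The drawback is that the ONNX model then starts at the tensor, so the resize/normalize step has to be reimplemented on the inference side.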

Another solution would be to make the ONNX converter toolchain handle any incoming MLImage as a tensor.

Yet another solution would be to introduce a new, lower-level "image" type that is more "palatable" to the ONNX converter. Theoretically, this image type would represent the images in their already pre-processed state, e.g. scaled to a fixed size.

Finally, if this Bitmap + Training = ONNX scenario is already supported by the current libraries, it would be desirable to have an end-to-end example showcasing how to properly configure the input data and the pipeline so it can be successfully exported to ONNX. (I've also looked for such an example in the examples repository, with no success.)

Describe alternatives you've considered

Not using ML.NET at all and doing the training with other frameworks.

Additional context

This is a long-standing issue that has already been highlighted by issues like #6810, and I have to apologize for opening yet another one, but this problem seems to have gone unanswered for months (years?). From time to time I come here to look for news and see whether the latest version of the ML.NET libraries has finally solved this problem, only to discover it remains unanswered.

Additionally, we're using OnnxRuntime at a low level for inference, so we really do need to export to ONNX; the ML.NET ZIP format is not an option for us.
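
For context, this is roughly how we consume the exported model, assuming the Microsoft.ML.OnnxRuntime package; "input" and the 1x3x224x224 shape are placeholders that depend on the actual exported graph:

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load the exported ONNX model directly with ONNX Runtime; no ML.NET involved at inference time.
using var session = new InferenceSession("classifier.onnx");

// Build an input tensor matching the model's expected name and shape.
var tensor = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue> { NamedOnnxValue.CreateFromTensor("input", tensor) };

using var results = session.Run(inputs);
```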

vpenades added the enhancement label on Aug 4, 2024
dotnet-policy-service bot added the untriaged label on Aug 4, 2024
luisquintanilla added the onnx label on Aug 27, 2024

batsword commented Sep 3, 2024

Today the problem is still there; no one has solved it, so I'm giving up on ML.NET.
