-
Thanks for the release 🎉! FYI, the link at the bottom is broken. It should probably be https://github.com/microsoft/onnxruntime/releases/tag/v1.15.0
-
@snnn Great stuff! The packing in particular is very exciting; however, I can't find any information or documentation on how it works. I assume it's similar to FasterTransformer?
-
@snnn thanks for the update. Very interesting that support for iOS 11 and below is being dropped, so iOS 12 will be the minimum supported version.
-
Thanks for Python 3.11 support! 🎉
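A minimal sanity-check sketch, assuming onnxruntime 1.15.0 installed from PyPI under a Python 3.11 interpreter; the "model.onnx" path is a placeholder, not something from this thread:

```python
# Sanity check of an onnxruntime install under Python 3.11.
# "model.onnx" is a placeholder; point it at any ONNX model you have locally.
import sys
import numpy as np
import onnxruntime as ort

print(sys.version)        # expect 3.11.x
print(ort.__version__)    # expect 1.15.0

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Replace symbolic/unknown dims with 1 to build a dummy tensor
# (assumes the first input is a float32 tensor).
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print([o.shape for o in outputs])
```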
-
The onnxruntime-training builds are missing the training header files. I am only able to use the library after copying the orttraining headers from the repo.
-
Thanks for the T5 and beam search improvements! Do you have any benchmarks on expected speedup for either of these?
-
Is CUDA 12.1 not supported? See #16386.
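For reference, a small sketch for checking whether an installed onnxruntime-gpu build can actually initialize the CUDA execution provider on a given machine; the model path is a placeholder, and this does not by itself answer the CUDA 12.1 question tracked in #16386:

```python
# Check whether this onnxruntime build exposes and can initialize the CUDA EP.
# If the local CUDA/cuDNN versions don't match what the wheel was built
# against, session creation typically falls back to CPU with a warning.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())   # look for "CUDAExecutionProvider"

sess = ort.InferenceSession(
    "model.onnx",                       # placeholder model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())             # providers actually enabled for this session
```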
-
I just published a patch release for it: https://github.com/microsoft/onnxruntime/releases/tag/v1.15.1
-
Is there a Linux aarch32 binary?
-
Announcements
Starting from the next release (ONNX Runtime 1.16.0), at the operating system level we will drop support for:
At the compiler level we will drop support for:
Also, we will remove the onnxruntime_DISABLE_ABSEIL build option, since we will upgrade protobuf and the new protobuf version will need Abseil.
General
Build System
Performance
Execution Providers
Two new execution providers: JS EP and QNN EP.
TensorRT EP
OpenVINO EP
QNN EP
DirectML EP:
AzureEP
Mobile
New packages
Pre/Post processing
Added support for built-in pre and post processing for NLP scenarios: classification, question answering, and text prediction (see the usage sketch after this list)
Added support for built-in pre and post processing for Speech Recognition (Whisper)
Added support for built-in post processing for Object Detection (YOLO): non-max suppression and drawing bounding boxes
Additional CoreML and NNAPI kernels to support customer scenarios
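A minimal usage sketch, assuming the pre/post-processing ops come from the onnxruntime-extensions package and have already been added into the model graph; the model file name below is a placeholder:

```python
# Run a model whose graph already embeds onnxruntime-extensions
# pre/post-processing custom ops (tokenization, audio decoding, NMS, ...).
# Requires: pip install onnxruntime onnxruntime-extensions
# "model_with_pre_post_processing.onnx" is a placeholder file name.
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
# Make the extensions custom ops visible to ONNX Runtime.
so.register_custom_ops_library(get_library_path())

sess = ort.InferenceSession(
    "model_with_pre_post_processing.onnx", so,
    providers=["CPUExecutionProvider"],
)
print([i.name for i in sess.get_inputs()])
print([o.name for o in sess.get_outputs()])
```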
Web
ORT Training
On-device training:
Others
Known Issues
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, fs-eire, edgchen1, wejoncy, mszhanyi, PeixuanZuo, pengwa, jchen351, cloudhan, tianleiwu, PatriceVignola, wangyems, adrianlizarraga, chenfucn, HectorSVC, baijumeswani, justinchuby, skottmckay, yuslepukhin, RandyShuai, RandySheriffH, natke, YUNQIUGUO, smk2007, jslhcl, chilo-ms, yufenglee, RyanUnderhill, hariharans29, zhanghuanrong, askhade, wschin, jywu-msft, mindest, zhijxu-MS, dependabot[bot], xadupre, liqunfu, nums11, gramalingam, Craigacp, fdwr, shalvamist, jstoecker, yihonglyu, sumitsays, stevenlix, iK1D, pranavsharma, georgen117, sfatimar, MaajidKhan, satyajandhyala, faxu, jcwchen, hanbitmyths, jeffbloo, souptc, ytaous, kunal-vaishnavi