-
If you think that DirectML is slow, then you haven't seen the ONNX iteration. It's a massive improvement, and I hope that it gets even better with future DirectML updates.
-
I found this version to be quicker than SHARK on Windows with a 7900 XTX. That said, memory usage on SHARK is better, but oddly its results are worse (which I wasn't expecting). So kudos to Ishqqytiger for doing this - without it I'd be £s worse off, having to go for the Nvidia.
-
The WebUI's speed for me is 4 s/it, which is extremely slow. SHARK by nod.ai is very fast for AMD users, but it's very limited in terms of features. DirectML is extremely slow, and I think torch-mlir would be amazing to implement. Please see to it; we AMD users are very left out.