Hello,

I’m looking to benchmark cloud-available GPUs across different platforms (e.g., AWS, GCP) for common use cases such as machine-learning training and deep-learning inference. I’m considering CUTLASS for this purpose, but I’m unsure whether it is well suited to benchmarking in these environments.
My questions:

1. Is CUTLASS a suitable tool for general benchmarking, especially for cloud-based GPUs? For example, comparing A10 GPUs on AWS against A10 GPUs on GCP, or comparing the same GPU across instance types (e.g., how an L4 performs in different AWS instances, given thermal, power, and bandwidth limitations).
2. What specific tests and parameters would you recommend running to produce meaningful, relevant comparisons for the most common GPU use cases? (A rough sketch of the kind of per-kernel measurement I have in mind follows below.)
3. Are there pre-defined configurations in CUTLASS that would work for this type of benchmarking, or would it be necessary to define custom benchmarks?
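For context, here is a minimal sketch of the kind of measurement I have in mind: time a large GEMM with CUDA events after a warm-up phase and report sustained TFLOP/s. It uses cuBLAS purely as a stand-in workload rather than CUTLASS itself, and the problem sizes and iteration counts are placeholder assumptions, not recommended settings.

```cpp
// Minimal sketch (assumptions: m = n = k = 4096, 10 warm-up / 100 timed iterations).
// Times an SGEMM stand-in workload with CUDA events and reports sustained TFLOP/s.
// Build with something like: nvcc bench_gemm.cu -lcublas -o bench_gemm
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int m = 4096, n = 4096, k = 4096;   // placeholder problem size
    const int warmup = 10, iters = 100;       // placeholder iteration counts
    const float alpha = 1.0f, beta = 0.0f;

    // Device buffers; contents are left uninitialized since only timing matters here.
    float *A, *B, *C;
    cudaMalloc(&A, sizeof(float) * m * k);
    cudaMalloc(&B, sizeof(float) * k * n);
    cudaMalloc(&C, sizeof(float) * m * n);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // Warm up so clocks settle before timing (relevant for thermal/power-limited instances).
    for (int i = 0; i < warmup; ++i)
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                    &alpha, A, m, B, k, &beta, C, m);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                    &alpha, A, m, B, k, &beta, C, m);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    const double tflops = 2.0 * m * n * k * iters / (ms * 1e-3) / 1e12;
    printf("avg %.3f ms/GEMM, %.2f TFLOP/s\n", ms / iters, tflops);

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

If CUTLASS (e.g., its profiler) already covers this kind of sweep with sensible defaults, I would rather use that than maintain a custom harness like the one above.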
Thank you in advance for your help!