Guides on using custom Docker images and GPUs #2485
-
I’m migrating our rendering engine at https://motionbox.io from Lambda (which is CPU based) to something that gives us more control (GPU based functions). I built a custom Docker image with nvidia/cuda/Ubuntu, and I have a node pool in my GKE cluster with GPUs and the drivers installed. My goal is to deploy an image that works with Fission, NVIDIA, and a Node environment, so that each trigger boots up a new GPU function. Is this possible? Maybe I’m missing docs; I’m also new to Kubernetes.
-
I know this is old, but you should be able to use `runtimeClassName` and `nodeSelector` in the container's pod spec.
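For a GKE node pool with GPUs, a minimal sketch of that pod spec fragment might look like the following; the runtime class name, node label value, and image name are assumptions that depend on how your cluster and GPU runtime are set up:

```yaml
# Sketch of a pod spec fragment; runtimeClassName, the node label value,
# and the image are placeholders that depend on your GKE node pool and
# GPU runtime configuration.
spec:
  runtimeClassName: nvidia
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
  containers:
    - name: render
      image: your-registry/node-env-cuda:latest
      resources:
        limits:
          nvidia.com/gpu: 1
```

If your Fission version supports setting a pod spec on the environment, these fields should end up on the pods that run your function, which keeps them on the GPU nodes.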
-
We have written a blog post that briefly documents how to run GPU-based functions: https://fission.io/blog/running-gpu-based-functions-on-fission/
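Roughly, the idea is to build a CUDA-enabled environment image and register it with Fission as a custom environment. A minimal sketch of such an Environment spec, assuming a recent Fission version (the image name and resource values are placeholders, not taken from the blog):

```yaml
# Sketch of a Fission Environment backed by a custom CUDA-enabled Node.js
# runtime image; the image name and GPU resource values are placeholders.
apiVersion: fission.io/v1
kind: Environment
metadata:
  name: node-gpu
  namespace: default
spec:
  version: 2
  runtime:
    image: your-registry/node-env-cuda:latest
  resources:
    limits:
      nvidia.com/gpu: 1
```

Functions created with `--env node-gpu` would then run in pools built from this image.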