-
Hello,
Using this configuration, prediction takes about 120 seconds for 120 slices. If I instead slice the pictures with SAHI and pass the folder containing the sliced images to predict, the whole prediction over all 120 images takes only 12(!) seconds instead of 120. A runtime of 12 s would be optimal for my use case. If anyone is aware of the possible issue and, ideally, knows a solution to it, I would be happy to hear about it.
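For context, a minimal sketch of the two workflows being compared (assuming a YOLOv8 model behind SAHI's AutoDetectionModel; the weight path, image names, slice sizes and output folder below are placeholders, not the exact configuration):

```python
from pathlib import Path

from sahi import AutoDetectionModel
from sahi.predict import get_prediction, get_sliced_prediction
from sahi.slicing import slice_image

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",       # placeholder weights
    confidence_threshold=0.4,
    device="cuda:0",
)

# Workflow 1: slicing happens inside the prediction call (the slow case above).
result = get_sliced_prediction(
    "large_image.png",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Workflow 2: slice once up front, then run a plain prediction per slice (the fast case).
slice_image(
    image="large_image.png",
    output_file_name="large_image",
    output_dir="slices/",
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
for slice_path in sorted(Path("slices/").glob("*.png")):  # extension depends on the input image
    get_prediction(str(slice_path), detection_model)
```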
-
Hello, SAHI will then perform a prediction on each of the images. For me, as seen in the image below, the whole prediction, which previously took ~300 s, now finished in 10.992 seconds at 8.34 it/s, i.e. roughly 120 ms average prediction time per image.
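For anyone who wants to reproduce such numbers, a rough timing sketch, assuming the slices already sit in a local folder; the model path and folder name are placeholders:

```python
import time
from pathlib import Path

from sahi import AutoDetectionModel
from sahi.predict import get_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",   # placeholder weights
    device="cuda:0",
)

slice_paths = sorted(Path("slices/").glob("*.png"))

start = time.perf_counter()
for slice_path in slice_paths:
    get_prediction(str(slice_path), detection_model)
elapsed = time.perf_counter() - start

print(
    f"total: {elapsed:.3f} s, "
    f"{len(slice_paths) / elapsed:.2f} it/s, "
    f"{1000 * elapsed / len(slice_paths):.0f} ms/image"
)
```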
-
Segmentation mask support for yolov8, mmdetection, detectron2, torchvision models is live with the latest release! @TTkgl https://github.com/obss/sahi/discussions/1051
There is also 5x memory and speed improvement when working with masks!
Check the demo notebook on how it works: https://github.com/obss/sahi/blob/main/demo/inference_for_detectron2.ipynb
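A small usage sketch of sliced prediction with masks, assuming a detectron2 instance-segmentation model; the weight and config paths below are placeholders, and details may differ slightly between SAHI versions:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="detectron2",
    model_path="model_final.pth",              # placeholder weights
    config_path="mask_rcnn_R_50_FPN_3x.yaml",  # placeholder detectron2 config
    confidence_threshold=0.5,
    device="cuda:0",
)

result = get_sliced_prediction(
    "image.png",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Each object prediction now carries a segmentation mask alongside its box.
for pred in result.object_prediction_list:
    print(pred.category.name, pred.score.value, pred.mask is not None)

# Writes visualizations (boxes + masks) to the given folder.
result.export_visuals(export_dir="outputs/")
```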