I'm not sure if this is the right place to report issues with https://github.com/ppwwyyxx/cocoapi -- that repo doesn't have its own Issues tab, so I'm opening an issue here instead.
I'm confused by how pycocotools calculates the average precision and recall metrics reported in the summary. I'm not sure whether this is actually a bug or whether I'm just fundamentally misunderstanding how these calculations are done under the hood. So I wrote a very simple test case: two ground-truth bboxes and two predicted bboxes that overlap them perfectly, passed into COCOeval:
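Below is a minimal sketch of what that test case looked like; the image size, ids, and exact box coordinates are illustrative rather than the precise values I used, so this sketch may not reproduce the numbers below exactly:

```python
# Minimal sketch: ids, image size, and box coordinates are illustrative.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = {
    "images": [{"id": 1, "width": 1000, "height": 1000}],
    "categories": [{"id": 1, "name": "object"}],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 100, 400, 400], "area": 400 * 400, "iscrowd": 0},
        {"id": 2, "image_id": 1, "category_id": 1,
         "bbox": [550, 550, 400, 400], "area": 400 * 400, "iscrowd": 0},
    ],
}

# Predictions: the same two boxes, copied verbatim, with score 1.0.
dt = [
    {"image_id": 1, "category_id": 1, "bbox": [100, 100, 400, 400], "score": 1.0},
    {"image_id": 1, "category_id": 1, "bbox": [550, 550, 400, 400], "score": 1.0},
]

coco_gt = COCO()
coco_gt.dataset = gt
coco_gt.createIndex()
coco_dt = coco_gt.loadRes(dt)  # loadRes also accepts an in-memory list of dicts

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```

Here is the output: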
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.252
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.252
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.252
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.252
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.500
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.500
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.500
I believe these boxes fall into the "large" area range (COCO counts anything with area above 96² = 9216 px² as large), and the summary reports AP=0.252 and AR=0.500 for them. These numbers don't make sense to me: the ground-truth and predicted boxes are 100% identical, so I'd expect both average precision and average recall to be 1.0. Am I misunderstanding something, or is there a bug in how these metrics are calculated?
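In case it helps with diagnosing this, the summary values can also be read straight out of `coco_eval.eval` after `accumulate()`. A sketch of the indexing, assuming the default evaluation parameters (10 IoU thresholds, 101 recall points, area ranges all/small/medium/large, maxDets = 1/10/100):

```python
import numpy as np

# eval["precision"] has shape [T, R, K, A, M]:
#   T = 10 IoU thresholds (0.50:0.05:0.95), R = 101 recall points,
#   K = categories, A = 4 area ranges (all/small/medium/large), M = len(maxDets) = 3.
precision = coco_eval.eval["precision"]
recall = coco_eval.eval["recall"]  # shape [T, K, A, M]

# AP @[ IoU=0.50:0.95 | area=all | maxDets=100 ]: mean over entries != -1
ap_all = precision[:, :, :, 0, 2]
print("AP (all, maxDets=100):", np.mean(ap_all[ap_all > -1]))

# AR @[ IoU=0.50:0.95 | area=all | maxDets=100 ]
ar_all = recall[:, :, 0, 2]
print("AR (all, maxDets=100):", np.mean(ar_all[ar_all > -1]))
```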