
Detection ground truths don't match the coco17 validation ground truths #10

Open · Britefury opened this issue Jul 25, 2018 · 4 comments

Britefury (Contributor) commented Jul 25, 2018

Hi,

I have been taking a look at the ground truths that you've provided for the detection competition. I have noticed that the GT boxes in val_ground_truth.pkl do not match those in coco17-val.txt; it seems that they have been scaled. A consistent x,y scale factor is used within each image, but I am unable to determine the pattern/algorithm used to compute the scale factors. Would it be possible to replace val_ground_truth.pkl with a file generated by direct conversion?
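For what it's worth, a direct conversion could look roughly like the sketch below. The line layout of coco17-val.txt (image name followed by x0 y0 x1 y1 class groups) and the output structure are my guesses, so adjust the parsing to whatever the real format is:

```python
import pickle

# Hypothetical sketch: assumes each line of coco17-val.txt is an image
# name followed by groups of five numbers (x0, y0, x1, y1, class_id).
ground_truth = {}
with open('coco17-val.txt') as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        values = [float(v) for v in parts[1:]]
        # Group the flat list into per-box records of five values each.
        ground_truth[parts[0]] = [values[i:i + 5] for i in range(0, len(values), 5)]

with open('val_ground_truth_direct.pkl', 'wb') as f:
    pickle.dump(ground_truth, f)
```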

I have created a pull request (#11) that adds a conversion script (in case you're interested) and also changes the detection README slightly to clarify the first problem I had concerning the format :)

Britefury (Contributor, Author) commented Jul 25, 2018

Further investigation leads me to conclude that most of the COCO image ground truths were scaled to a resolution of 300x300, with some scaled to either 300x150 or 150x300.
I can work around this by computing the per-image scale factor and applying it to my predictions, but this won't work at the test stage, since I won't have the ground truths from which to compute the scale factors.
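Concretely, the workaround looks something like this sketch; the (N, 4) array layout, the (x0, y0, x1, y1) coordinate order, and the assumption that boxes appear in the same order in both files are all guesses on my part:

```python
import numpy as np

def per_image_scale(scaled_boxes, original_boxes):
    # Median elementwise ratio of matched boxes -> (sx, sy); assumes both
    # arrays are (N, 4) in (x0, y0, x1, y1) order, with boxes in matching order.
    s = np.asarray(scaled_boxes, dtype=float)
    o = np.asarray(original_boxes, dtype=float)
    mx = o[:, [0, 2]] != 0  # skip zero coordinates to avoid division by zero
    my = o[:, [1, 3]] != 0
    sx = np.median(s[:, [0, 2]][mx] / o[:, [0, 2]][mx])
    sy = np.median(s[:, [1, 3]][my] / o[:, [1, 3]][my])
    return sx, sy

def rescale_predictions(pred_boxes, sx, sy):
    # Apply the recovered per-image (sx, sy) to predictions before submission.
    p = np.asarray(pred_boxes, dtype=float).copy()
    p[:, [0, 2]] *= sx
    p[:, [1, 3]] *= sy
    return p
```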

Britefury changed the title from "Detection ground truths scaled" to "Detection ground truths don't match the coco17 validation ground truths" on Jul 26, 2018
MInner (Collaborator) commented Jul 26, 2018

Thanks a lot for this finding and the pull request! Let me take a closer look. That is weird because I remember specifically addressing this issue at some point.

Britefury (Contributor, Author) commented Jul 27, 2018

Okay, I've managed to make a submission to CodaLab and achieve a score that beats simple supervised training.

In order to do this, though, I had to compare val_ground_truth.pkl in the repo with coco17-val.txt, compute the per-image bounding box scale factors, and apply them to the predictions from my algorithm before submitting; otherwise I get a mAP score of around 0.1% / 0.001.

Britefury (Contributor, Author) commented

I've added a script to the PR that compares the GT boxes in coco17-val.txt with those in val_ground_truth.pkl and prints the per-image scale factor. If you run it, you will notice that a scale factor != 1 is applied to the boxes for every image.
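Roughly, the comparison does the following (again, the txt parsing and the pickle layout, an image name mapped to an (N, 4) box array, are assumptions on my part):

```python
import pickle
import numpy as np

def load_txt_gt(path):
    # Hypothetical loader: image name followed by x0 y0 x1 y1 class groups;
    # keeps only the four box coordinates of each group.
    gt = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts:
                v = [float(x) for x in parts[1:]]
                gt[parts[0]] = [v[i:i + 4] for i in range(0, len(v), 5)]
    return gt

txt_gt = load_txt_gt('coco17-val.txt')
with open('val_ground_truth.pkl', 'rb') as f:
    pkl_gt = pickle.load(f)

for name, orig in sorted(txt_gt.items()):
    s = np.asarray(pkl_gt[name], dtype=float)
    o = np.asarray(orig, dtype=float)
    mask = o != 0  # one median mixes x and y; split per axis for 300x150 images
    print('{}: median scale {:.4f}'.format(name, np.median(s[mask] / o[mask])))
```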
