Detect a known object #5
Update (Tue 5 2020):
From @JHLee0513's shared results:
This is better than I expected, given that these objects weren't explicitly trained for. The bounding boxes do look funky though, like something is wrong with NMS or they're being drawn with a dimension skewed.
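One common culprit (a guess, not confirmed from our code) is a mismatch between (x, y, w, h) and (x1, y1, x2, y2) box conventions: torchvision's NMS expects corner format, so width/height boxes fed into it, or corner boxes drawn as width/height, come out skewed. A minimal sanity-check sketch, with hypothetical names:

```python
# Hypothetical sanity check, not part of our detector code.
# torchvision.ops.nms expects (x1, y1, x2, y2); if boxes arrive as
# (x, y, w, h), both NMS and the drawn rectangles look wrong.
import torch
from torchvision.ops import nms

def xywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    """Convert (x, y, w, h) boxes to (x1, y1, x2, y2)."""
    x, y, w, h = boxes.unbind(-1)
    return torch.stack([x, y, x + w, y + h], dim=-1)

boxes_xywh = torch.tensor([[10.0, 20.0, 50.0, 80.0],
                           [12.0, 22.0, 50.0, 80.0]])
scores = torch.tensor([0.9, 0.8])

boxes_xyxy = xywh_to_xyxy(boxes_xywh)
keep = nms(boxes_xyxy, scores, iou_threshold=0.5)  # indices of kept boxes
print(boxes_xyxy[keep])
```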
@nickswalker Do we have any storage/GPU solution for handling the YCB videos dataset? I could try using one of the RSE lab machines, though I'd have to confirm its availability (since the dataset is 265G...).
I think the update from @csemecu is that she'll check if we can use one of the VR capstone's machines as a short-term solution. We should discuss more during Monday's meeting.
@nickswalker As a follow-up on categories, should we include classes from both COCO and YCB in the final perception system? For quickly testing the whole pipeline I will fine-tune on YCB only for now, so we can inspect results using objects we actually have.
Most of the COCO classes are irrelevant for us, so no need to include them.
Update (Feb 18): got the model to start training; progress was delayed due to midterms :/ I will keep posting updates on training speed, and on inference once it's trained, ASAP.
Based on what @JHLee0513 has shown, we seem to be well above this bar now. Future work is in making sure we can quickly train in additional classes (labeling pipeline #7) and in connecting 2D and 3D perception (like what's happening for pick and place, and eventually for receptionist #13). |
Ah, but there's no code tracked for this anywhere. @JHLee0513 open a branch please. |
Branch opened here. The code is currently under heavy modification (and FYI, I'm not too familiar with integrating another repo inside as a submodule).
Let's discuss how to handle packaging tomorrow.
We've pulled the detection Python code in as a git submodule and set up a catkin package around it. The code isn't really in a usable state yet because it's unclear how to get any data out over ROS; the model is built in PyTorch and requires Python 3, but rospy is Python 2 only, so we can't just open up publishers.
@nickswalker rospy in Melodic seems to support Python 3 (not tested personally, though there are many straightforward blog posts/tutorials about it online). If that's the case, would it be possible to set up a publisher as normal?
Yes, as long as rospy is working under Python 3 we should be good. Let's test that as soon as we can. We should also check that
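If rospy does import cleanly under Python 3, a minimal sketch of the detection node could look like the following. Topic names and the `run_detector` call are placeholders for our actual model, and this assumes the `vision_msgs` package is installed for the output message type:

```python
#!/usr/bin/env python3
# Sketch only: assumes rospy works under Python 3 on Melodic and that
# vision_msgs is available. run_detector and topic names are placeholders.
import rospy
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2D, Detection2DArray, ObjectHypothesisWithPose

def run_detector(img_msg):
    """Placeholder: would run the PyTorch model and return
    a list of (class_id, score, cx, cy, w, h) tuples."""
    return []

def image_cb(img_msg):
    out = Detection2DArray()
    out.header = img_msg.header
    for class_id, score, cx, cy, w, h in run_detector(img_msg):
        det = Detection2D()
        det.header = img_msg.header
        hyp = ObjectHypothesisWithPose()
        hyp.id = class_id
        hyp.score = score
        det.results.append(hyp)
        det.bbox.center.x = cx
        det.bbox.center.y = cy
        det.bbox.size_x = w
        det.bbox.size_y = h
        out.detections.append(det)
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("object_detector")
    pub = rospy.Publisher("detections", Detection2DArray, queue_size=1)
    rospy.Subscriber("camera/rgb/image_raw", Image, image_cb, queue_size=1)
    rospy.spin()
```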
Input: camera image
Output: bbox detection, or sufficient information such that object centroid can be estimated
For our pick and place milestone, it doesn't matter what object we can detect (preferably it's a YCB object in the set of RoboCup items). The goal is to have a working detection pipeline that we can evaluate end-to-end with manipulation.
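For the "sufficient information such that object centroid can be estimated" part, one option is to back-project the bbox center through the depth image using the pinhole intrinsics from the camera's CameraInfo. A rough sketch (hypothetical helper, not committed code):

```python
# Hypothetical helper: estimate an approximate object centroid in the camera
# optical frame from a 2D bbox plus a registered depth image.
import numpy as np

def bbox_centroid_3d(depth_m: np.ndarray, bbox_xyxy, fx, fy, cx, cy):
    """depth_m: HxW depth in meters; bbox_xyxy: (x1, y1, x2, y2) in pixels;
    fx, fy, cx, cy: pinhole intrinsics from CameraInfo."""
    x1, y1, x2, y2 = [int(v) for v in bbox_xyxy]
    patch = depth_m[y1:y2, x1:x2]
    valid = patch[np.isfinite(patch) & (patch > 0)]
    if valid.size == 0:
        return None  # no usable depth inside the box
    z = float(np.median(valid))              # robust depth estimate for the object
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # bbox center in pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])               # meters, camera optical frame
```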