Port to VORC #19
Conversation
Minor issue with the ghostship example solution. On termination, I'm getting a Traceback:
For reference, with …
Probably just some termination cleanup issue. Could you look into it? Not a big problem, but it would look cleaner without it.
I created a meta-ticket #20 tracking the followup issues. The "VORC Essentials" items anyone can do; it's really just playing around with the world and figuring out where to put buoys etc. Fun task. I'm happy to hand it off to someone else. The "Infrastructure" items I'd really like to get fixed. That will make my life easier and the code more rigorous (the current lack of rigor bothers me).
@mabelzhang It must be that my node doesn't handle shutdown well. I couldn't reproduce the output, so I simplified it to a bash script publishing on the cora thrust command topic. I'll check out the VORC essentials and make a note of which ones I'm handling.
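For reference, a rough Python/rospy equivalent of that kind of simplified forward-thrust script might look like the sketch below; the /cora/thrusters/... topic names and the std_msgs/Float32 message type are assumptions on my part, not confirmed by this thread.

```python
#!/usr/bin/env python
# Hypothetical sketch: drive CoRa forward with a constant thrust command.
# Topic names and message type are assumptions; adjust to the actual VORC interface.
import rospy
from std_msgs.msg import Float32

rospy.init_node('ghostship')
left_pub = rospy.Publisher('/cora/thrusters/left_thrust_cmd', Float32, queue_size=1)
right_pub = rospy.Publisher('/cora/thrusters/right_thrust_cmd', Float32, queue_size=1)

rate = rospy.Rate(10)  # publish at 10 Hz
while not rospy.is_shutdown():
    left_pub.publish(Float32(1.0))   # constant forward thrust on both thrusters
    right_pub.publish(Float32(1.0))
    rate.sleep()
```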
Do you think it's worthwhile to commit the example solution to the repo as well? While I was testing, I wondered a few times what the content of example_team was and wanted to just change the topic names to VORC ones, but I had no access to any code. I think it'd be helpful. It would also be a useful reference for the shutdown handling.
I took a look at the content of both example_team* solutions (you can view it while the container is running with …). I'm still curious how to properly handle shutdown scenarios, and whether they were handled in vrx or whether the traceback occurred for each team (that presumably didn't know how to handle it). I'll see if I can figure it out going forward, because I think it would be helpful.
Looks like there's a …
Ah, ok, I'll give that a try right now. Thanks!
Added the try/except. The output looks nicer on my end; let me know if it makes a difference for you (it's uploaded). EDIT: Trying it with the vorc-docker branch now.
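For reference, a minimal sketch of what that kind of try/except shutdown handling could look like; this is an assumption about the fix rather than the actual diff, and the /cora/... topic names are assumed as in the sketch above.

```python
# Hypothetical sketch: catch the exception that rate.sleep() raises when the node
# is shut down mid-sleep, so terminating the trial doesn't print a traceback.
import rospy
from std_msgs.msg import Float32

rospy.init_node('ghostship')
left_pub = rospy.Publisher('/cora/thrusters/left_thrust_cmd', Float32, queue_size=1)
right_pub = rospy.Publisher('/cora/thrusters/right_thrust_cmd', Float32, queue_size=1)
rate = rospy.Rate(10)

try:
    while not rospy.is_shutdown():
        left_pub.publish(Float32(1.0))
        right_pub.publish(Float32(1.0))
        rate.sleep()
except rospy.ROSInterruptException:
    # Raised during shutdown; exiting quietly here avoids the traceback on termination.
    pass
```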
So! I ran it with the … Your suggestion looks like a good way forward for shutdown handling. On a side note, I really struggled getting everything working with my Docker environment. I would receive the following error:
This error is covered in maxking/docker-mailman#85, and I believe it occurs when Docker networks persist even after the containers are killed.
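For what it's worth, one hedged way to clear leftover networks along those lines is sketched below; the network names are placeholders (check docker network ls for the real ones), and this is not necessarily the fix that was applied here.

```python
# Hypothetical cleanup of Docker networks that persist after containers are killed.
# The network names are placeholders; list the real ones with `docker network ls`.
import subprocess

leftover_networks = ["vorc-network", "vrx-network"]  # assumed names
for net in leftover_networks:
    # `docker network rm` simply reports an error if the network doesn't exist.
    subprocess.run(["docker", "network", "rm", net], check=False)
```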
Thanks for trying it out! I tried out the new ghostship solution, and the output is clean now. Hmm, I've seen that … Normally, in … If that network message still happens intermittently, maybe something else still needed to be updated for VORC, but I'm not finding any other vrx things remaining...
It's possible then that I was running …
Yeah, they are using the exact same IP and subnet mask; only the name is different. So that could have been it.
Whoops! Thanks for straightening that out for me :)
Update about seg fault: |
I managed to run all three individual tasks. Gazebo was terminated at the end of each run, and I got scores in all of them. It looks good to me.
Could that segfault occur while trying to shut down Gazebo? This is an old issue that happens sometimes in Gazebo. In any case, it doesn't seem to affect anything.
Maybe? The weird thing is that the segmentation fault printout is not at the very end, but near the beginning or in the middle, before the rest of the scoring plugin printouts. Though it could be a difference in when things are flushed. |
@crvogt How much work is it to create a second solution Docker image, just so that we have more than one team to test the multi-scripts? It could just be something trivial again, perhaps the boat moving backwards, i.e. "ghostship is back"... or something more creative. (I've deleted …)
I'm going to merge this now since it's been approved, so that we have a base to run the competition. Additions and fixes can be in followup PRs. I know we have at least 2 PRs coming up.
Should only take a few minutes! (New employee orientation on Friday, so I didn't get a chance to implement it.) "pihstsohg"? :D
@mabelzhang Added a new team.
Thanks! It's working for me. I'll open a new PR and add you as reviewer.
Ahaha, it's been anglicized :D
Dependent on osrf/vrx#228 and osrf/vorc#30.
The repository has been adapted to VORC, to live in a new branch.
All the individual scripts and multi-scripts ran.
Things in this PR that are different from the main branch:
- Video recording now passes --windowid to recordmydesktop, so the evaluator does not need to manually tweak x y width height.
- prepare_team.bash still generates an empty file with the team name, because one of the scripts looks at the file names to determine the list of teams (see the sketch after this list).
- task_config YAML files have new parameters added for VORC (dependent on the PR in vrx). Note 1: only trial 0 for each task is updated with coordinates for VORC; I haven't had time to customize subsequent trials. Note 2: Gymkhana will need a new YAML file.
- Removed the files in generated/, since we don't have permanent example files yet. Once we do, we can add them back, and probably add the directory to .gitignore, so our local files don't continuously get committed.
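As a rough illustration of the filename-based team discovery mentioned above, a sketch is below; the directory name and layout are assumptions, not taken from the actual scripts.

```python
# Hypothetical illustration: derive the list of teams from the file names in a directory.
# The path is a placeholder; the real scripts may look elsewhere.
from pathlib import Path

team_dir = Path("generated/team_generated")  # assumed location of the per-team files
teams = sorted(p.stem for p in team_dir.iterdir() if p.is_file())
print(teams)  # e.g. ['example_team', 'ghostship']
```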
Issues
There may still be intermittent seg faults. If you see one, please let me know when it happens. They sometimes happen, but haven't in my last runs, so I don't know whether they're fixed or not.
After realizing I can set gui:=true in the server Docker (duh) to debug visually, I saw that the boat was actually in the world, contrary to what I saw in the GUI that the video recording script had spun up, which requires a workspace to also exist on the host machine. A number of things were broken there because I don't usually develop on my host machine.
(That itself is a huge problem, because it leads to inconsistencies between what is run in the actual competition server Dockerfile and what is being recorded in the video - in some arbitrary environment on some evaluator's own host machine, which could be very different from the reference environment in the server Docker. The whole point of having a Dockerfile is to have everything consistent, and videos should really be recorded from a window in Docker, as opposed to from the host machine. That really needs to be fixed.)
Other than that, there are a number of things that need to be more rigorous and follow good practices. I’m going to open followup issues for them.
Once those issues are cleaned up, similar to the video problem, things will be less error-prone, and there will be a lot less hair to pull.
To test
Follow the README :) Or this shorter version below.
First, I recommend going into vorc_server/vorc-server/run_vorc_trial.sh and setting gui:=true in the roslaunch vorc_gazebo evaluation.launch line. This will help the reviewer (and help me) know that the competition run really works for everyone.
Then, build the server Docker (-n for NVIDIA):
Single scripts:
In the trials, please zoom out in the Gazebo GUI (build the Dockerfile with gui:=true, see above), make sure the marina shows up, the robot and task objects show up, and everything looks normal.
Currently, only trial 0 objects are customized to VORC world coordinates. You can try trial 1+, but things probably won't look right.
With the ghostship solution specific to VORC, when the task starts, you should see the robot moving forward.
With the example_team and example_team_2 solutions specific to VRX (we will remove them once we have more examples), nothing will happen, but things should still run.
Batch scripts:
Note that example_team and example_team2 won’t be able to move CoRa, since they’re set up to send commands to WAM-V topics.