
A script to incorporate large dataset #204

Open
woensug-choi opened this issue Feb 16, 2022 · 2 comments
Labels
enhancement New feature or request

Comments

@woensug-choi
Collaborator

I wonder how we are going to distribute high-resolution bathymetry data.

One thing we could do is create a script that 1) downloads the dataset, 2) runs the Bathymetry_Converter on the dataset, and 3) installs the output to a share directory at build time. Once we have such a script, we would just need to modify the CMakeLists.txt file for the worlds package so that it executes during catkin build. This has worked well for other projects where worlds needed to be auto-generated and where large 3D models needed to be made available to downstream users. Happy to discuss this further if required.

Originally posted by @Yadunund in #202 (comment)
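The three-step flow quoted above could be sketched as a build-time script along these lines. Note that the dataset URL, the share-directory path, and the `bathymetry_converter` command name are all hypothetical placeholders, not the actual Dave project layout or converter CLI:

```python
#!/usr/bin/env python3
"""Sketch of a build-time script that 1) downloads a bathymetry dataset,
2) runs the converter on it, and 3) installs the output into the worlds
package's share directory. All paths, the URL, and the converter command
are illustrative assumptions."""
import subprocess
import urllib.request
from pathlib import Path

DATASET_URL = "https://example.com/bathymetry/monterey_bay.tif"  # hypothetical
SHARE_DIR = Path("install/share/dave_worlds/bathymetry")         # hypothetical


def converter_cmd(raw: Path, out_dir: Path) -> list:
    """Build the (assumed) converter command line; adjust to the real tool."""
    return ["bathymetry_converter", str(raw), "-o", str(out_dir)]


def main() -> None:
    work = Path("build/bathymetry_tmp")
    work.mkdir(parents=True, exist_ok=True)
    raw = work / Path(DATASET_URL).name

    # 1) download the dataset (skipped if a previous build already fetched it)
    if not raw.exists():
        urllib.request.urlretrieve(DATASET_URL, raw)

    # 2) run the converter on the raw dataset
    subprocess.run(converter_cmd(raw, work), check=True)

    # 3) install the generated meshes to the share directory
    SHARE_DIR.mkdir(parents=True, exist_ok=True)
    for mesh in work.glob("*.dae"):
        mesh.replace(SHARE_DIR / mesh.name)


if __name__ == "__main__":
    main()
```

Wiring this into `CMakeLists.txt` (e.g. via `add_custom_target`) would then make the download/convert/install run as part of `catkin build`.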

@woensug-choi
Collaborator Author

I thought it would be more appropriate to discuss it here on the issue page.
First of all, it would not be a high priority, but I wanted to discuss and learn about it :)

Wouldn't your solution require a user to download and convert the dataset even if they are not using it? Or would it only download/convert when running a launch file that needs it?

@Yadunund
Member

Hi @woensug-choi ,

Thanks for creating this ticket to discuss the matter. 😄

You're right, with this approach the download/conversions would happen for all the worlds. We can decouple the download and the conversion into separate scripts. The downloaded datasets can be stored in ~/.gazebo or somewhere else appropriate; the downloader script would then check whether the dataset already exists in the cache and only download the files if it does not.

We would then store only a yaml file (for example) in the dave_worlds pkg for each world that we want to generate. This yaml file would follow a schema specifying which dataset should be used, parameters for the conversion, etc. The converter script would parse this schema and invoke the Bathymetry_Converter script accordingly.

In the future we could also have a GUI to perform operations like cropping or even marking spawn locations for robots. This GUI would read/write to the same yaml file.
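The cache-aware downloader described above could look roughly like this. The per-world config (shown here as a plain dict standing in for the proposed yaml file), the cache location, and all field names are assumptions for illustration only:

```python
#!/usr/bin/env python3
"""Sketch of the decoupled workflow: a per-world config describes the
dataset and conversion parameters, and the downloader checks a local
cache before fetching. Schema, paths, and URL are illustrative."""
import urllib.request
from pathlib import Path

# Hypothetical per-world config as it might appear in dave_worlds;
# in practice this would be parsed from a yaml file following the schema.
WORLD_CONFIG = {
    "dataset_url": "https://example.com/bathymetry/monterey_bay.tif",
    "converter_args": {"resolution": 10},
}

CACHE_DIR = Path.home() / ".gazebo" / "bathymetry_cache"  # suggested location


def fetch_dataset(url: str, cache_dir: Path = CACHE_DIR) -> Path:
    """Return the cached dataset path, downloading only when it is missing."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    target = cache_dir / Path(url).name
    if not target.exists():  # cache hit: skip the download entirely
        urllib.request.urlretrieve(url, target)
    return target
```

A converter script would then call `fetch_dataset(WORLD_CONFIG["dataset_url"])` and pass the returned path plus `converter_args` on to Bathymetry_Converter, so nothing is downloaded twice across worlds that share a dataset.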

We've adopted this workflow for a different project where we want users to be able to generate custom 3D worlds from 2D drawings which they can annotate with features/models using a GUI. Here's the reference for that: https://github.com/open-rmf/rmf_traffic_editor

Another alternative is for us to upload all the sdf models + meshes currently in dave_object_models, to the Dave fuel collection which can be automatically downloaded during build or launch.

Just throwing some ideas out there. Let me know what you think!

@woensug-choi woensug-choi added the enhancement New feature or request label Feb 21, 2022