To make the infrastructure of this benchmarking repository portable and extensible to new test cases and new simulators, these are the necessary items to implement:
The above should first be done and documented for the boxes case.
When the infrastructure above is working for the boxes, new test cases can be added to cover, e.g., joints and contacts. One example is a tri-ball: three balls connected rigidly by bars.
An accuracy metric would need to be determined for each new test.
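As a rough illustration of what such a metric could look like (the function and array names here are hypothetical, not existing repository code), a test could compare a simulated trajectory against a reference, such as an analytical solution, and report an aggregate error:

```python
# Hypothetical sketch of a per-test accuracy metric: compare a simulated
# trajectory against a reference trajectory sampled at the same timestamps.
# Array names and shapes are assumptions for illustration only.
import numpy as np

def trajectory_error(sim_positions: np.ndarray, ref_positions: np.ndarray) -> dict:
    """Return max and RMS position error between two (N, 3) trajectories."""
    err = np.linalg.norm(sim_positions - ref_positions, axis=1)
    return {
        "max_error": float(err.max()),
        "rms_error": float(np.sqrt(np.mean(err ** 2))),
    }
```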
Extend to other simulators. World files should be created for each additional simulator based on the documented world definitions, and corresponding tests should then be written for those simulators.
Hi @mabelzhang, I was looking into migration and logging for the new Gazebo and had a few questions about implementation.
The new link API is missing some functions, for example GetWorldEnergy. I think these are on the roadmap for new Gazebo and will be implemented in the future. However, would it be a good idea to backport these functions from classic as part of the project? This might be helpful if we plan to add new worlds.
Also, I was thinking of using Gazebo's logging functionality to log link states. By default, the poses published by pose_publisher are logged into the .log files. We could write a plugin similar to pose_publisher that publishes link states (pose, velocity, energy, and momentum) to a specific topic, add this topic to the recorder for logging, and then convert the log to a CSV file for metric calculation.
Or
Simply publish and log only poses and velocities, and calculate energy and momentum in postprocessing scripts, since other simulators might not have the functionality to get and log link energy and momentum directly. I think this would be a better approach for integrating other simulators with minimal effort. What is your opinion on this?
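For context, here is a rough sketch of what that postprocessing step could look like for a single rigid link (the function name, frame conventions, and gravity constant are assumptions for illustration, not existing repository code):

```python
# Hypothetical postprocessing sketch: derive energy and momentum for one rigid
# link from logged velocities, so the simulator only needs to log poses and
# velocities. All quantities are assumed to be expressed in the world frame.
import numpy as np

GRAVITY = 9.81  # m/s^2, assumed world gravity magnitude

def link_energy_momentum(mass, inertia_world, lin_vel, ang_vel, height):
    """mass: scalar; inertia_world: (3, 3) inertia matrix in the world frame;
    lin_vel, ang_vel: (3,) velocities of the center of mass; height: z position."""
    kinetic = 0.5 * mass * lin_vel.dot(lin_vel) \
        + 0.5 * ang_vel.dot(inertia_world @ ang_vel)
    potential = mass * GRAVITY * height
    linear_momentum = mass * lin_vel
    angular_momentum = inertia_world @ ang_vel  # about the center of mass
    return kinetic + potential, linear_momentum, angular_momentum
```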