
Sim-to-real deployment help/questions #12

Open
aeites opened this issue Jun 26, 2024 · 1 comment

Comments


aeites commented Jun 26, 2024

Hi, I had a few questions about the sim-to-real deployment for the globe_walking task.

We deployed globe_walking/runs/globe_walking/dr_eureka_best in the real world with a Unitree Go1 Edu on an 85 cm yoga ball, and I was unable to reproduce the results shown in the video.

When running python3 deploy_policy.py, there is a calibration phase followed by an execution phase, and we had a few questions about what happens during the calibration phase:

About to calibrate; the robot will stand [Press R2 to calibrate]
frq: 0.16061135416996541 Hz
Starting pose calibrated [Press R2 to start controller]
frq: 1.4131690123998784 Hz
frq: 49.76984597859364 Hz
frq: 49.75980828320936 Hz
About to calibrate; the robot will stand [Press R2 to calibrate]
Starting pose calibrated [Press R2 to start controller]

Questions:

  1. Could you share the process you used to get the robot to balance on the yoga ball during the calibration phase? It was very unstable, and we had to hold the yoga ball in place before pressing R2 the second time. We had a dog leash setup similar to what's shown in the video, but we found that we could only pull the robot back and could not push it forward to keep it stabilized. When we ran the controller, the robot could not stay on the ball for more than 1 second.
  2. Is the policy checkpoint committed here the best policy you were able to achieve? Is there another, more recent policy trained with your methods that you could share? We tried training our own policy using the methods in this repo and could not achieve the average 10 seconds of stability reported in the paper.
HARISKHAN-1729 commented

Hi, one question. I am trying to replicate Forward_walking. What is the memory requirement to run the system? I use a 24 GB RTX 3090 and adjusted the parameters, but CUDA runs out of memory during training. Second, did you face execution errors on some iterations, or did training succeed on every iteration?
