How can I evaluate our VLA model on the BEHAVIOR-1K dataset? #959
Answered by ChengshuLi (Oct 10, 2024)
Replies: 2 comments
-
Can you provide more details about your VLA model?

In general, you can get observations from the environment (`env.get_obs()`), such as RGB, depth, segmentation, etc. Your model can then predict an action, which you execute in the environment (`env.step(action)`). For instance, with an `IKController` for the arm, the action might mean "moving the end-effector forward by 5cm". You can also check out the high-level action primitives that we provide, where you can specify a primitive act…
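For concreteness, here is a minimal sketch of that observe-predict-step loop. It assumes a gymnasium-style `reset`/`step` API (return signatures vary across OmniGibson versions), and the scene/robot config and the `DummyVLAPolicy` class are illustrative placeholders for your own task setup and OpenVLA/RT-style policy:

```python
import omnigibson as og

# Illustrative config; swap in the scene, robot, and observation
# modalities that your task and model expect.
cfg = {
    "scene": {"type": "InteractiveTraversableScene", "scene_model": "Rs_int"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb", "depth"]}],
}
env = og.Environment(configs=cfg)


class DummyVLAPolicy:
    """Hypothetical stand-in for an OpenVLA / RT-style policy."""

    def __init__(self, action_space):
        self.action_space = action_space

    def predict(self, obs):
        # A real VLA model would map the observations (plus a language
        # instruction) to a low-level action vector that matches the
        # robot's controller setup; here we just sample a random action.
        return self.action_space.sample()


policy = DummyVLAPolicy(env.action_space)
obs, info = env.reset()
for _ in range(1000):
    action = policy.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

If you would rather evaluate on top of the high-level action primitives mentioned above, the pattern looks roughly like the following (based on OmniGibson's `StarterSemanticActionPrimitives`; the constructor arguments and the chosen primitive/object are assumptions to check against your installed version):

```python
from omnigibson.action_primitives.starter_semantic_action_primitives import (
    StarterSemanticActionPrimitives,
    StarterSemanticActionPrimitiveSet,
)

# `env` as above; the primitive controller turns a symbolic command such
# as "grasp this object" into a stream of low-level actions.
controller = StarterSemanticActionPrimitives(env)
target_obj = env.scene.object_registry("name", "some_object")  # hypothetical object name

for action in controller.apply_ref(StarterSemanticActionPrimitiveSet.GRASP, target_obj):
    env.step(action)
```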
Hope this helps!
Answer selected by ChengshuLi
-
Thank you for your detailed reply. I want to run the OpenVLA, RT-2, and RT-1 models; I'm going to try them based on your instructions.