
How can I evaluate our VLA model on the BEHAVIOR-1K benchmark? #959

Answered by ChengshuLi
RZFan525 asked this question in Q&A

Can you provide more details about your VLA model?

  • V: You can get the robot's onboard sensor observations via env.get_obs(), including RGB, depth, segmentation, etc.
  • L: You can find the task definitions of the 1,000 BEHAVIOR-1K activities, written in BDDL, a predicate-logic language inspired by PDDL. You can feed these to your model as a form of formal language.
  • A: You can feed low-level actions to the environment through a typical gym interface (env.step(action)). For instance, with an IKController for the arm, an action might mean "move the end-effector forward by 5 cm". You can also check out the high-level action primitives that we provide, where you can specify a primitive act… A minimal evaluation-loop sketch follows below.

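For concreteness, here is a minimal evaluation-loop sketch assuming a standard OmniGibson setup. The scene, robot, observation modalities, and activity name in the config are placeholders, my_vla_policy stands in for your model's inference call, and the exact reset()/step() return signatures can vary between OmniGibson versions.

```python
import omnigibson as og

# Hypothetical config: the scene, robot, observation modalities, and
# activity name below are placeholders -- adjust them to your setup.
cfg = {
    "scene": {"type": "InteractiveTraversableScene", "scene_model": "Rs_int"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb", "depth", "seg_instance"]}],
    "task": {"type": "BehaviorTask", "activity_name": "putting_away_groceries"},
}
env = og.Environment(configs=cfg)

def my_vla_policy(obs):
    # Placeholder for your VLA model's inference: map onboard sensing (V)
    # plus the BDDL task description (L) to a low-level action (A).
    # Here we just sample a random action with the correct shape.
    return env.action_space.sample()

obs = env.reset()  # newer OmniGibson versions return (obs, info) instead
for _ in range(1000):
    action = my_vla_policy(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

og.shutdown()
```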

This discussion was converted from issue #958 on October 10, 2024 17:53.