
Added opensim-rl environment, extended DQN agent for multi-dimensional action spaces, and a sample configuration with options to configure an agent to learn in opensim-rl #13

Open
wants to merge 4 commits into master

Conversation

praveen-palanisamy
Contributor

opensim-rl is an environment introduced by the NIPS 2017 "Learning to Run" challenge. In this environment, an agent is tasked with learning to run while avoiding obstacles on the ground. The environment provides a human musculoskeletal model and a physics-based simulation. It will remain useful for training agents on much more complex continuous-control tasks even after the NIPS challenge ends, and can serve as a good alternative or complement to MuJoCo-based environments.

Contributions:

… this environment, the agent is tasked with learning to run while avoiding obstacles on the ground
…n) in the opensim-rl environment. NOTE: This is just a sample for configuring an agent to train in the opensim-rl env. DQN without NAF or other improvements is not suitable for the continuous action space of the opensim-rl env.
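To illustrate what "extending DQN for a multi-dimensional action space" can look like, here is a minimal sketch (not necessarily this PR's implementation) that discretizes each continuous action dimension into a fixed number of bins and takes an independent argmax per dimension. The function name `select_action` and the independent-heads layout of the Q-values are assumptions for illustration only:

```python
import numpy as np

def select_action(q_values, num_bins):
    """Greedy action for a multi-dimensional discretized action space.

    q_values: array of shape (action_dims, num_bins) -- one row of
    Q-estimates per action dimension (independent-heads assumption).
    Returns a continuous action vector with each component in [0, 1],
    matching the muscle-activation range used by opensim-rl.
    """
    # Pick the best bin independently for each action dimension.
    bin_indices = np.argmax(q_values, axis=1)
    # Map each bin index back to a continuous value in [0, 1].
    return bin_indices / (num_bins - 1)

# Toy example with 2 action dimensions and 3 bins each.
q = np.array([[0.1, 0.9, 0.2],
              [0.5, 0.4, 0.3]])
print(select_action(q, 3))  # -> [0.5 0. ]
```

This per-dimension discretization keeps the output layer size linear in the number of action dimensions (dims × bins) instead of exponential (bins^dims), which is why it is a common first step before moving to NAF or an actor-critic method for truly continuous control.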