
MountainCarContinuous v0


Overview

Details

Name: MountainCarContinuous-v0
Category: Classic Control
Environment Page
Algorithms Page

Description

An underpowered car must climb a hill.

Source

This environment corresponds to the continuous version of the mountain car environment described in Andrew Moore's PhD thesis.

Environment

Observation

Type: Box(2)

Num   Observation     Min    Max
0     Car Position    -1.0   1.0
1     Car Velocity    -1.0   1.0

Actions

Type: Box(1)

Num   Action
0     Push the car to the left (negative value) or to the right (positive value)
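
A minimal sketch of creating the environment and inspecting these spaces, assuming the classic `gym` API where `step` returns `(observation, reward, done, info)`:

```python
import gym

# Create the environment and inspect its observation and action spaces.
env = gym.make("MountainCarContinuous-v0")
print(env.observation_space)  # Box(2): [position, velocity]
print(env.action_space)       # Box(1): signed force applied to the car

obs = env.reset()
print("initial observation:", obs)

# Push the car to the right with a moderate positive force.
obs, reward, done, info = env.step([0.5])
print("observation:", obs, "reward:", reward, "done:", done)

env.close()
```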

Reward

Reward is 100 for reaching the target on the hill on the right-hand side, minus the sum of squared actions taken from start to goal.

Note that this reward is unusual with respect to most published work, where the goal is to reach the target as fast as possible, which favours a bang-bang strategy.

The current reward function poses an exploration challenge: if the agent does not reach the target, it learns that it is better not to move at all (to avoid the action penalty), and it may then never reach the target.
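
As a rough illustration of the reward structure, the sketch below rolls out a simple hand-coded heuristic (push in the direction the car is already moving) and tracks the return alongside the accumulated squared actions. The heuristic is an assumption made here for illustration, not part of the environment:

```python
import gym
import numpy as np

env = gym.make("MountainCarContinuous-v0")
obs = env.reset()

total_reward = 0.0
squared_actions = 0.0
done = False
while not done:
    # Heuristic (assumed for illustration): push in the direction of the
    # current velocity, pumping energy into the oscillation until the car
    # can climb the right-hand hill.
    action = np.array([1.0 if obs[1] >= 0 else -1.0])
    obs, reward, done, _ = env.step(action)
    total_reward += reward                    # includes the action penalty
    squared_actions += float(action[0] ** 2)  # penalty grows with this sum

print("episode return:", total_reward)
print("sum of squared actions:", squared_actions)
env.close()
```

A full-throttle bang-bang policy like this reaches the goal but pays the largest possible action penalty, which is why less aggressive policies can score higher under this reward.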

Starting State

The car starts at a random position between -0.6 and -0.4, with zero velocity.
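
A quick way to see the starting distribution is to reset the environment a few times (again assuming the classic `gym` reset that returns only the observation):

```python
import gym

env = gym.make("MountainCarContinuous-v0")
# Position is drawn uniformly from [-0.6, -0.4]; velocity starts at 0.
for _ in range(3):
    position, velocity = env.reset()
    print(f"start position={position:.3f}, velocity={velocity:.3f}")
env.close()
```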

Episode Termination

The episode terminates when the car's position reaches 0.5.

Solved Requirements

Get a reward of over 90 (I'm not sure this is achievable); this threshold may need to be tuned.
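
One common way to check such a threshold is to average the return over a window of evaluation episodes; below is a sketch under that assumption (the 100-episode window and the `policy` callable are not specified by this page):

```python
import gym
import numpy as np

def run_episode(env, policy):
    """Roll out one episode and return its total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

def is_solved(env, policy, episodes=100, threshold=90.0):
    # Assumed criterion: the average return over `episodes` consecutive
    # episodes must exceed `threshold`.
    returns = [run_episode(env, policy) for _ in range(episodes)]
    return np.mean(returns) > threshold

if __name__ == "__main__":
    env = gym.make("MountainCarContinuous-v0")
    # Placeholder policy: push in the direction of the current velocity.
    policy = lambda obs: np.array([1.0 if obs[1] >= 0 else -1.0])
    print("solved:", is_solved(env, policy))
    env.close()
```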