
DeeR

DeeR is a Python library for deep reinforcement learning. It is built with modularity in mind so that it can easily be adapted to any need. It provides many possibilities out of the box (prioritized experience replay, double Q-learning, etc.). Many different environment examples are also provided (some of them using OpenAI Gym). A conceptual sketch of the double Q-learning idea is given below.
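
As a rough illustration of one of those options, the sketch below computes a double Q-learning target, where the online network selects the next action and the target network evaluates it. This is plain NumPy with hypothetical names, not the DeeR API:

    import numpy as np

    def double_q_target(reward, next_q_online, next_q_target, gamma=0.99, terminal=False):
        """Illustrative double Q-learning target (not part of the DeeR API):
        the online network picks the next action, the target network scores it."""
        if terminal:
            return reward
        best_action = int(np.argmax(next_q_online))          # action selection: online network
        return reward + gamma * next_q_target[best_action]   # action evaluation: target network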

Full Documentation

The documentation is available at: http://deer.readthedocs.io/en/master/

Dependencies

This framework is tested to work under Python 2.7 and Python 3.5. It should also work with Python 3.3 and 3.4.

The required dependencies are NumPy >= 1.10 and joblib >= 0.9. You also need Theano >= 0.7 (Lasagne is optional), or you can write your own neural network using your favorite framework.

For running the examples, Matplotlib >= 1.1.1 is required. For running the Atari games environment, you need to install ALE >= 0.4.
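
Assuming a standard pip setup, the Python dependencies listed above can typically be installed in one step (ALE is packaged separately and is usually built on its own):

    pip install "numpy>=1.10" "joblib>=0.9" "theano>=0.7" "matplotlib>=1.1.1"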