Attention: this project at the University of Freiburg was completed in 2020. It is archived (read-only) to preserve its state as it was at the end of the project in 2020.
This project is supervised by the Chair of Neurorobotics at the University of Freiburg. We work with DeepMind's PySC2 environment and intend to replicate the agent results from the DeepMind paper and to improve on them by implementing our own ideas in the framework.
To use this framework, the following Gym environments must also be installed:
- Gym wrapper for PySC2: https://github.com/Teslatic/gym-sc2
- Gym toy problems for preliminary experiments: https://github.com/Teslatic/gym-toyproblems
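The repositories above do not prescribe an install procedure here; a typical editable install from source might look like the following (assuming `git` and `pip` are available):

```shell
# Clone and install the two Gym wrappers in editable mode,
# so local changes take effect without reinstalling.
git clone https://github.com/Teslatic/gym-sc2.git
pip install -e gym-sc2

git clone https://github.com/Teslatic/gym-toyproblems.git
pip install -e gym-toyproblems
```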
- Read and understand the paper
- Gather first hands-on experience with PySC2
- play a game as a human
- load the random agent
- watch a replay
- watch a replay fully rendered (currently only works on Windows; requires the regular SC2 client)
- Write a first rule-based agent following a simple tutorial (see the guide linked below)
- Replicate the minigame results from the paper, with the same or with our own architectures
- Test our own simple agent on the full game
- Find a promising approach for our own implementation, i.e. which algorithms, network structures, etc.
- Test our own implementation on the minigames
- Create additional minigames
- Test on the new minigames
- Test on the full game
- Document our findings
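For the hands-on tasks above, PySC2 ships runnable entry points; assuming PySC2 and StarCraft II are installed, the first steps might look like this (map and replay names here are placeholders, not part of this project):

```shell
# Play a game yourself through the pygame feature-layer UI:
python -m pysc2.bin.play --map Simple64

# Run the built-in random agent on a minigame
# (pysc2.bin.agent defaults to the random agent):
python -m pysc2.bin.agent --map CollectMineralShards

# Watch a previously recorded replay:
python -m pysc2.bin.play --replay <path to .SC2Replay file>
```

These commands only launch the pygame renderer; fully rendered replays require the regular SC2 client, as noted above.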
- DeepMind/Blizzard SC2 paper: https://arxiv.org/abs/1708.04782
- The DeepMind PySC2 repository: https://github.com/deepmind/pysc2
- Good first guide for the PySC2 environment: https://chatbotslife.com/building-a-basic-pysc2-agent-b109cde1477c
- StarCraft II AI wiki (unofficial): http://wiki.sc2ai.net/Main_Page