MOBA AI Gamer trains a model to play a multiplayer video game (League of Legends) the way human players do. It is my personal deep-dive project into machine learning topics in computer vision, neural network model building, and reinforcement learning.
This project was inspired by Oliver Struckmeier’s LeagueAI project and uses his automated training data generation methodology outlined in his paper on the subject.
MOBA AI Gamer simulates how a human would interact with and play League of Legends: it analyzes the screen to identify game objects, makes basic strategy decisions, and performs actions based on that information. To do this, LeagueHumanPlayerAI combines:
- Object Detection | YoloV5
- Optical Character Recognition | Tesseract/TesserOCR
- Reinforcement Learning | OpenAI Gym & PFRL
- Automate object detection data collection and generation using OpenCV, PyAutoGUI, and PIL (a minimal capture sketch follows this list)
- Train an object detection model to extract metadata from a screen output using YoloV5 and PyTorch
- Train a Deep Q-Learning algorithm using metadata gained from YoloV5 and Tesseract OCR
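The capture step can be illustrated roughly as follows. This is a minimal sketch, not the project's exact code: it grabs the game window with PyAutoGUI and converts the PIL screenshot into an OpenCV image. The capture region is a hypothetical placeholder for a 1920x1080 client window.

```python
import cv2
import numpy as np
import pyautogui

# Hypothetical capture region (left, top, width, height) for a 1920x1080 client window.
GAME_REGION = (0, 0, 1920, 1080)

def grab_frame(region=GAME_REGION):
    """Capture the game window and return it as a BGR OpenCV image."""
    screenshot = pyautogui.screenshot(region=region)  # PIL Image in RGB
    return cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2BGR)

if __name__ == "__main__":
    cv2.imwrite("frame.png", grab_frame())
```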
- Object Detection (Working)
- Attribute Observation (Working)
- Gameplay Learning (Work in Progress)
- Team Play (Future Work)
To detect objects on screen, LeagueHumanPlayerAI uses YoloV5 to identify and locate 20 object classes; a minimal inference sketch follows the class list below.
- mAP@0.5: 0.83
- mAP@[0.5:0.95]: 0.61
- Ezreal
- Ezreal Dead
- Red Tower
- Red Melee Minion
- Red Melee Minion Dead
- Red Ranged Minion
- Red Ranged Minion Dead
- Red Siege Minion
- Red Siege Minion Dead
- Red Super Minion
- Red Super Minion Dead
- Blue Tower
- Blue Melee Minion
- Blue Melee Minion Dead
- Blue Ranged Minion
- Blue Ranged Minion Dead
- Blue Siege Minion
- Blue Siege Minion Dead
- Blue Super Minion
- Blue Super Minion Dead
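As an illustrative sketch (not the project's exact code), the snippet below loads custom-trained YoloV5 weights through torch.hub and runs detection on a captured frame; the weights path and confidence threshold are assumed placeholders.

```python
import torch

# Hypothetical path to the trained weights exported by YoloV5 training.
WEIGHTS = "runs/train/exp/weights/best.pt"

# Load a custom YoloV5 model via the ultralytics hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path=WEIGHTS)
model.conf = 0.5  # hypothetical confidence threshold

def detect(frame):
    """Run YoloV5 on an image and return detections as a DataFrame.

    Columns: xmin, ymin, xmax, ymax, confidence, class, name.
    """
    results = model(frame)
    return results.pandas().xyxy[0]
```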
3D models of each of the 11 unique objects and their animations were extracted from the League of Legends client using LeagueBulkConvert. Due to an animation bug with the red ranged minion model, both the blue and red ranged minions use the same model (the blue ranged minion) with different skin colors.
Once the 3D models were obtained, a video was recorded of each model rotating, viewed from approximately the same angle as in game (~55 degrees above the front view). Then, using modified versions of Oliver Struckmeier's data generation code, dataset images were generated for object detection training.
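The generation step composites masked object frames onto map screenshots and writes YOLO-format labels. The sketch below is a simplified, hypothetical version of that idea, assuming object frames with an alpha channel (e.g. red_melee_minion.png) that are smaller than the map background (map.png); the file names and class IDs are invented for illustration.

```python
import random
from PIL import Image

def composite(background_path, object_path, class_id, out_stem):
    """Paste one masked object frame onto a map background and write a YOLO label."""
    bg = Image.open(background_path).convert("RGBA")
    obj = Image.open(object_path).convert("RGBA")

    # Random placement that keeps the object fully inside the background.
    x = random.randint(0, bg.width - obj.width)
    y = random.randint(0, bg.height - obj.height)
    bg.alpha_composite(obj, dest=(x, y))
    bg.convert("RGB").save(f"{out_stem}.jpg")

    # YOLO label: class x_center y_center width height, all normalized to [0, 1].
    xc = (x + obj.width / 2) / bg.width
    yc = (y + obj.height / 2) / bg.height
    w = obj.width / bg.width
    h = obj.height / bg.height
    with open(f"{out_stem}.txt", "w") as f:
        f.write(f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}\n")

# Hypothetical usage:
# composite("map.png", "red_melee_minion.png", 3, "dataset/img_0001")
```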
- Add more objects
  - Champions and related pets
  - Jungle Monsters
  - Inhibitors (Red and Blue)
  - Nexus (Red and Blue)
- Improve Dataset Generation Quality
- Increase Object Detection Accuracy
By cropping the screen and running Tesseract OCR on the crops, attribute information such as minion count and kills can be tracked (a minimal sketch follows the list of tracked attributes).
- Minion Count
- Kills
- Deaths
- Assists
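A minimal sketch of the cropping-plus-OCR idea is shown below, using the tesserocr bindings named in the tech stack; the crop region is a hypothetical placeholder for where the K/D/A and minion count sit on a 1920x1080 screen.

```python
import pyautogui
import tesserocr

# Hypothetical crop region (left, top, width, height) around the K/D/A and minion count display.
SCORE_REGION = (1650, 0, 270, 30)

def read_score_text():
    """Crop the scoreboard area and run Tesseract OCR on it."""
    crop = pyautogui.screenshot(region=SCORE_REGION)  # PIL Image
    return tesserocr.image_to_text(crop).strip()
```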
- Add more attributes
  - Health
  - Mana
  - Gold
  - Level
  - Selected target's health
Creating a reinforcement learning model is currently a work in progress.
- Plan: Use Deep Q-Learning to train an intelligent agent to play League of Legends
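Nothing below is the project's final design; it is a minimal sketch of how a PFRL Double DQN agent might be wired to a custom Gym environment, with the observation vector, action set, and reward left as invented placeholders.

```python
import gym
import numpy as np
import torch
import pfrl

# Hypothetical observation: a small feature vector built from YoloV5 detections and
# OCR attributes (e.g. nearest minion position, own kills/deaths/minion count).
OBS_DIM = 16
# Hypothetical discrete actions: move in 4 directions, attack-move, cast Q/W/E/R, idle.
N_ACTIONS = 10

class LeagueEnv(gym.Env):
    """Skeleton Gym environment; observations and rewards would come from the live game."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(OBS_DIM,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(N_ACTIONS)

    def reset(self):
        return np.zeros(OBS_DIM, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(OBS_DIM, dtype=np.float32)  # placeholder: read the screen here
        reward, done = 0.0, False                  # placeholder: e.g. reward minion kills
        return obs, reward, done, {}

env = LeagueEnv()

# Simple fully connected Q-network with PFRL's discrete action-value head.
q_func = torch.nn.Sequential(
    torch.nn.Linear(OBS_DIM, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, N_ACTIONS),
    pfrl.q_functions.DiscreteActionValueHead(),
)

agent = pfrl.agents.DoubleDQN(
    q_func,
    torch.optim.Adam(q_func.parameters(), lr=1e-3),
    pfrl.replay_buffers.ReplayBuffer(capacity=10 ** 5),
    gamma=0.99,
    explorer=pfrl.explorers.ConstantEpsilonGreedy(0.1, env.action_space.sample),
    replay_start_size=500,
    phi=lambda x: x.astype(np.float32, copy=False),
)
```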
A working routine combines object detection and attribute observation to attack and cast a random ability on the first red melee minion seen (sketched below).
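As a hedged illustration of that scripted behavior (not the exact project code), the sketch below takes a YoloV5 detections DataFrame, right-clicks the center of the first Red Melee Minion, and presses a randomly chosen ability key with PyAutoGUI; the class label string and key bindings are assumptions.

```python
import random
import pyautogui

ABILITY_KEYS = ["q", "w", "e", "r"]  # assumed default League ability key bindings

def attack_first_red_melee(detections):
    """Given a YoloV5 detections DataFrame, attack the first red melee minion seen."""
    minions = detections[detections["name"] == "Red Melee Minion"]  # assumed class label
    if minions.empty:
        return False
    target = minions.iloc[0]
    cx = (target["xmin"] + target["xmax"]) / 2
    cy = (target["ymin"] + target["ymax"]) / 2
    pyautogui.rightClick(cx, cy)                   # issue an attack/move command on the minion
    pyautogui.press(random.choice(ABILITY_KEYS))   # cast a random ability
    return True
```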
- Create a Deep Q-Learning algorithm
- Train an agent to play
TODO