This repository contains the code for running enliteAI's 1st place submission to the Learning to Run a Power Network (L2RPN) - Energies of the Future and Carbon Neutrality Reinforcement Learning (RL) competition.
For details about our agent, as well as background information about grid topology optimization in general, we invite you to check out our paper:
Power Grid Congestion Management via Topology Optimization with AlphaZero.
Matthias Dorfer∗, Anton R. Fuxjäger∗, Kristian Kozak∗, Patrick M. Blies and Marcel Wasserer.
RL4RealLife Workshop at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
(*equal contribution)
Note that our submission goes beyond the agent described and evaluated in the paper with respect to grid-specific enhancements such as contingency analysis.
- Running our Submission
- Web Demo - AI Assistant Power Grid Control
- Further Resources
- About our RL Framework Maze
To test our submission, follow the steps listed below.
1. Download and Extract the Getting Started Kit
You can download it from here.
2. Download and Extract the Model Data
You can download model weights and parameters from here.
Next, copy the contents of this archive into the directory submission/experiment_data.
submission
├── experiment_data
│ ├── .hydra
│ ├── redispatching_CA_KNN
│ ├── obs_norm_statistics.pkl
│ └── ...
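If you want to verify the extraction before running the evaluation, a quick sanity check like the following can confirm that the model data is in place. This is just a convenience sketch; the file and directory names are taken from the tree above, and it assumes you run it from the repository root.

```python
# Optional sanity check: confirm the extracted model data landed in the expected
# location. The names below are taken from the directory tree shown above.
from pathlib import Path

experiment_data = Path("submission/experiment_data")
expected = [".hydra", "redispatching_CA_KNN", "obs_norm_statistics.pkl"]

for name in expected:
    path = experiment_data / name
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")
```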
3. Check our Submission
To check our submission on the 52 chronics of the local validation set, run the following Docker command (this should take approximately one hour, depending on your machine):
docker run -it \
-v <absolute local dir to the extracted starting kit>:/starting_kit \
-v <absolute local dir to this repo>:/submission \
-w /starting_kit bdonnot/l2rpn:wcci.2022.1 \
python check_your_submission.py --model_dir /submission/submission
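If you prefer to experiment outside of Docker, the minimal sketch below runs the agent on a single chronic directly via grid2op. It assumes grid2op and lightsim2grid are installed locally, that the environment name `l2rpn_wcci_2022` matches the competition dataset, and that the submission exposes the `make_agent(env, submission_dir)` entry point expected by the L2RPN starting kits; the import path is illustrative only.

```python
# Minimal local evaluation sketch (assumptions: grid2op + lightsim2grid installed,
# make_agent(env, submission_dir) entry point as in the L2RPN starting kits).
import grid2op
from lightsim2grid import LightSimBackend

from submission.submission import make_agent  # illustrative import path

env = grid2op.make("l2rpn_wcci_2022", backend=LightSimBackend())
agent = make_agent(env, "submission/")

obs = env.reset()
reward, done = 0.0, False
while not done:
    action = agent.act(obs, reward, done)
    obs, reward, done, info = env.step(action)

print(f"Episode finished after {env.nb_time_step} time steps.")
```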
4. Inspect Results
Once complete, the agent should achieve the following score:
| metric       | value       |
|--------------|-------------|
| score        | 70.332381   |
| duration (s) | 3009.441725 |
You can also find more detailed results in this directory:
<absolute local dir to the extracted starting kit>/utils/last_submission_results/
The figure below shows the workflow of our real-time remedial action recommendation assistant demo as a concrete example of human-in-the-loop decision making.
The central design principle of our AI assistant is to support and enhance human decision making by recommending viable action scenarios, along with supporting information explaining how these recommendations will most likely play out in the production system. The final decision is left to the human operator to preserve human control.
- The Grid State Observer continuously monitors the current state of the production power grid.
- Once an unsafe state is encountered, the agent starts a policy-network-guided tree search to discover a set of topology change actions capable of recovering from this critical situation (e.g., relieving the congestion).
- A ranked list of topology change candidates – the top results of the tree search – is presented to the human operator in a graphical user interface for evaluation (testing the impact of an action candidate in a load flow simulation; see the sketch after this list) and selection. Along with the potential for relieving the congestion, other grid-specific safety considerations are taken into account (e.g., a contingency analysis for N-1 stability of the respective resulting states).
- The human operator evaluates the provided set of suggested topology change actions.
- Once satisfied, they confirm the best action candidate for execution on the production power grid.
- The selected action candidate is applied to the power grid, and the resulting state is again fed into the Grid State Observer and visualized for human operators in the graphical user interface. This closes the human-in-the-loop workflow.
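As an illustration of the simulation-based candidate evaluation in this loop, the sketch below ranks a set of unitary topology actions by the maximum line loading that grid2op's `obs.simulate` predicts one step ahead. This is not the agent's actual policy-guided tree search or contingency analysis; the environment name and the candidate generation are assumptions chosen purely for illustration.

```python
# Minimal sketch of ranking topology action candidates via load flow simulation.
# The candidates here (all unitary bus reassignments at one substation) stand in
# for the output of the policy-guided tree search described above.
import grid2op

env = grid2op.make("l2rpn_wcci_2022")  # assumed competition environment
obs = env.reset()

candidates = env.action_space.get_all_unitary_topologies_set(env.action_space, sub_id=0)

ranked = []
for action in candidates:
    sim_obs, sim_reward, sim_done, sim_info = obs.simulate(action)
    if sim_done:
        continue  # discard candidates whose simulated outcome fails
    ranked.append((sim_obs.rho.max(), action))

# Lower predicted maximum line loading (rho) first.
ranked.sort(key=lambda item: item[0])

for max_rho, action in ranked[:5]:
    print(f"predicted max line loading: {max_rho:.3f}")
    print(action)
```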
- Spotlight Talk and Poster at the NeurIPS 2022 RL4RealLife Workshop.
- enliteAI Energy web page.
- AAIC Energy Keynote on Power Grid Congestion Management with RL.
Maze is an application-oriented deep reinforcement learning (RL) framework, addressing real-world decision problems. Our vision is to cover the complete development life-cycle of RL applications, ranging from simulation engineering to agent development, training and deployment.
If you encounter a bug, miss a feature, or have a question that the documentation doesn't answer, we are happy to assist you! Report an issue or start a discussion on GitHub or Stack Overflow.