👋 This is the code repo for the paper "Large Foundation Models for Power Systems"
😎 Authors: Chenghao Huang, Siyang Li, Ruohong Liu, Hao Wang, and Yize Chen
Monash University & Hong Kong University of Science and Technology
Foundation models, such as Large Language Models (LLMs), can respond to a wide range of format-free queries without any task-specific data collection or model training, creating various research and application opportunities for the modeling and operation of large-scale power systems. Here we investigate the potential of existing foundation models by validating their performance on four representative tasks across power system domains: optimal power flow (OPF), electric vehicle (EV) scheduling, knowledge retrieval from power engineering technical reports, and situation awareness. Our results indicate strong capabilities of such foundation models in boosting the efficiency and reliability of power system operational pipelines.
To run an LLM on the different tasks, we currently support running the provided .py scripts, which query pre-trained models. GPT models serve as the trained LLMs in this repo. Simply set your OpenAI API key in each Python script and switch between GPT model variants to run the tasks.
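
The query pattern inside each task script looks roughly like the minimal sketch below. It assumes the official `openai` Python package (v1.x interface); the model name, prompt, and function name here are placeholders for illustration, and the repo's scripts may use an earlier version of the OpenAI API.

```python
# Minimal sketch of querying a GPT model, as the task scripts do.
# Assumes the official `openai` package (>=1.0); model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # replace with your own key

def query_gpt(prompt: str, model: str = "gpt-4") -> str:
    """Send a single prompt to the chosen GPT model and return its reply."""
    response = client.chat.completions.create(
        model=model,  # switch models here, e.g. "gpt-3.5-turbo" or "gpt-4"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example task-style query (hypothetical prompt, not from the repo).
    print(query_gpt("Formulate the optimal power flow problem for a 3-bus system."))
```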