New research examines whether large language models (LLMs), like those that power ChatGPT, can help run and maintain the energy grid.
The research, co-authored by Na Li, Winokur Family Professor of Electrical Engineering and Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), “suggests that LLMs could play an important role in co-managing some aspects of the grid, including emergency and outage response, crew assignments and wildfire preparedness and prevention. But security and safety concerns need to be addressed before LLMs can be deployed in the field,” a June 19 post on the SEAS website notes.
The research was published in Joule.
The research team, which included engineers from Houston-based energy provider CenterPoint Energy and the Midcontinent Independent System Operator, used GPT models to explore the capabilities of LLMs in the energy sector and identified both strengths and weaknesses, according to the post, written by Leah Burrows, Assistant Director of Communications at SEAS.
“The strengths of LLMs -- their ability to generate logical responses from prompts, to learn based on limited data, to delegate tasks to embedded tools and to work with non-text data such as pictures -- could be leveraged to perform tasks such as detecting broken equipment, real-time electricity load forecasting, and analyzing wildfire patterns for risk assessments,” Burrows noted in her post.
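The tool-delegation pattern Burrows describes can be pictured with a minimal, hypothetical sketch that is not from the paper: an LLM, given descriptions of grid-specific tools, decides which one to invoke for an operator's request. The tool names, routing stub, and canned outputs below are placeholders for illustration only; a production system would rely on a real model's function-calling output and validated utility data.

```python
# Illustrative sketch of the "delegate tasks to embedded tools" pattern.
# All function and tool names here are hypothetical; a real deployment
# would route requests through an LLM's function-calling interface and
# domain-validated grid tools rather than the stubs used below.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


def forecast_load(query: str) -> str:
    # Placeholder: a real tool would query a trained forecasting model
    # on utility telemetry, not return a canned number.
    return "Next-hour load estimate: 1,240 MW (+/- 3%)"


def assess_wildfire_risk(query: str) -> str:
    # Placeholder: a real tool would combine weather, vegetation,
    # and asset data for the requested region.
    return "Elevated wildfire risk in the requested service area."


TOOLS: Dict[str, Tool] = {
    "load_forecast": Tool("load_forecast", "Short-term electricity load forecasting", forecast_load),
    "wildfire_risk": Tool("wildfire_risk", "Wildfire pattern and risk assessment", assess_wildfire_risk),
}


def pick_tool(user_request: str) -> str:
    # Stand-in for the LLM's routing decision. In practice the model,
    # given the tool descriptions above, would return the chosen tool name.
    return "wildfire_risk" if "fire" in user_request.lower() else "load_forecast"


def handle(user_request: str) -> str:
    tool = TOOLS[pick_tool(user_request)]
    return f"[{tool.name}] {tool.run(user_request)}"


if __name__ == "__main__":
    print(handle("What is the expected load for the next hour?"))
    print(handle("Any fire risk near the northern feeders this week?"))
```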
At the same time, “there are significant challenges to implementing LLMs in the energy sector — not the least of which is the lack of grid-specific data to train the models,” she noted. “For obvious security reasons, crucial data about the U.S. power system is not publicly available and cannot be made public.”
Another issue is the lack of safety guardrails.
“The power grid, like autonomous vehicles, needs to prioritize safety and incorporate large safety margins when making real-time decisions,” wrote Burrows.
LLMs also need to become more reliable in the solutions they provide and more transparent about their uncertainties, Li said.