With the increasing penetration of renewable energy in power systems, distributed energy management faces numerous challenges, including high-dimensional state spaces, multi-objective optimization, and real-time decision-making. This paper proposes a Large Language Model (LLM)-guided Multi-Agent Deep Reinforcement Learning (MADRL) framework for distributed energy management in renewable-integrated power systems. Building upon recent advances in LLM-guided reinforcement learning, we develop specialized mechanisms for power system control that leverage the semantic understanding and knowledge-reasoning capabilities of LLMs to provide high-level strategic guidance and scenario-adaptive adjustments for MADRL agents. Specifically, we design a hierarchical architecture in which the LLM layer is responsible for parsing grid operation states, generating optimization objective descriptions, and coordinating multi-agent behaviors, while the MADRL layer executes the specific energy scheduling decisions. Experiments are conducted on real power grid datasets containing photovoltaic, wind power, energy storage systems, and flexible loads. Results demonstrate that the proposed method significantly outperforms traditional baseline methods in reducing operating costs, improving renewable energy utilization, and ensuring grid stability. Compared to standard MADRL, our method reduces system operating costs by 18.7%, decreases renewable energy curtailment by 23.4%, and improves convergence speed by a factor of 3.2. This study provides a novel approach for adaptive distributed energy management in smart grids.

OPEN ACCESS · Received: 15/11/2025 · Accepted: 15/01/2026 · Published: 16/04/2026
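The hierarchical division of labor described in the abstract (an LLM layer that parses the grid state and emits objective guidance, and a MADRL layer that executes scheduling actions) can be illustrated with a minimal sketch. All names here (`GridState`, `LLMGuidance`, `StorageAgent`, the weight dictionary, and the greedy placeholder policy) are hypothetical illustrations, not the paper's actual implementation; the LLM layer is stubbed out with a rule-based adviser and the trained MADRL policy is replaced by a greedy heuristic.

```python
from dataclasses import dataclass


@dataclass
class GridState:
    pv_mw: float    # photovoltaic output (MW)
    wind_mw: float  # wind output (MW)
    load_mw: float  # load demand (MW)
    soc: float      # storage state of charge in [0, 1]


class LLMGuidance:
    """Stand-in for the LLM layer: inspects the parsed grid state and
    returns objective weights that steer the lower-level agents."""

    def advise(self, s: GridState) -> dict:
        surplus = s.pv_mw + s.wind_mw - s.load_mw
        if surplus > 0:
            # Renewable surplus: emphasize curtailment reduction.
            return {"cost": 0.2, "curtailment": 0.6, "stability": 0.2}
        # Deficit: emphasize operating cost and stability.
        return {"cost": 0.5, "curtailment": 0.1, "stability": 0.4}


class StorageAgent:
    """One MADRL agent, here a greedy placeholder policy: charge on
    surplus (scaled by the curtailment weight), discharge on deficit."""

    CAPACITY_MWH = 10.0  # assumed storage capacity

    def act(self, s: GridState, w: dict) -> float:
        surplus = s.pv_mw + s.wind_mw - s.load_mw
        if surplus > 0 and s.soc < 1.0:
            headroom = (1.0 - s.soc) * self.CAPACITY_MWH
            return min(surplus * w["curtailment"], headroom)  # charge (+)
        if surplus < 0 and s.soc > 0.0:
            stored = s.soc * self.CAPACITY_MWH
            return -min(-surplus, stored)                     # discharge (-)
        return 0.0


llm, agent = LLMGuidance(), StorageAgent()
sunny = GridState(pv_mw=8.0, wind_mw=4.0, load_mw=6.0, soc=0.5)
weights = llm.advise(sunny)
print(weights["curtailment"], agent.act(sunny, weights))  # → 0.6 3.6
```

In the actual framework, the rule-based `advise` would be replaced by LLM inference over a textual description of the grid state, and `act` by a trained deep RL policy conditioned on the returned guidance.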
Volume 42, Issue 3, 2026
DOI: 10.23967/j.rimni.2026.10.76155
License: CC BY-NC-SA