TY - GEN
T1 - Using deep reinforcement learning in optimal energy management for residential house aggregators with uncertain user behaviors
AU - Lin, Yujun
AU - Yan, Linfang
AU - Hui, Hongxun
AU - Chen, Yin
AU - Chen, Xia
AU - Wen, Jinyu
PY - 2024/11/25
N2 - In this study, the home energy management problem for numerous residential houses, which can be regarded as a high-dimensional optimization problem, is addressed. The concept of the aggregator is utilized to reduce the state and action spaces and to handle the high dimensionality. A two-stage deep reinforcement learning (DRL)-based approach is proposed for the aggregators to track the schedule from a superior grid and to guarantee the operation constraints. In the first stage, a DRL control agent learns the optimal scheduling strategy by interacting with the environment based on the soft actor-critic framework and generates the aggregate control actions. In the second stage, the aggregate control actions are disaggregated to individual appliances considering the users' behaviors. The uncertainty of an electric vehicle's charging demand is quantitatively expressed based on the driver's experience. An aggregate anxiety concept is introduced to characterize the driver's anxiety about the electric vehicle's range and uncertain events. Finally, simulations are conducted to verify the effectiveness of the proposed approach under dynamic user behaviors, and comparisons show the superiority of the proposed approach over benchmark methods.
KW - home energy management
KW - electric vehicles (EVs)
KW - deep reinforcement learning
KW - soft actor-critic
KW - dynamic user behaviors
DO - 10.1109/scems63294.2024.10756288
M3 - Conference contribution book
SN - 979-8-3503-6695-2
T3 - 2024 IEEE 7th Student Conference on Electric Machines and Systems (SCEMS)
SP - 1
EP - 6
BT - 2024 IEEE 7th Student Conference on Electric Machines and Systems (SCEMS)
PB - IEEE
ER -