# Constrained Deep Reinforcement Learning for Smart Load Balancing

Omar Houidi, Djamal Zeghlache, Victor Perrier, Tran Anh Quang Pham, Nicolas Huin, Jeremie Leguay, Paolo Medagliani: Constrained Deep Reinforcement Learning for Smart Load Balancing. In: 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Forthcoming.

## Abstract

In this paper, we explore the use of an actor-critic architecture for Deep Reinforcement Learning (DRL) to improve load balancing beyond traditional algorithms. Some centralized Reinforcement Learning (RL) algorithms express the reward function in terms of the Quality of Experience (QoE) of video flows, which requires access to clients, or in terms of the Maximum Link Utilization (MLU) for other types of flows. In our approach, we tune the actor-critic algorithm to leverage only QoS parameters to load balance traffic in the network and maximize the QoE experienced by users. This avoids collecting observations and performance measurements from client applications, as it focuses solely on network metrics that can be easily measured. We explore both centralized and distributed solutions to assess the feasibility of the proposed smart load balancing. We compare them to ECMP, QoE-based reward methods, and RILNET, which relies on an underlying DDPG optimization approach. The proposed algorithms outperform these previous approaches.
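To illustrate the QoS-only reward idea described in the abstract, here is a minimal, hypothetical actor-critic sketch. It is not the paper's implementation: the path set, the delay model, and all hyperparameters are our assumptions. The actor learns traffic split probabilities over candidate paths, the critic is a scalar value baseline, and the reward uses only a network-measurable QoS metric (path delay), with no client-side QoE probing.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PATHS = 3
BASE_DELAY = np.array([1.0, 2.0, 4.0])  # hypothetical per-path base delays

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

theta = np.zeros(N_PATHS)   # actor: path preference scores
baseline = 0.0              # critic: running estimate of expected reward
load = np.zeros(N_PATHS)    # current number of flows per path

for t in range(2000):
    probs = softmax(theta)                    # split ratios over paths
    a = rng.choice(N_PATHS, p=probs)          # route the next incoming flow
    load *= 0.9                               # old flows gradually expire
    load[a] += 1.0
    delay = BASE_DELAY[a] * (1.0 + load[a])   # congestion-sensitive QoS metric
    reward = -delay                           # QoS-only reward, no client QoE needed
    advantage = reward - baseline
    baseline += 0.05 * advantage              # critic update (value baseline)
    grad = -probs                             # gradient of log pi(a | theta)
    grad[a] += 1.0                            # for a softmax policy
    theta += 0.01 * advantage * grad          # actor policy-gradient step
```

After training, the learned split concentrates traffic on the lower-delay paths while the congestion model keeps some share on the slower ones, mimicking how a QoS-driven load balancer avoids both client-side probing and MLU-only objectives.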

```bibtex
@inproceedings{nokey,
title = {Constrained Deep Reinforcement Learning for Smart Load Balancing},
author = {Omar Houidi and Djamal Zeghlache and Victor Perrier and Tran Anh Quang Pham and Nicolas Huin and Jeremie Leguay and Paolo Medagliani},
year = {2022},
date = {2022-01-08},
booktitle = {2022 IEEE 19th Annual Consumer Communications \& Networking Conference (CCNC)},
abstract = {In this paper, we explore the use of an actor-critic architecture for Deep Reinforcement Learning (DRL) to improve load balancing beyond traditional algorithms. Some centralized Reinforcement Learning (RL) algorithms express the reward function in terms of the Quality of Experience (QoE) of video flows, which requires access to clients, or in terms of the Maximum Link Utilization (MLU) for other types of flows. In our approach, we tune the actor-critic algorithm to leverage only QoS parameters to load balance traffic in the network and maximize the QoE experienced by users. This avoids collecting observations and performance measurements from client applications, as it focuses solely on network metrics that can be easily measured. We explore both centralized and distributed solutions to assess the feasibility of the proposed smart load balancing. We compare them to ECMP, QoE-based reward methods, and RILNET, which relies on an underlying DDPG optimization approach. The proposed algorithms outperform these previous approaches.},
keywords = {Deep Reinforcement Learning, Intelligent Routing, international conference, QoE Optimization, Smart Load Balancing},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
```