Publications

Container Scheduling with Dynamic Computing Resource for Microservice Deployment in Edge Computing

Authors: Jingxi Lu, Wenhao Li, Jianxiong Guo, Xingjian Ding, Zhiqing Tang, Tian Wang
Conference: MSN 2024 - 20th International Conference on Mobility, Sensing and Networking, December 2024
Status: Accepted, awaiting publication

Abstract

With the rapid growth of Internet of Things (IoT) devices, deploying microservices efficiently on resource-constrained edge nodes is a key challenge. Existing container scheduling algorithms often neglect dynamic computing resource allocation and instead assume a uniform CPU distribution across containers, which is impractical in real edge deployments.

This paper formulates an online microservice scheduling problem with dynamic computing power to minimize delay and energy consumption. We propose a Soft Actor-Critic (SAC)-based reinforcement learning algorithm that integrates a GRU-enhanced policy network with an action selection mechanism for faster convergence. Experimental results show that our method outperforms baseline algorithms by up to 65% in overall performance.
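To illustrate the idea of a GRU-enhanced policy with action selection, the sketch below encodes a sequence of resource observations with a hand-rolled GRU cell and maps the final hidden state to a masked softmax over candidate edge nodes. This is a minimal illustration, not the paper's actual architecture: the dimensions, random parameters, and feasibility mask are all invented for the example, and the actor-critic training loop is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: the gates decide how much of the observation
    history to carry forward in the hidden state h."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_cand

def policy_probs(obs_seq, params, W_out, feasible_mask):
    """Encode the observation sequence with the GRU, then produce a
    softmax over edge nodes, zeroing out infeasible placements."""
    h = np.zeros(W_out.shape[0])
    for x in obs_seq:
        h = gru_step(x, h, params)
    logits = h @ W_out
    # action selection: infeasible nodes get probability exactly zero
    logits = np.where(feasible_mask, logits, -np.inf)
    exp = np.exp(logits - logits[feasible_mask].max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
d_in, d_h, n_nodes = 4, 8, 5                       # illustrative sizes
params = tuple(rng.normal(scale=0.1, size=s) for s in
               [(d_in, d_h), (d_h, d_h), (d_h,)] * 3)
W_out = rng.normal(scale=0.1, size=(d_h, n_nodes))
obs_seq = rng.normal(size=(6, d_in))               # six past observations
mask = np.array([True, True, False, True, False])  # nodes 2, 4 infeasible
probs = policy_probs(obs_seq, params, W_out, mask)
```

In a SAC agent, `probs` would parameterize the stochastic policy; masking before the softmax keeps the agent from ever sampling a node that cannot host the container, which is one plausible reading of the action selection mechanism.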

📄 Download Paper
🔗 DOI (Coming Soon)

Recommended Citation:
J. Lu, W. Li, J. Guo, X. Ding, Z. Tang, and T. Wang, “Container Scheduling with Dynamic Computing Resource for Microservice Deployment in Edge Computing,” in MSN 2024.


A Reinforcement Learning Approach for Initialization of Column Generation with Application to Aircraft Recovery Problem

Authors: Jingxi Lu, Xiongwen Qian
Conference: LESCDT 2024 - 2nd International Conference on Logistics Engineering, Supply Chain and Digital Transformation, October 2024
Status: Accepted, awaiting publication

Abstract

Column generation is widely used for large-scale combinatorial optimization problems, such as aircraft recovery. However, constructing a good initial set of columns remains a critical but underexplored challenge that directly impacts computational efficiency.

This paper introduces a reinforcement learning-based initialization method, where an agent utilizes a Graph Attention Network (GAT) with Proximal Policy Optimization (PPO) to generate effective initial columns. Experimental results demonstrate that our approach significantly accelerates column generation and generalizes well across different network structures.
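As a toy illustration of the graph attention component, the sketch below implements a single GAT layer in numpy: each edge is scored with a shared attention vector, scores are normalized with a softmax over each node's neighborhood, and neighbor features are aggregated. The graph, feature sizes, and weights are invented for the example; the PPO training loop and the column generation interface are not shown.

```python
import numpy as np

def gat_layer(H, adj, W, a, alpha=0.2):
    """One graph attention layer.
    H: (n, d_in) node features; adj: (n, n) adjacency with self-loops;
    W: (d_in, d_out) projection; a: (2*d_out,) attention vector."""
    Z = H @ W                                   # projected features
    d_out = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), computed for all pairs at once
    src = Z @ a[:d_out]
    dst = Z @ a[d_out:]
    e = src[:, None] + dst[None, :]
    e = np.where(e > 0, e, alpha * e)           # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)           # keep only real edges
    e = e - e.max(axis=1, keepdims=True)        # stable softmax per row
    att = np.exp(e)
    att = att / att.sum(axis=1, keepdims=True)
    return att @ Z, att

rng = np.random.default_rng(1)
n, d_in, d_out = 4, 3, 2                        # illustrative sizes
H = rng.normal(size=(n, d_in))
adj = np.eye(n) + np.array([[0, 1, 1, 0],       # small undirected graph
                            [1, 0, 0, 1],
                            [1, 0, 0, 1],
                            [0, 1, 1, 0]])
W = rng.normal(size=(d_in, d_out))
a = rng.normal(size=2 * d_out)
out, att = gat_layer(H, adj, W, a)
```

In the paper's setting, the nodes and edges would come from the recovery network, and the aggregated embeddings would feed a PPO policy head that proposes initial columns; here the layer only demonstrates how attention weights concentrate on each node's actual neighbors.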

📄 Download Paper
🔗 DOI (Coming Soon)

Recommended Citation:
J. Lu and X. Qian, “A Reinforcement Learning Approach for Initialization of Column Generation with Application to Aircraft Recovery Problem,” in LESCDT 2024.