Research News
Deep Learning-Based Method Gives Wireless Control Systems a "Smart Brain"
Editor: ZHANG Nannan | Mar 20, 2026

Wireless networked control systems offer significant advantages in flexibility, scalability and cost-effectiveness, and have therefore become a key enabling technology in modern industrial control. However, the complex radio-frequency environment and limited spectrum resources in industrial settings readily induce random packet loss and non-deterministic communication delays, which can severely degrade closed-loop control performance.

In response to the challenges posed by limited communication resources and highly dynamic environmental conditions in industrial scenarios, Prof. LIANG Wei and his research team at the Shenyang Institute of Automation (SIA) of the Chinese Academy of Sciences have proposed a Joint Estimation-Control-Scheduling (JECS) method based on deep reinforcement learning.

Their findings were published in IEEE Transactions on Cognitive Communications and Networking.

Traditional design approaches often treat communication scheduling and control algorithms separately or rely on accurate system models, making it difficult to achieve ideal results in practical applications. Furthermore, many existing studies face scalability issues, as their trained models often fail when the number of subsystems changes or network resources fluctuate.

To address packet loss in industrial environments, the researchers designed a deep learning-based state estimation model and a deep reinforcement learning-based control strategy for each individual subsystem, significantly enhancing operational stability under unreliable communication. Building on this, they developed a centralized scheduling model based on a transformer architecture to coordinate multiple subsystems operating simultaneously, and incorporated a priority scoring mechanism to improve adaptability across varying system scales and resource constraints.
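To illustrate the priority-scoring idea, the following is a minimal sketch, not the paper's method: a placeholder linear scorer stands in for the learned transformer, and the centralized scheduler grants the limited channels to the highest-scoring subsystems. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def priority_scores(features: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Toy stand-in for the learned transformer scorer: one linear score
    per subsystem. `features` has shape (n_subsystems, n_features)."""
    return features @ w

def schedule(features: np.ndarray, w: np.ndarray, n_channels: int) -> list:
    """Grant the n_channels available slots to the highest-priority
    subsystems. The ranking works for any number of subsystems, which
    is the scalability property a priority mechanism targets."""
    scores = priority_scores(features, w)
    order = np.argsort(scores)[::-1]          # highest score first
    return sorted(order[:n_channels].tolist())

# Example: 5 subsystems competing for 2 channels.
feats = np.array([[0.1, 0.2],
                  [0.9, 0.8],   # large estimation error -> high priority
                  [0.2, 0.1],
                  [0.7, 0.9],   # large estimation error -> high priority
                  [0.3, 0.3]])
w = np.array([1.0, 1.0])
print(schedule(feats, w, n_channels=2))  # -> [1, 3]
```

Because the scheduler only ranks scores, adding or removing subsystems requires no retraining of this selection step, which is one way a priority mechanism can cope with changing system scale.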

The researchers evaluated JECS on two benchmark tasks: the classic inverted pendulum task in the OpenAI Gym environment and a ball-balancing platform task in a custom environment. Performance was compared against several baseline methods, including Age of Information (AoI) minimization, round-robin scheduling and a deep reinforcement learning approach known as DeepCAS.
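For context, the round-robin baseline simply rotates the available channels through the subsystems regardless of their control state. A minimal sketch (names are illustrative):

```python
def round_robin(n_subsystems: int, n_channels: int, t: int) -> list:
    """At time step t, grant the n_channels slots to a fixed rotating
    window of subsystems, ignoring control performance entirely."""
    return [(t * n_channels + k) % n_subsystems for k in range(n_channels)]

# 5 subsystems, 2 channels: the grant window advances each step.
print(round_robin(5, 2, 0))  # -> [0, 1]
print(round_robin(5, 2, 1))  # -> [2, 3]
print(round_robin(5, 2, 2))  # -> [4, 0]
```

Because such a baseline is oblivious to estimation error or plant instability, learned schedulers like JECS can outperform it by concentrating bandwidth where control quality is degrading.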

The results indicate that JECS significantly reduces the linear quadratic Gaussian (LQG) control cost and allows more subsystems to operate stably at the same time, markedly increasing system capacity. Even compared with the model-dependent CARS method, JECS achieves comparable control performance without requiring any model information.
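The LQG control cost used in such comparisons is the expected accumulated quadratic penalty on states and inputs, J = E[Σ_t (xₜᵀQxₜ + uₜᵀRuₜ)]. A minimal sketch of evaluating this cost along one trajectory (the dimensions and weight matrices below are illustrative, not taken from the paper):

```python
import numpy as np

def lqg_cost(xs, us, Q, R) -> float:
    """Accumulated quadratic cost sum_t (x'Qx + u'Ru) along one
    trajectory; averaging over many noisy rollouts approximates the
    expectation in the LQG criterion."""
    return (sum(float(x @ Q @ x) for x in xs)
            + sum(float(u @ R @ u) for u in us))

# Illustrative trajectory of a 2-state, 1-input system.
Q = np.eye(2)                 # state penalty
R = np.array([[0.1]])         # input penalty
xs = [np.array([1.0, 0.0]), np.array([0.5, -0.2]), np.array([0.1, 0.05])]
us = [np.array([0.4]), np.array([0.2])]
print(round(lqg_cost(xs, us, Q, R), 4))  # -> 1.3225
```

A lower accumulated cost means the states stay closer to the setpoint with less control effort, which is why the LQG cost serves as the common yardstick across scheduling methods.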

This study was supported by the National Natural Science Foundation of China, the Liaoning Provincial Natural Science Foundation, and others.

Contact

ZHANG Lei

Shenyang Institute of Automation

E-mail:

Topics
Artificial Intelligence