
D3DQN-CAA: A DRL-based Adaptive Edge Computing Task Scheduling Method
    Abstract:

    Existing deep-reinforcement-learning-based task scheduling for edge computing suffers from fixed action-space exploration, low sample efficiency, large memory demands, and poor stability. To schedule tasks effectively in edge computing systems with relatively limited computing resources, an adaptive edge computing task scheduling method, D3DQN-CAA, is proposed based on an improved deep reinforcement learning model, D3DQN (Dueling Double DQN). In the task offloading decision, the mapping between tasks and processors is treated as a multidimensional knapsack problem, and the computing node that best matches the currently scheduled task is selected according to the state information of the task and of the candidate computing nodes. To improve the parameter-update efficiency of the evaluation network and reduce the effect of overestimation, a comprehensive Q-value calculation method is proposed; to accelerate the convergence of the neural networks, a strategy that adaptively and dynamically adjusts the degree of action-space exploration is proposed; and to reduce the required storage resources and improve sample efficiency, an adaptive lightweight prioritized replay mechanism is proposed. Experimental results show that, compared with multiple benchmark algorithms, D3DQN-CAA effectively reduces the number of training steps of the deep reinforcement learning network and makes full use of edge computing resources, improving the real-time performance of task processing and reducing system energy consumption.
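    For context, the abstract's overestimation-reduction claim builds on the standard Double DQN target and dueling Q-head. Below is a minimal sketch of those two standard building blocks only; the paper's own "comprehensive Q-value" calculation, adaptive exploration, and lightweight prioritized replay are not reproduced, and all function names here are illustrative, not from the paper.

    ```python
    import numpy as np

    def double_dqn_target(q_eval_next, q_target_next, reward, gamma=0.99, done=False):
        """Standard Double DQN target: the evaluation network selects the next
        action, while the target network evaluates it, reducing overestimation."""
        a_star = int(np.argmax(q_eval_next))   # action selection via evaluation net
        q_next = q_target_next[a_star]         # action evaluation via target net
        return reward if done else reward + gamma * q_next

    def dueling_q(state_value, advantages):
        """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
        advantages = np.asarray(advantages, dtype=float)
        return state_value + advantages - advantages.mean()
    ```

    The decoupling in `double_dqn_target` is what distinguishes Double DQN from vanilla DQN, where a single network both selects and evaluates the action and thus tends to overestimate Q-values.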

History
  • Online: July 05, 2024