Industrial wireless network resource allocation method based on multi-agent deep reinforcement learning

Document No.: 142956    Publication date: 2021-10-22

Reading note: this technology, "Industrial wireless network resource allocation method based on multi-agent deep reinforcement learning" (基于多智能体深度强化学习的工业无线网络资源分配方法), was created by 于海斌, 刘晓宇, 许驰, 夏长清, 金曦 and 曾鹏 on 2021-06-24. Its main content is as follows: The invention relates to industrial wireless network technology, and in particular to an industrial wireless network resource allocation method based on multi-agent deep reinforcement learning, comprising the following steps: establishing an end-edge collaborative industrial wireless network; formulating the optimization problem of end-edge resource allocation in the industrial wireless network; establishing a Markov decision model; constructing a resource allocation neural network model using multi-agent deep reinforcement learning; training the neural network model offline until the reward converges to a stable value; and, based on the offline training result, performing online resource allocation in the industrial wireless network and processing industrial tasks. The invention can perform end-edge collaborative resource allocation for the industrial wireless network in real time and with high energy efficiency, and minimizes the system overhead under the constraints of limited energy and computing resources.

1. An industrial wireless network resource allocation method based on multi-agent deep reinforcement learning, characterized by comprising the following steps:

1) establishing an end-edge collaborative industrial wireless network;

2) based on the end-edge collaborative industrial wireless network, formulating the optimization problem of end-edge resource allocation in the industrial wireless network;

3) establishing a Markov decision model according to the optimization problem;

4) adopting multi-agent deep reinforcement learning to construct a resource allocation neural network model;

5) performing offline training on the resource allocation neural network model by using the Markov decision model until the reward converges to a stable value;

6) based on the offline training result, performing online resource allocation in the industrial wireless network and processing industrial tasks.

2. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the end-edge collaborative industrial wireless network comprises: n industrial base stations and M industrial terminals;

the industrial base station has edge computing capability to provide computing resources for the industrial terminals, and is used for scheduling the industrial terminals within the network coverage and for communication between the industrial terminals and the industrial base station;

the industrial terminal is used for generating different types of industrial tasks in real time and communicates with the industrial base station through a wireless channel.

3. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the optimization problem of end-edge resource allocation in the industrial wireless network is formulated as:

min Σ_{m∈M} [ ω·T_m + (1-ω)·E_m ]

s.t.

C1: 0 ≤ p_m ≤ P, ∀m ∈ M,

C2: Σ_{m∈M: o_m=n} f_m^n ≤ F_n, ∀n ∈ N,

C3: f_m^n ≤ F_n, ∀m ∈ M, ∀n ∈ N,

C4: o_m ∈ {0, 1, ..., N}, ∀m ∈ M,

C5: if industrial terminal m offloads its industrial task, it is offloaded to exactly one industrial base station n ∈ N,

where Σ_{m∈M}[ω·T_m + (1-ω)·E_m] represents the overhead of the system; T_m represents the time delay of industrial terminal m; E_m represents the energy consumption of industrial terminal m; ω represents the time-delay weight and (1-ω) represents the energy-consumption weight; N denotes the set of industrial base stations and M denotes the set of industrial terminals;

C1 is the energy constraint of industrial terminal m: p_m represents the transmit power of industrial terminal m, and P represents the maximum transmit power;

C2 is a computing-resource constraint: f_m^n denotes the computing resource allocated to industrial terminal m by industrial base station n, and F_n denotes the maximum computing resource of industrial base station n; the sum of the computing resources obtained by the industrial terminals offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n;

C3 is a computing-resource constraint: the computing resource obtained by an industrial terminal m offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n;

C4 is a computation-decision constraint: o_m represents the computation decision of industrial terminal m, which can only choose to process the industrial task locally, i.e. o_m = 0, or offload the industrial task to an industrial base station n, i.e. o_m = n;

C5 is a computation-decision constraint: if industrial terminal m offloads an industrial task, it can only be offloaded to one industrial base station in the set N of industrial base stations.

4. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the Markov decision model describes the process of maximizing the long-term cumulative reward by executing different action vectors to transition between state vectors, and the transition probability is described as:

f_m(t)* = arg max_{f_m} E[ Σ_τ γ^τ · r_m(t+τ) ],

where f_m denotes the transition probability of transitioning from an arbitrary state vector to another state vector, f_m(t)* represents the optimal transition probability between the state vectors at time slot t, E[Σ_τ γ^τ·r_m(t+τ)] is the long-term cumulative reward of the system, γ represents the discount factor, and τ represents the time slot; r_m(t) = ω·r_{m,d}(t) + (1-ω)·r_{m,e}(t);

The Markov decision model comprises a state vector, an action vector and a reward vector, wherein:

the state vector is the state of industrial terminal m at time slot t, expressed as s_m(t), and consists of: o_m(t), the computation decision of industrial terminal m at the beginning of time slot t; d_m(t), the data size of the industrial task generated by industrial terminal m at time slot t; c_m(t), the computing resources required by the industrial task generated by industrial terminal m at time slot t; and the distances between industrial terminal m and all N industrial base stations at time slot t;

the action vector is the action of industrial terminal m at time slot t and is expressed as a_m(t) = {a_{m,o}(t), a_{m,p}(t)}, where a_{m,o}(t) represents the computation decision of industrial terminal m at the end of time slot t, and a_{m,p}(t) represents the transmit power of industrial terminal m at the end of time slot t;

the reward vector is the reward obtained by industrial terminal m at time slot t and is expressed as r_m(t) = {r_{m,d}(t), r_{m,e}(t)}, where r_{m,d}(t) represents the time-delay reward of industrial terminal m at time slot t, and r_{m,e}(t) represents the energy-consumption reward of industrial terminal m at time slot t.

5. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the step 4) is specifically as follows:

each industrial terminal is an intelligent agent and consists of an actor structure and a critic structure;

the actor structure consists of an actor-eval deep neural network and an actor-target deep neural network; the model parameters of the actor-eval and actor-target deep neural networks comprise the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the actor-eval and actor-target deep neural networks, the actor-eval deep neural network hyper-parameters θ^π, and the actor-target deep neural network hyper-parameters θ^π';

the critic structure consists of a critic-eval deep neural network and a critic-target deep neural network; the model parameters of the critic-eval and critic-target deep neural networks comprise the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the critic-eval and critic-target deep neural networks, the critic-eval deep neural network hyper-parameters θ^Q, and the critic-target deep neural network hyper-parameters θ^Q'.

6. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the step 5) comprises the steps of:

5.1) inputting the state vector s_m of the current time slot of industrial terminal m and the state vector s'_m of the next time slot into the actor structure, outputting the action vectors a_m and a'_m, and obtaining the rewards r_m and r'_m;

5.2) cyclically executing step 5.1) for each industrial terminal, taking the tuple <s_m(t), a_m(t), r_m(t)> of each time slot as an experience to obtain K experiences, and storing the K experiences into two experience pools according to their different weights, where K is a constant;

5.3) inputting the state vector S and the action vector A of the current time slot of all industrial terminals, and the state vector S' and the action vector A' of the next time slot, into the critic structure of industrial terminal m, and outputting the value functions Q_m(S, A) and Q_m(S', A') respectively;

5.4) according to the reinforcement learning Bellman update formula Q_m(S, A) = r_m + γ·Q_m(S', A'), updating the actor-eval deep neural network hyper-parameters θ^π and the critic-eval deep neural network hyper-parameters θ^Q by the stochastic gradient descent method;

5.5) updating the actor-target deep neural network hyper-parameters θ^π' by θ^π' ← λ·θ^π + (1-λ)·θ^π', and updating the critic-target deep neural network hyper-parameters θ^Q' by θ^Q' ← λ·θ^Q + (1-λ)·θ^Q', where λ is an update factor and λ ∈ [0,1];

5.6) performing priority-weight experience replay, and repeating steps 5.1)-5.5) until the reward converges to a stable value, thereby obtaining the trained multi-agent deep reinforcement learning model.

7. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 6, wherein in step 5.1), an ε-greedy algorithm is adopted to dynamically change the action-vector output probability, specifically:

the ε-greedy method selects the output action vector, choosing a randomly selected action vector a_r(t) with probability ε and the action vector a_v(t) with the largest reward with probability 1-ε;

ε = (1-δ)^U·ε_0 denotes the selection probability, where ε_0 denotes the initial selection probability, δ denotes the decay rate, and U denotes the number of training iterations.

8. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 6, wherein in step 5.2), two experience pools are set to store experiences with different weights respectively, and the probability of sampling experiences from the different experience pools changes dynamically with the number of training iterations of the neural network model, specifically:

since different experiences contribute differently to the convergence of the deep neural network, the descent gradient of each experience is used as the weight of that experience;

the weights of the K experiences are averaged; experiences whose weight is higher than the weight average are high-weight experiences, and experiences whose weight is lower than the weight average are low-weight experiences;

two experience pools A and B are set, where pool A stores the high-weight experiences and pool B stores the low-weight experiences; in the initial training stage, the random sampling probabilities of pools A and B are equal; as the number of training iterations increases, the sampling probability of pool A gradually increases and the sampling probability of pool B gradually decreases; the sampling probability is g_x, x ∈ {A, B}, where 0 ≤ g_x ≤ 1 represents the sampling probability of pool x, g_0 represents the initial sampling probability of pools A and B, Δg_x represents the sampling-probability decay value of pool x, and U represents the number of training iterations.

9. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 6, wherein in step 5.4), the actor-eval deep neural network gradient is ∇_{θ^π}J = E_π[ ∇_{θ^π}π(s_m|θ^π) · ∇_{a_m}Q_m(S, A|θ^Q) ], and the critic-eval deep neural network gradient is ∇_{θ^Q}L = E[ (r_m + γ·Q_m(S', A'|θ^Q') − Q_m(S, A|θ^Q)) · ∇_{θ^Q}Q_m(S, A|θ^Q) ], where ∇_{θ^π}J represents the descent gradient of the actor-eval deep neural network, ∇_{θ^Q}L represents the descent gradient of the critic-eval deep neural network, γ represents the discount factor, E represents the mathematical expectation, and π represents the current policy of the actor-eval deep neural network.

10. The multi-agent deep reinforcement learning-based industrial wireless network resource allocation method according to claim 1, wherein the step 6) comprises the steps of:

6.1) taking the state vector s_m(t) of the current time slot t of industrial terminal m as the input of the actor structure of the m-th agent that has completed offline training, and obtaining the output action vector a_m(t);

6.2) based on the obtained output action vector a_m(t), industrial terminal m executes the computation decision according to a_m(t), allocates the transmit power, computing and energy resources, and processes the industrial task;

6.3) executing steps 6.1) to 6.2) for all M industrial terminals in the industrial wireless network to obtain the resource allocation results of the M industrial terminals, and processing the industrial tasks according to the resource allocation results.

Technical Field

The invention relates to resource allocation under the constraint of limited energy and computing resources, belongs to the technical field of industrial wireless networks, and particularly relates to an industrial wireless network resource allocation method based on multi-agent deep reinforcement learning.

Background

With the development of Industry 4.0, a large number of distributed industrial terminals are interconnected through industrial wireless networks and generate massive computation-intensive and delay-sensitive industrial tasks. However, the local energy and computing resources of industrial terminals are limited, and it is difficult to meet the quality-of-service requirements of these industrial tasks.

Edge computing servers deployed at the network edge can provide nearby computing-resource support for industrial terminals, but large-scale concurrent offloading by industrial terminals can overload the edge computing servers and congest the wireless links, adding extra time delay and energy consumption. Jointly allocating the energy and computing resources of the industrial terminals together with the computing resources of the edge servers, i.e. establishing end-edge collaborative resource allocation for the industrial wireless network, is an effective solution.

However, conventional resource allocation algorithms are generally based on a known system model. In industrial scenarios with large-scale machine-to-machine communication, the number of industrial terminals and their data are time-varying, an accurate system model is difficult to obtain, and the state space of the algorithm explodes. Deep reinforcement learning can estimate the system model with deep learning and solve the resource allocation with reinforcement learning, effectively addressing the problems of difficult system modeling and state-space explosion.

However, currently common deep reinforcement learning algorithms are based on a single agent, that is, one agent with global system information models the system and solves the resource allocation strategy. Industrial terminals are mobile and their energy and computing resources are time-varying, so it is difficult for a single agent to track the dynamic changes of the system information; moreover, collecting the global system information increases time delay and energy consumption. From a multi-agent perspective, each industrial terminal observes local system information and the agents cooperatively model the system to solve the resource allocation strategy, which effectively overcomes the shortcomings of the single-agent deep reinforcement learning method.

Disclosure of Invention

To achieve this purpose, the invention provides an industrial wireless network resource allocation method based on multi-agent deep reinforcement learning. The method addresses the problem that industrial terminals with limited energy and computing resources in an industrial wireless network can hardly support local, real-time and efficient processing of computation-intensive and delay-sensitive industrial tasks, and takes into account the difficulties of system modeling and algorithm state-space explosion that the dynamic, time-varying characteristics of the industrial wireless network cause for traditional methods. In particular, when large numbers of industrial terminals request industrial task processing, the method uses a multi-agent deep reinforcement learning algorithm to allocate resources to multiple industrial terminals in the industrial wireless network in real time and efficiently under the constraints of limited energy and computing resources, minimizing the system overhead.

The technical scheme adopted by the invention for realizing the purpose is as follows:

the industrial wireless network resource allocation method based on multi-agent deep reinforcement learning comprises the following steps:

1) establishing an end edge cooperative industrial wireless network;

2) based on the industrial wireless network with cooperative end sides, the optimization problem of the resource distribution of the end sides of the industrial wireless network is established;

3) establishing a Markov decision model according to an optimization problem;

4) adopting multi-agent deep reinforcement learning to construct a resource distribution neural network model;

5) performing offline training on the resource distribution neural network model by using a Markov decision model until the reward converges to a stable value;

6) and based on the offline training result, the industrial wireless network performs resource allocation on line and processes the industrial task.

The end-edge collaborative industrial wireless network comprises N industrial base stations and M industrial terminals;

the industrial base station has edge computing capability to provide computing resources for the industrial terminals, and is used for scheduling the industrial terminals within the network coverage and for communication between the industrial terminals and the industrial base station;

the industrial terminal is used for generating different types of industrial tasks in real time and communicates with the industrial base station through a wireless channel.

The optimization problem of end-edge resource allocation in the industrial wireless network is formulated as:

min Σ_{m∈M} [ ω·T_m + (1-ω)·E_m ]

s.t.

C1: 0 ≤ p_m ≤ P, ∀m ∈ M,

C2: Σ_{m∈M: o_m=n} f_m^n ≤ F_n, ∀n ∈ N,

C3: f_m^n ≤ F_n, ∀m ∈ M, ∀n ∈ N,

C4: o_m ∈ {0, 1, ..., N}, ∀m ∈ M,

C5: if industrial terminal m offloads its industrial task, it is offloaded to exactly one industrial base station n ∈ N,

where Σ_{m∈M}[ω·T_m + (1-ω)·E_m] represents the overhead of the system; T_m represents the time delay of industrial terminal m; E_m represents the energy consumption of industrial terminal m; ω represents the time-delay weight and (1-ω) represents the energy-consumption weight; N denotes the set of industrial base stations and M denotes the set of industrial terminals;

C1 is the energy constraint of industrial terminal m: p_m represents the transmit power of industrial terminal m, and P represents the maximum transmit power;

C2 is a computing-resource constraint: f_m^n denotes the computing resource allocated to industrial terminal m by industrial base station n, and F_n denotes the maximum computing resource of industrial base station n; the sum of the computing resources obtained by the industrial terminals offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n;

C3 is a computing-resource constraint: the computing resource obtained by an industrial terminal m offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n;

C4 is a computation-decision constraint: o_m represents the computation decision of industrial terminal m, which can only choose to process the industrial task locally, i.e. o_m = 0, or offload the industrial task to an industrial base station n, i.e. o_m = n;

C5 is a computation-decision constraint: if industrial terminal m offloads an industrial task, it can only be offloaded to one industrial base station in the set N of industrial base stations.

The Markov decision model describes the process of maximizing the long-term cumulative reward by executing different action vectors to transition between state vectors, and the transition probability is described as:

f_m(t)* = arg max_{f_m} E[ Σ_τ γ^τ · r_m(t+τ) ],

where f_m denotes the transition probability of transitioning from an arbitrary state vector to another state vector, f_m(t)* represents the optimal transition probability between the state vectors at time slot t, E[Σ_τ γ^τ·r_m(t+τ)] is the long-term cumulative reward of the system, γ represents the discount factor, and τ represents the time slot; r_m(t) = ω·r_{m,d}(t) + (1-ω)·r_{m,e}(t);

The Markov decision model comprises a state vector, an action vector and a reward vector, wherein:

the state vector is the state of industrial terminal m at time slot t, expressed as s_m(t), and consists of: o_m(t), the computation decision of industrial terminal m at the beginning of time slot t; d_m(t), the data size of the industrial task generated by industrial terminal m at time slot t; c_m(t), the computing resources required by the industrial task generated by industrial terminal m at time slot t; and the distances between industrial terminal m and all N industrial base stations at time slot t;

the action vector is the action of industrial terminal m at time slot t and is expressed as a_m(t) = {a_{m,o}(t), a_{m,p}(t)}, where a_{m,o}(t) represents the computation decision of industrial terminal m at the end of time slot t, and a_{m,p}(t) represents the transmit power of industrial terminal m at the end of time slot t;

the reward vector is the reward obtained by industrial terminal m at time slot t and is expressed as r_m(t) = {r_{m,d}(t), r_{m,e}(t)}, where r_{m,d}(t) represents the time-delay reward of industrial terminal m at time slot t, and r_{m,e}(t) represents the energy-consumption reward of industrial terminal m at time slot t.

The step 4) is specifically as follows:

each industrial terminal is an intelligent agent and consists of an actor structure and a critic structure;

the actor structure consists of an actor-eval deep neural network and an actor-target deep neural network; the model parameters of the actor-eval and actor-target deep neural networks comprise the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the actor-eval and actor-target deep neural networks, the actor-eval deep neural network hyper-parameters θ^π, and the actor-target deep neural network hyper-parameters θ^π';

the critic structure consists of a critic-eval deep neural network and a critic-target deep neural network; the model parameters of the critic-eval and critic-target deep neural networks comprise the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the critic-eval and critic-target deep neural networks, the critic-eval deep neural network hyper-parameters θ^Q, and the critic-target deep neural network hyper-parameters θ^Q'.

The step 5) comprises the following steps:

5.1) inputting the state vector s_m of the current time slot of industrial terminal m and the state vector s'_m of the next time slot into the actor structure, outputting the action vectors a_m and a'_m, and obtaining the rewards r_m and r'_m;

5.2) cyclically executing step 5.1) for each industrial terminal, taking the tuple <s_m(t), a_m(t), r_m(t)> of each time slot as an experience to obtain K experiences, and storing the K experiences into two experience pools according to their different weights, where K is a constant;

5.3) inputting the state vector S and the action vector A of the current time slot of all industrial terminals, and the state vector S' and the action vector A' of the next time slot, into the critic structure of industrial terminal m, and outputting the value functions Q_m(S, A) and Q_m(S', A') respectively;

5.4) according to the reinforcement learning Bellman update formula Q_m(S, A) = r_m + γ·Q_m(S', A'), updating the actor-eval deep neural network hyper-parameters θ^π and the critic-eval deep neural network hyper-parameters θ^Q by the stochastic gradient descent method;

5.5) updating the actor-target deep neural network hyper-parameters θ^π' by θ^π' ← λ·θ^π + (1-λ)·θ^π', and updating the critic-target deep neural network hyper-parameters θ^Q' by θ^Q' ← λ·θ^Q + (1-λ)·θ^Q', where λ is an update factor and λ ∈ [0,1];

5.6) performing priority-weight experience replay, and repeating steps 5.1)-5.5) until the reward converges to a stable value, thereby obtaining the trained multi-agent deep reinforcement learning model.

In step 5.1), an ε-greedy algorithm is adopted to dynamically change the action-vector output probability, specifically:

the ε-greedy method selects the output action vector, choosing a randomly selected action vector a_r(t) with probability ε and the action vector a_v(t) with the largest reward with probability 1-ε;

ε = (1-δ)^U·ε_0 denotes the selection probability, where ε_0 denotes the initial selection probability, δ denotes the decay rate, and U denotes the number of training iterations.

In step 5.2), two experience pools are set to store experiences with different weights respectively, and the probability of sampling experiences from the different experience pools changes dynamically with the number of training iterations of the neural network model, specifically:

since different experiences contribute differently to the convergence of the deep neural network, the descent gradient of each experience is used as the weight of that experience;

the weights of the K experiences are averaged; experiences whose weight is higher than the weight average are high-weight experiences, and experiences whose weight is lower than the weight average are low-weight experiences;

two experience pools A and B are set, where pool A stores the high-weight experiences and pool B stores the low-weight experiences; in the initial training stage, the random sampling probabilities of pools A and B are equal; as the number of training iterations increases, the sampling probability of pool A gradually increases and the sampling probability of pool B gradually decreases; the sampling probability is g_x, x ∈ {A, B}, where 0 ≤ g_x ≤ 1 represents the sampling probability of pool x, g_0 represents the initial sampling probability of pools A and B, Δg_x represents the sampling-probability decay value of pool x, and U represents the number of training iterations.

In step 5.4), the actor-eval deep neural network gradient is ∇_{θ^π}J = E_π[ ∇_{θ^π}π(s_m|θ^π) · ∇_{a_m}Q_m(S, A|θ^Q) ], and the critic-eval deep neural network gradient is ∇_{θ^Q}L = E[ (r_m + γ·Q_m(S', A'|θ^Q') − Q_m(S, A|θ^Q)) · ∇_{θ^Q}Q_m(S, A|θ^Q) ], where ∇_{θ^π}J represents the descent gradient of the actor-eval deep neural network, ∇_{θ^Q}L represents the descent gradient of the critic-eval deep neural network, γ represents the discount factor, E represents the mathematical expectation, and π represents the current policy of the actor-eval deep neural network.

The step 6) comprises the following steps:

6.1) taking the state vector s_m(t) of the current time slot t of industrial terminal m as the input of the actor structure of the m-th agent that has completed offline training, and obtaining the output action vector a_m(t);

6.2) based on the obtained output action vector a_m(t), industrial terminal m executes the computation decision according to a_m(t), allocates the transmit power, computing and energy resources, and processes the industrial task;

6.3) executing steps 6.1) to 6.2) for all M industrial terminals in the industrial wireless network to obtain the resource allocation results of the M industrial terminals, and processing the industrial tasks according to the resource allocation results.

The invention has the following beneficial effects and advantages:

1. Aiming at the quality-of-service requirements of the computation-intensive and delay-sensitive industrial tasks generated by industrial terminals in an industrial wireless network, the invention establishes end-edge collaborative resource allocation for the industrial wireless network; the resource allocation algorithm based on multi-agent deep reinforcement learning solves the problems of difficult modeling and algorithm state-space explosion that the dynamic, time-varying characteristics of the industrial wireless network cause for traditional methods, and ensures reasonable allocation of energy and computing resources and real-time, efficient processing of the industrial tasks.

2. The method has strong universality and practicability; it can adaptively handle the dynamic, time-varying characteristics of the industrial wireless network, effectively realize industrial wireless network resource allocation under the constraints of limited energy and computing resources, and improve the safety and stability of the system.

Drawings

FIG. 1 is a flow chart of the method of the present invention;

FIG. 2 is a diagram of the end-edge collaborative industrial wireless network model;

FIG. 3 is a diagram of an actor-eval and actor-target deep neural network architecture;

FIG. 4 is a diagram of a critic-eval and critic-target deep neural network architecture;

FIG. 5 is a flow chart of multi-agent deep reinforcement learning training.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and examples.

The invention relates to industrial wireless network technology and comprises the following steps: establishing an end-edge collaborative industrial wireless network; formulating the optimization problem of end-edge resource allocation in the industrial wireless network; establishing a Markov decision model; constructing a resource allocation neural network model using multi-agent deep reinforcement learning; training the neural network model offline until the reward converges to a stable value; and, based on the offline training result, performing online resource allocation in the industrial wireless network and processing industrial tasks. Aiming at the quality-of-service requirements of the computation-intensive and delay-sensitive industrial tasks generated by industrial terminals in the industrial wireless network, the invention establishes an end-edge collaborative industrial wireless network model and provides a resource allocation algorithm based on multi-agent deep reinforcement learning. The method fully considers the problems of difficult modeling and algorithm state-space explosion that the dynamic, time-varying characteristics of the industrial wireless network cause for traditional methods; it can reasonably allocate energy and computing resources under the constraints of limited energy and computing resources, and ensures real-time and efficient processing of the industrial tasks.

The invention mainly comprises the following implementation processes, as shown in fig. 1:

1) establishing an end-edge collaborative industrial wireless network;

2) formulating the optimization problem of end-edge resource allocation in the industrial wireless network;

3) establishing a Markov decision model;

4) adopting multi-agent deep reinforcement learning to construct a resource allocation neural network model;

5) training the neural network model offline until the reward converges to a stable value;

6) based on the offline training result, performing online resource allocation in the industrial wireless network and processing industrial tasks.

The embodiment is implemented according to the flow shown in fig. 1, and the specific steps are as follows:

1. Establish an end-edge collaborative industrial wireless network model, as shown in fig. 2, comprising N industrial base stations and M industrial terminals. The industrial base station is used for scheduling the industrial terminals within the network coverage and for communication between the industrial terminals and the industrial base station; it has edge computing capability and can provide computing resources for the industrial terminals. The industrial terminals generate different types of industrial tasks in real time and communicate with the industrial base stations through wireless channels; both the computing resources and the energy of the industrial terminals are limited.
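As a concrete illustration of this network model, the following minimal Python sketch sets up N base stations and M terminals; all class names, field names and numerical values are illustrative assumptions and are not taken from the patent.

```python
# Minimal sketch of the end-edge collaborative network model (illustrative only).
import random
from dataclasses import dataclass

@dataclass
class IndustrialBaseStation:
    station_id: int
    max_compute: float        # F_n, maximum computing resource of base station n

@dataclass
class IndustrialTerminal:
    terminal_id: int
    max_tx_power: float       # P, maximum transmit power of the terminal

    def generate_task(self):
        # each time slot the terminal generates a task: (data size d_m, required cycles c_m)
        return random.uniform(0.1, 1.0), random.uniform(1e6, 1e8)

N, M = 4, 20                  # example sizes, not from the patent
stations = [IndustrialBaseStation(n + 1, max_compute=1e10) for n in range(N)]
terminals = [IndustrialTerminal(m, max_tx_power=0.2) for m in range(M)]
```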

2. Formulate the optimization problem of end-edge resource allocation in the industrial wireless network:

min Σ_{m∈M} [ ω·T_m + (1-ω)·E_m ]

s.t.

C1: 0 ≤ p_m ≤ P, ∀m ∈ M,

C2: Σ_{m∈M: o_m=n} f_m^n ≤ F_n, ∀n ∈ N,

C3: f_m^n ≤ F_n, ∀m ∈ M, ∀n ∈ N,

C4: o_m ∈ {0, 1, ..., N}, ∀m ∈ M,

C5: if industrial terminal m offloads its industrial task, it is offloaded to exactly one industrial base station n ∈ N,

where Σ_{m∈M}[ω·T_m + (1-ω)·E_m] represents the overhead of the system; T_m represents the time delay of industrial terminal m; E_m represents the energy consumption of industrial terminal m; ω represents the time-delay weight and (1-ω) represents the energy-consumption weight; N denotes the set of industrial base stations and M denotes the set of industrial terminals. C1 is the energy constraint of industrial terminal m: p_m represents the transmit power of industrial terminal m and P represents the maximum transmit power. C2 is a computing-resource constraint: f_m^n denotes the computing resource allocated to industrial terminal m by industrial base station n, F_n denotes the maximum computing resource of industrial base station n, and the sum of the computing resources obtained by the industrial terminals offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n. C3 is a computing-resource constraint: the computing resource obtained by an industrial terminal m offloaded to industrial base station n must not exceed the maximum computing resource of industrial base station n. C4 is a computation-decision constraint: o_m represents the computation decision of industrial terminal m, which can only choose to process the industrial task locally, i.e. o_m = 0, or offload the industrial task to an industrial base station n, i.e. o_m = n. C5 is a computation-decision constraint: if industrial terminal m offloads an industrial task, it can only be offloaded to one industrial base station in the set N of industrial base stations.
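The objective and constraints above can be checked numerically. The following sketch is a hypothetical helper, assuming the per-terminal delays T_m and energies E_m are supplied by some delay/energy model that is not specified here.

```python
# Illustrative evaluation of the objective and of constraints C1-C5.
def system_overhead(T, E, omega):
    """U = sum_m [ omega*T_m + (1 - omega)*E_m ]."""
    return sum(omega * t + (1.0 - omega) * e for t, e in zip(T, E))

def feasible(p, P, o, f, F):
    """p[m]: transmit power; o[m]: 0 = local, n in {1..N} = offload to station n;
    f[m]: computing resource granted to terminal m; F[n-1]: capacity F_n of station n."""
    M, N = len(p), len(F)
    if any(not (0.0 <= p[m] <= P) for m in range(M)):            # C1: power budget
        return False
    if any(o[m] not in range(N + 1) for m in range(M)):          # C4/C5: local or exactly one station
        return False
    for n in range(1, N + 1):                                    # C2: per-station capacity
        if sum(f[m] for m in range(M) if o[m] == n) > F[n - 1]:
            return False
    if any(o[m] != 0 and f[m] > F[o[m] - 1] for m in range(M)):  # C3: per-terminal cap
        return False
    return True
```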

3. Establish a Markov decision model; the state vector, action vector, reward vector and transition probability are defined as follows:

(1) the state vector of industrial terminal m at time slot t is s_m(t), which consists of: o_m(t), the computation decision of industrial terminal m at the beginning of time slot t, where o_m(t) = 0 means the industrial task is processed locally and o_m(t) = n means the industrial task is offloaded to industrial base station n; d_m(t), the data size of the industrial task generated by industrial terminal m at time slot t; c_m(t), the computing resources required by the industrial task generated by industrial terminal m at time slot t; and the distances between industrial terminal m and all industrial base stations at time slot t;

(2) the action vector of industrial terminal m at time slot t is a_m(t) = {a_{m,o}(t), a_{m,p}(t)}, where a_{m,o}(t) ∈ {0, 1, ..., N} represents the computation decision of industrial terminal m at the end of time slot t, with a_{m,o}(t) = 0 denoting local processing of the industrial task and a_{m,o}(t) = n denoting offloading of the industrial task to industrial base station n; a_{m,p}(t) ∈ {0, 1, ..., P} represents the transmit power of industrial terminal m at the end of time slot t, with a_{m,p}(t) = 0 denoting local processing of the industrial task and a_{m,p}(t) = p denoting offloading of the industrial task at transmit power p;

(3) the reward vector of industrial terminal m at time slot t is r_m(t) = {r_{m,d}(t), r_{m,e}(t)}, where r_{m,d}(t) represents the time-delay reward of industrial terminal m at time slot t, determined by the total time delay of local processing or the total time delay of offloading to industrial base station n for processing; r_{m,e}(t) represents the energy-consumption reward of industrial terminal m at time slot t, determined by the total energy consumption of local processing or the total energy consumption of offloading to industrial base station n for processing;

(4) at time slot t, transitions between state vectors occur with probability f_m(t), and the transition probability is optimized by maximizing the long-term cumulative reward, i.e. f_m(t)* = arg max E[ Σ_τ γ^τ·r_m(t+τ) ], where f_m(t)* represents the optimal transition probability, E[Σ_τ γ^τ·r_m(t+τ)] is the long-term cumulative reward of the system, γ denotes the discount factor, and τ denotes the time slot; r_m(t) = ω·r_{m,d}(t) + (1-ω)·r_{m,e}(t), so that both the time-delay and energy-consumption overheads are considered.
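A compact way to represent these MDP elements in code is sketched below; the field names are assumptions chosen only to mirror the symbols above.

```python
# Sketch of the per-terminal MDP elements (field names are illustrative).
from collections import namedtuple

State = namedtuple("State", ["o", "d", "c", "dist"])   # o_m(t), d_m(t), c_m(t), distances to the N stations
Action = namedtuple("Action", ["o", "p"])               # a_{m,o}(t) and a_{m,p}(t)
Reward = namedtuple("Reward", ["r_d", "r_e"])           # delay reward r_{m,d}(t), energy reward r_{m,e}(t)

def scalar_reward(r, omega):
    # r_m(t) = omega * r_{m,d}(t) + (1 - omega) * r_{m,e}(t)
    return omega * r.r_d + (1.0 - omega) * r.r_e
```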

4. Adopt multi-agent deep reinforcement learning to construct the resource allocation neural network model, as shown in fig. 3 and fig. 4:

(1) each industrial terminal is an agent consisting of an actor structure and a critic structure;

(2) initialize the actor-eval and actor-target deep neural network model parameters, comprising the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the actor-eval and actor-target deep neural networks, the actor-eval deep neural network hyper-parameters θ^π, and the actor-target deep neural network hyper-parameters θ^π';

(3) initialize the critic-eval and critic-target deep neural network model parameters, comprising the number of input-layer neurons, the number of hidden-layer neurons and the number of output-layer neurons of the critic-eval and critic-target deep neural networks, the critic-eval deep neural network hyper-parameters θ^Q, and the critic-target deep neural network hyper-parameters θ^Q'.
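One possible PyTorch realization of an agent's four networks is sketched below. The patent only fixes the eval/target actor-critic structure; the layer sizes, activation functions and example dimensions are assumptions.

```python
# Illustrative PyTorch sketch of one agent's actor/critic eval and target networks.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),   # outputs later rescaled to valid decisions/powers
        )

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    # centralized critic: takes the joint state S and joint action A of all M agents
    def __init__(self, joint_state_dim, joint_action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_state_dim + joint_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, S, A):
        return self.net(torch.cat([S, A], dim=-1))

# example dimensions for N = 4 stations and M = 20 terminals (assumed values)
actor_eval = Actor(state_dim=3 + 4, action_dim=2)
actor_target = copy.deepcopy(actor_eval)                 # theta^pi' initialized from theta^pi
critic_eval = Critic(joint_state_dim=20 * (3 + 4), joint_action_dim=20 * 2)
critic_target = copy.deepcopy(critic_eval)
```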

5. Train the neural network model offline until the reward converges to a stable value, as shown in fig. 5; the specific steps are as follows:

(1) Input the state vector s_m(t) of industrial terminal m at time slot t into the actor-eval deep neural network, output the action vector a_m(t), obtain the reward r_m(t), and transition to the next state vector s_m(t+1).

An ε-greedy algorithm is adopted to dynamically change the action-vector output probability, specifically:

the ε-greedy method selects the output action vector, choosing a randomly selected action vector a_r(t) with probability ε and the action vector a_v(t) with the largest reward with probability 1-ε;

ε = (1-δ)^U·ε_0 denotes the selection probability, where ε_0 denotes the initial selection probability, δ denotes the decay rate, and U denotes the number of training iterations.
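A sketch of this decaying ε-greedy action selection is given below, assuming the actor-eval network from the previous sketch and a caller-supplied random-action generator.

```python
# Illustrative epsilon-greedy selection with epsilon = (1 - delta)**U * epsilon_0.
import random
import torch

def select_action(actor_eval, state, epsilon0, delta, U, random_action_fn):
    eps = (1.0 - delta) ** U * epsilon0        # selection probability decays with training count U
    if random.random() < eps:
        return random_action_fn()              # a_r(t): randomly selected action
    with torch.no_grad():
        return actor_eval(state)               # a_v(t): action proposed by the actor-eval network
```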

(2) Input the state vector s_m(t+1) of industrial terminal m at time slot t+1 into the actor-target deep neural network, output the action vector a_m(t+1), and obtain the reward r_m(t+1);

(3) Take the tuple <s_m(t), a_m(t), r_m(t)> of each time slot as an experience; cyclically execute steps (1) to (2) for each industrial terminal to obtain K experiences, and store the K experiences into two experience pools according to their different weights;

(4) Input the state vector S and action vector A of all industrial terminals at time slot t into the critic-eval deep neural network to obtain the value function Q_m(S, A); input the state vector S' and action vector A' of all industrial terminals at time slot t+1 into the critic-target deep neural network to obtain the value function Q_m(S', A');

(5) Based on the reinforcement learning Bellman update formula Q_m(S, A) = r_m(t) + γ·Q_m(S', A'), update the actor-eval deep neural network hyper-parameters θ^π and the critic-eval deep neural network hyper-parameters θ^Q by the stochastic gradient descent method. The actor-eval deep neural network gradient is ∇_{θ^π}J = E_π[ ∇_{θ^π}π(s_m|θ^π) · ∇_{a_m}Q_m(S, A|θ^Q) ], and the critic-eval deep neural network gradient is ∇_{θ^Q}L = E[ (r_m(t) + γ·Q_m(S', A'|θ^Q') − Q_m(S, A|θ^Q)) · ∇_{θ^Q}Q_m(S, A|θ^Q) ], where ∇_{θ^π}J represents the descent gradient of the actor-eval deep neural network, ∇_{θ^Q}L represents the descent gradient of the critic-eval deep neural network, γ represents the discount factor, E represents the mathematical expectation, and π represents the current policy of the actor-eval deep neural network.
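The update of step (5) can be sketched in PyTorch following the standard actor-critic (MADDPG-style) form of the Bellman target y = r_m + γ·Q_m(S', A') and the gradients above; the batch layout, the column slices for agent m, and the optimizers are assumptions, not details fixed by the patent.

```python
# Illustrative update of the eval networks for agent m (MADDPG-style sketch).
import torch
import torch.nn.functional as F

def update_eval_networks(batch, actor_eval, critic_eval, critic_target,
                         actor_opt, critic_opt, gamma, s_slice, a_slice):
    # batch holds joint quantities sampled from the experience pools
    S, A, r, S_next, A_next = batch

    # critic-eval step: minimise (y - Q_m(S, A))^2 with y = r_m + gamma * Q_m(S', A')
    with torch.no_grad():
        y = r + gamma * critic_target(S_next, A_next)
    critic_loss = F.mse_loss(critic_eval(S, A), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # actor-eval step: ascend Q_m(S, A) with agent m's own action replaced by pi(s_m | theta^pi)
    own_action = actor_eval(S[:, s_slice])
    A_pi = torch.cat([A[:, :a_slice.start], own_action, A[:, a_slice.stop:]], dim=1)
    actor_loss = -critic_eval(S, A_pi).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

# assumed optimizers: actor_opt = torch.optim.Adam(actor_eval.parameters(), lr=1e-3)
#                     critic_opt = torch.optim.Adam(critic_eval.parameters(), lr=1e-3)
```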

(6) Update the actor-target deep neural network hyper-parameters with the actor-eval hyper-parameters by θ^π' ← λ·θ^π + (1-λ)·θ^π', and update the critic-target deep neural network hyper-parameters with the critic-eval hyper-parameters by θ^Q' ← λ·θ^Q + (1-λ)·θ^Q', where λ is an update factor and λ ∈ [0,1].
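Step (6) is a soft target-network update; a minimal sketch, assuming λ weights the eval parameters as written above:

```python
# Illustrative soft update: theta_target <- lambda * theta_eval + (1 - lambda) * theta_target.
import torch

@torch.no_grad()
def soft_update(eval_net, target_net, lam=0.01):
    for p_eval, p_target in zip(eval_net.parameters(), target_net.parameters()):
        p_target.mul_(1.0 - lam).add_(lam * p_eval)

# applied to both pairs of networks:
# soft_update(actor_eval, actor_target, lam)
# soft_update(critic_eval, critic_target, lam)
```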

(7) Perform priority-weight experience replay, and repeat steps (1)-(6) until the reward converges to a stable value, obtaining the trained multi-agent deep reinforcement learning model.

The priority-weight experience replay sets two experience pools that store experiences with different weights respectively; the probability of sampling experiences from the different experience pools changes dynamically with the number of training iterations of the neural network model, specifically:

considering that different experiences contribute differently to the convergence of the deep neural network, the descent gradient of each experience is used as the weight of that experience;

the weights of the K experiences are averaged; experiences whose weight is higher than the weight average are high-weight experiences, and experiences whose weight is lower than the weight average are low-weight experiences;

two experience pools A and B are set, where pool A stores the high-weight experiences and pool B stores the low-weight experiences; in the initial training stage, the random sampling probabilities of pools A and B are equal; as the number of training iterations increases, the sampling probability of pool A gradually increases and the sampling probability of pool B gradually decreases; the sampling probability is g_x, x ∈ {A, B}, where 0 ≤ g_x ≤ 1 represents the sampling probability of pool x, g_0 represents the initial sampling probability of pools A and B, Δg_x represents the sampling-probability decay value of pool x, and U represents the number of training iterations.
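One possible realization of this two-pool priority-weight replay is sketched below; the pool capacity, the linear probability schedule g_A = min(1, g_0 + Δg·U), and the use of the gradient magnitude as the experience weight are assumptions consistent with, but not specified by, the description above.

```python
# Illustrative two-pool weighted experience replay (pool A: high weight, pool B: low weight).
import random
from collections import deque

class DualPoolReplay:
    def __init__(self, capacity=10000, g0=0.5, dg=0.005):
        self.pool_a = deque(maxlen=capacity)     # high-weight experiences
        self.pool_b = deque(maxlen=capacity)     # low-weight experiences
        self.g0, self.dg = g0, dg

    def store(self, experiences, weights):
        # weights: e.g. the descent-gradient magnitude associated with each experience
        mean_w = sum(weights) / len(weights)
        for exp, w in zip(experiences, weights):
            (self.pool_a if w > mean_w else self.pool_b).append(exp)

    def sample(self, batch_size, U):
        g_a = min(1.0, self.g0 + self.dg * U)    # sampling probability of pool A grows with U
        batch = []
        for _ in range(batch_size):
            pool = self.pool_a if (random.random() < g_a and self.pool_a) else self.pool_b
            if pool:
                batch.append(random.choice(pool))
        return batch
```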

6. Based on the offline training result, the industrial wireless network performs resource allocation online and processes the industrial tasks, comprising the following steps:

(1) Take the state vector s_m(t) of the current time slot t of industrial terminal m as the input of the actor structure of the m-th agent that has completed offline training, and obtain the output action vector a_m(t);

(2) According to the obtained output action vector a_m(t), industrial terminal m executes the computation decision, allocates the transmit power, computing and energy resources, and processes the industrial task;

(3) Execute steps (1) to (2) for all M industrial terminals in the industrial wireless network to obtain the resource allocation results of the M industrial terminals, and process the industrial tasks according to the resource allocation results.
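The online phase thus reduces to a forward pass through each trained actor. A minimal sketch, assuming the trained actor-eval networks and the observed local states are available as lists:

```python
# Illustrative online execution: each trained actor maps its local state to an action.
import torch

@torch.no_grad()
def online_step(actors, local_states):
    """actors[m]: trained actor-eval network of agent m; local_states[m]: s_m(t)."""
    actions = []
    for actor, s in zip(actors, local_states):
        a = actor(s)          # a_m(t) = {a_{m,o}(t), a_{m,p}(t)}: offloading decision and transmit power
        actions.append(a)
    return actions
```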
