SDN-oriented energy-saving routing method based on Q learning

Document No.: 1508526    Publication date: 2020-02-07

Note: this technique, an SDN-oriented energy-saving routing method based on Q learning, was designed and created by 龙恳, 吴翰, 蒋明均 and 李伟 on 2019-11-05. Its main content is as follows. The invention relates to an SDN-oriented energy-saving routing method based on Q learning, belonging to the technical field of network energy saving. The core method comprises: counting the network-flow demands at the current moment and storing them in a task queue according to their source and target nodes; building and initializing a Q table for each target node of the network flows; taking elements from the queue, feeding each into the Q table matching its destination node, outputting the next-hop node position, obtaining a reward, and updating the Q table; iterating these updates repeatedly; and serving network-flow demands with the trained Q tables. Its characteristic is that the SDN controller directly provides a routing path for each network flow and dynamically switches routers and links on or off, so that the network reaches optimal energy efficiency while still meeting traffic demand.

1. An SDN-oriented energy-saving routing method based on Q learning, characterized in that the method comprises the following steps:

S1: acquiring network-flow information and storing it in a task queue according to source node and target node;

S2: establishing and initializing a Q table for each target node of the network flows;

S3: sequentially taking elements from the queue, feeding each into its corresponding Q table, and outputting the next-hop position;

S4: obtaining a reward value determined by the next-hop position and the current network state, and updating the Q table accordingly;

S5: repeating S3 and S4 until all tasks in the queue are completed, which constitutes one round of training;

S6: resetting the queue and the network state and repeating S3 to S5 for multiple rounds of training to obtain the final Q tables;

S7: feeding each network flow to be processed into its corresponding Q table and stepping through it to obtain a routing path, while comparing the final network state with the initial state to obtain the node and link on/off strategy.

Technical Field

The invention belongs to the technical field of network energy conservation, and relates to an SDN-oriented energy-saving routing method based on Q learning.

Background

With the development of cloud-computing technology, not only has the network scale grown explosively, but network data volumes keep growing as well, so data-center energy consumption has risen sharply. Furthermore, to meet service requirements and provide high-performance, high-reliability routing, most new data centers use rich network topologies such as Fat-Tree and BCube, which introduce many switching devices. This approach yields better network performance but also considerable network energy consumption. Modern large-scale data-center networks are characterized by millions of servers and dense bandwidth, and their flexible network and resource-scheduling requirements are difficult to satisfy in traditional networks. SDN (software-defined networking) arose accordingly: it decouples the control plane and the data plane of traditional network devices and can provide high scalability, flexible resource scheduling, high bandwidth and other network capabilities.

Leveraging the SDN control plane's real-time view of full-network information and its control over routing nodes, an adaptive energy-saving routing algorithm for SDN networks based on link utilization (application publication No. CN106161257A) was proposed, which resets the path of any route whose length exceeds a threshold by acquiring global topology and bandwidth-utilization information in real time. Such methods, however, are too computationally expensive to handle real routing problems in real time.

Disclosure of Invention

In view of this, the present invention provides an energy-saving routing method based on Q learning for an SDN network.

In order to achieve the purpose, the invention provides the following technical scheme:

an SDN-oriented energy-saving routing method based on Q learning comprises the following steps:

S1: acquiring network-flow information and storing it in a task queue according to source node and target node;

S2: establishing and initializing a Q table for each target node of the network flows;

S3: sequentially taking elements from the queue, feeding each into its corresponding Q table, and outputting the next-hop position;

S4: obtaining a reward value determined by the next-hop position and the current network state, and updating the Q table accordingly;

S5: repeating S3 and S4 until all tasks in the queue are completed, which constitutes one round of training;

S6: resetting the queue and the network state and repeating S3 to S5 for multiple rounds of training to obtain the final Q tables;

S7: feeding each network flow to be processed into its corresponding Q table and stepping through it to obtain a routing path, while comparing the final network state with the initial state to obtain the node and link on/off strategy.

The invention has the beneficial effects that:

1. The network state is analyzed globally and each network flow is assigned a routing path in real time, which mitigates the congestion caused by traditional routing protocols such as OSPF (Open Shortest Path First).

2. Q learning determines in real time which nodes and links to switch off, so that the system reaches the lowest energy consumption.

3. An MST is generated for the entire network, and the algorithm is enabled only when a flow demand exceeds the threshold, thereby balancing algorithm overhead against network energy efficiency.

Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.

Drawings

For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:

Fig. 1 is the overall flowchart of the energy-saving routing method based on reinforcement learning according to the present invention.

Detailed Description

The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.

The drawings are provided only to illustrate the invention and not to limit it. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product; those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.

In the drawings of the embodiments of the present invention, the same or similar reference numerals denote the same or similar components. In the description, terms such as "upper", "lower", "left", "right", "front" and "rear" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only, are not to be construed as limiting the present invention, and their specific meaning can be understood by those skilled in the art according to the circumstances.

The SDN controller can acquire the full network state and generate a minimum spanning tree (MST) of the whole network, with the capacity of the MST's smallest edge serving as a threshold. When a network flow arrives, its size is analyzed: if it does not exceed the threshold, the algorithm is not started; if it does, the algorithm is started.
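The gating rule above can be sketched in Python as follows. This is a minimal sketch: the use of Kruskal's algorithm for the MST, and the names build_mst, mst_threshold and algorithm_enabled, are illustrative assumptions rather than details fixed by the text.

```python
def build_mst(adj):
    """Kruskal's MST over an undirected capacity matrix.
    adj[u][v] > 0 is the capacity of edge (u, v); 0 means no edge.
    Returns the MST edges as (capacity, u, v) tuples."""
    n = len(adj)
    edges = sorted((adj[u][v], u, v)
                   for u in range(n) for v in range(u + 1, n) if adj[u][v] > 0)
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def mst_threshold(adj):
    """Threshold = smallest edge capacity appearing in the MST."""
    return min(w for w, _, _ in build_mst(adj))

def algorithm_enabled(flow_demand, adj):
    """Start the energy-saving routing algorithm only for large flows."""
    return flow_demand > mst_threshold(adj)
```

For a small topology this gives, e.g., a threshold equal to the weakest MST edge, so small flows are routed by the default scheme and only larger flows pay the algorithm's overhead.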

The implementation of the algorithm requires the adjacency matrix A of the undirected graph G representing the network in real time. The rows and columns of A index all network nodes, and each matrix entry records the capacity of the edge between the corresponding pair of nodes; the entries change dynamically as traffic passes, and the initial matrix is denoted A0. At the same time, a Q table is generated for each target node. Given the real-time adjacency matrix A and a flow task E(s, d, q, c), where s is the source node, d is the target node, q is the flow demand, and c is the node where the task currently resides (initially c = s), the algorithm outputs the task's next-hop position and updates the Q table.
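The state described above can be sketched as the following data layout. The Task class and the init_state helper are hypothetical names; only the symbols A, A0, Q and E(s, d, q, c) come from the text.

```python
import copy

class Task:
    """Flow task E(s, d, q, c): source s, destination d, demand q,
    current node c (initialized to the source)."""
    def __init__(self, s, d, q):
        self.s, self.d, self.q = s, d, q
        self.c = s  # current position starts at the source node

def init_state(A):
    """Keep the live adjacency matrix A, a frozen snapshot A0, and one
    Q table (n x n, zero-initialized) per possible destination node."""
    n = len(A)
    A0 = copy.deepcopy(A)
    q_tables = {d: [[0.0] * n for _ in range(n)] for d in range(n)}
    return A0, q_tables
```

The per-destination Q tables make row c of table d answer "from node c, heading for d, how good is each next hop", which is exactly the lookup steps S3 and S7 perform.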

S1: reading the network flows to be processed at the current moment, dividing them into tasks according to source node s and target node d, storing the tasks in a task queue T, recording the length of T as L (L > 0), and setting the number of training iterations N;

S2: judging whether L = 0; if so, jumping to S5; otherwise, taking a task E(s, d, q, c) from the head of the queue;

S3: selecting the Q table corresponding to the destination node d of E and, in the row for the task's current node c, finding the node with the maximum Q value as the next-hop node i;

S4: obtaining a reward value determined by the next-hop position i and the current network adjacency matrix A, and updating the Q table accordingly;

S41: if the next-hop position is the destination node (i.e. i = d), the task is completed: set L = L − 1, dequeue it from T, and return the reward r = r_a;

S42: if the next hop is an unreachable node, reinitialize the task (i.e. c = s), append it to the tail of the task queue, and return the reward r = r_b;

S43: if the next-hop node is not yet switched on in the current network G (that is, comparing the current A with the initial value A0, all values in row i and column i are equal), set c = i and return the reward r = r_c;

S44: if the next-hop node is switched on but the link between the current position and the next hop is not yet enabled, i.e. A(c, i) = A0(c, i), set c = i and return the reward r = r_d; where r_a > 0 > r_b > r_c > r_d.
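The four reward cases can be sketched as a single step function. The numeric reward values below are placeholders chosen only to respect the ordering r_a > 0 > r_b > r_c > r_d, and the fall-through case of a link that is already carrying traffic is not fixed by the text, so it is assumed neutral here.

```python
# Placeholder rewards respecting r_a > 0 > r_b > r_c > r_d (values are assumptions).
R_A, R_B, R_C, R_D = 10.0, -1.0, -2.0, -3.0

def step_reward(c, s, d, i, A, A0):
    """One routing step for a task at node c (source s, destination d)
    proposing next hop i. A is the live adjacency matrix, A0 its snapshot.
    Returns (reward, new_current_node, done)."""
    n = len(A)
    if i == d:                                   # S41: destination reached
        return R_A, i, True
    if A0[c][i] == 0:                            # S42: i not adjacent -> reset task
        return R_B, s, False
    node_untouched = all(A[i][j] == A0[i][j] and A[j][i] == A0[j][i]
                         for j in range(n))      # row i and column i unchanged
    if node_untouched:                           # S43: node i still switched off
        return R_C, i, False
    if A[c][i] == A0[c][i]:                      # S44: node on, link (c, i) idle
        return R_D, i, False
    return 0.0, i, False                         # link already active
                                                 # (case not fixed by the text)
```

Because turning on extra nodes and links is penalized, a greedy policy over the learned Q values is steered toward reusing equipment that already carries traffic, which is the source of the energy saving.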

Update the Q value with the Bellman equation:

NewQ(s, a) = Q(s, a) + α[r + γ · max_a′ Q(s′, a′) − Q(s, a)]

where α is the learning rate and γ is the reward decay coefficient.
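The update can be sketched as a one-line tabular rule; the function name q_update and the default values of α and γ are illustrative assumptions.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Tabular Bellman update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q is a 2-D list indexed by [state][action]."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    return Q[s][a]
```

With α = 0 the table never changes, and with γ = 0 only the immediate reward matters; intermediate values trade off learning speed against how far ahead the routing policy looks.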

The task queue is then randomly shuffled and execution jumps back to S2.

S5: after all tasks in the queue T are completed, judging whether N = 0; if so, the training is finished; otherwise, resetting the task queue T and the network state A and jumping back to S2.
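The outer loop S2 to S5 can be sketched as the following skeleton. Here step_fn stands in for one S3/S4 step (next-hop selection, reward, Q update); its signature, and the name train, are assumptions rather than details from the text.

```python
import copy
import random

def train(tasks, A_init, q_tables, step_fn, n_episodes):
    """Run n_episodes training episodes. Each episode resets the task queue
    and the network state A, then pops tasks until all are done; an
    unfinished task goes back to the tail and the queue is reshuffled,
    as in the text. step_fn(task, A, q_tables) returns True when its
    task is finished."""
    for _ in range(n_episodes):
        queue = copy.deepcopy(tasks)      # reset the task queue
        A = copy.deepcopy(A_init)         # reset the network state
        while queue:
            task = queue.pop(0)
            done = step_fn(task, A, q_tables)  # one S3/S4 step
            if not done:
                queue.append(task)        # unfinished: back to the tail
                random.shuffle(queue)     # reshuffle before continuing
    return q_tables
```

Resetting A each episode matters: the rewards in S43/S44 depend on which nodes and links the episode itself has already activated, so stale state would distort training.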

S6: acquiring the network-flow information to be served, storing it in the task queue, and creating a linked-list array R to record the results.

S7: taking tasks from the head of the queue in turn and looking up the Q table corresponding to each task's destination node to obtain the next-hop position; if the next-hop position is reached, modifying the current task state and the network state, appending the task to the tail of the queue, and adding a record to R; if not, resetting the task and placing it at the end of the queue.

S8: the process S7 is repeated until all tasks in the queue are completed, yielding the routing policy result R and the final network state Af. Comparing A0 and Af gives the node and link on/off strategy.
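One possible reading of the comparison in S8 can be sketched as follows. The interpretation that an unchanged capacity entry marks an unused link, and that a node with no used links can be powered down, is an assumption; the text only states that comparing A0 and Af yields the on/off strategy.

```python
def switch_off_plan(A0, Af):
    """Compare the initial matrix A0 with the final matrix Af. A link whose
    capacity entry never changed carried no traffic and can be switched off;
    a node none of whose links carried traffic can itself be powered down."""
    n = len(A0)
    # links that exist but whose residual capacity never changed -> unused
    links_off = [(u, v) for u in range(n) for v in range(u + 1, n)
                 if A0[u][v] > 0 and A0[u][v] == Af[u][v]]
    # nodes incident to at least one used (changed) link must stay on
    active = {x for u in range(n) for v in range(n)
              if A0[u][v] > 0 and A0[u][v] != Af[u][v] for x in (u, v)}
    nodes_off = [u for u in range(n) if u not in active]
    return links_off, nodes_off
```

The SDN controller would then push the routing entries from R and issue power-down commands for the returned links and nodes.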

Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
