Network control method and equipment

Document No.: 1864698 Publication date: 2021-11-19 Views: 27 Language: Chinese

Reading note: This technique, "Network control method and equipment", was designed and created by 王凤华, 徐晖, 侯云静 and 覃晨 on 2020-05-15. Main content: An embodiment of the invention provides a network control method and equipment, the method comprising: sending the working state parameters of a network node to a control device, so that the control device updates the network topology and resource view according to the working state parameters of the network node. In the embodiment of the invention, centralized control gives a clear picture of the topology and resource conditions of the whole network, so that more reasonable path and resource-reservation decisions can be made.

1. A network control method, applied to a network node, characterized by comprising the following steps:

sending the working state parameters of the network node to a control device, so that the control device updates the network topology and resource view according to the working state parameters of the network node.

2. The method of claim 1, wherein the working state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocable bandwidth; inherent buffer; allocable buffer; best-effort buffer; allocated buffer; remaining allocable buffer.

3. The method of claim 1, wherein sending the working state parameters of the network node to a control device comprises:

sending the working state parameters of the network node to the control device through periodic heartbeat messages.

4. The method of claim 1, further comprising:

after receiving the flow table from the control device, updating the flow table according to the service level of the data flow, and inserting or deleting a forwarding path of the data flow in the flow table of the relevant level to obtain an execution result of the hierarchical flow table;

notifying the control device of an execution result of the hierarchical flow table.

5. The method of claim 1, further comprising:

after receiving the resource reservation information from the control device, reserving or canceling the resource according to the stream identifier to obtain an execution result of the resource reservation;

and notifying the execution result of the resource reservation to the control equipment.

6. The method according to any one of claims 1 to 5, further comprising:

after receiving a data flow from a data source device, selecting a flow table according to the level of the data flow and performing matching;

performing resource reservation on the network node according to the flow identifier of the data flow.

7. The method of claim 6, wherein before selecting a flow table according to the level of the data flow and performing matching, the method further comprises:

determining, according to the flow identifier and/or the flow type of the data flow, whether the data flow needs to be replicated;

if the data flow needs to be replicated, replicating each packet of the data flow to form multiple data flows, and passing them to flow-table matching;

if replication is not needed, proceeding directly to flow-table matching.

8. The method according to claim 6 or 7, characterized in that the method further comprises:

determining whether the network node is the last hop;

if the network node is the last hop, determining from the packet sequence number in the flow identifier whether a packet is a duplicate, and deleting the packet if it is a duplicate;

determining the arrival time of the data stream according to the stream type, and setting a sending timer according to the timestamp;

when the sending timer expires, sending the data stream to the next hop.

9. A network control method, applied to a control device, characterized by comprising the following steps:

acquiring working state parameters of network nodes;

and updating the network topology and the resource view according to the working state parameters of the network nodes.

10. The method of claim 9, wherein the working state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocable bandwidth; inherent buffer; allocable buffer; best-effort buffer; allocated buffer; remaining allocable buffer.

11. The method of claim 9, wherein the obtaining the operating state parameters of the network node comprises:

receiving a periodic heartbeat message sent by the network node, wherein the periodic heartbeat message carries the working state parameters of the network node.

12. The method of claim 9, further comprising:

receiving a first message from an application device, the first message requesting service analysis;

generating a flow table according to the first message;

sending the flow table to the network node.

13. The method of claim 12, wherein the first message comprises one or more of: information of a source end, information of a destination end, information of a data stream, a service application type, and a service application category identifier.

14. The method of claim 12, wherein generating a flow table from the first message comprises:

the service analysis module identifies, according to the first message, the type of service requested by the application device;

if the requested service type is a resource request, the service analysis module sends a second message to the path calculation module;

the path calculation module acquires, from the topology management module according to the second message, the network topology and resource view and the reserved resources of the network nodes;

the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network nodes, and estimates the end-to-end delay of each path;

the path calculation module sends the set of paths whose delay is smaller than the maximum delay of the data stream to the resource calculation module;

the resource calculation module acquires the network topology and resource view and the reserved resources of the network nodes from the topology management module, performs resource estimation for the paths in the path set, selects a path meeting the resource requirement, and sends the information of the path to the flow table generation module;

the flow table generation module generates a flow table according to the information of the path.

15. The method of claim 14, further comprising:

if no path meets the resource requirement, the path calculation module notifies the service analysis module of the result;

and the service analysis module feeds the result back to the application equipment.

16. The method of claim 15, further comprising:

the service analysis module receives a third message from the application device, wherein the third message indicates bearer revocation, and the third message carries a data stream identifier;

the service analysis module informs a topology management module to release resources related to the data stream identification, and updates a network topology and a resource view;

the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identification.

17. The method of claim 14, wherein the path calculation module sending the set of paths whose delay is smaller than the maximum delay of the data stream to the resource calculation module comprises:

the path calculation module determines the set of paths whose delay is smaller than the maximum delay of the data stream;

the path calculation module determines, for each path in the set, the difference between the delay of the path and the maximum delay of the data stream;

the path calculation module sorts the paths in ascending order of the difference and sends them to the resource calculation module.

18. The method of claim 17, wherein the service analysis module sending the second message to the path calculation module comprises:

the service analysis module maps the service application category identifier, according to an established service model base, to one or more of: service peak packet rate, maximum packet length, end-to-end delay upper bound, packet-loss upper bound, and network bandwidth; and sends these, together with the source end, destination end, data stream identifier, service application type, and service application category identifier, to the path calculation module.

19. A network node, comprising:

and the sending module is used for sending the working state parameters of the network nodes to the control equipment so that the control equipment updates the network topology and the resource view according to the working state parameters of the network nodes.

20. A network node, comprising: a first transceiver and a first processor;

the first transceiver transmits and receives data under control of the first processor;

the first processor reads a program in a memory to perform the following operations: and sending the working state parameters of the network nodes to control equipment so that the control equipment updates the network topology and the resource view according to the working state parameters of the network nodes.

21. A control apparatus, characterized by comprising:

the acquisition module is used for acquiring the working state parameters of the network nodes;

and the updating module is used for updating the network topology and the resource view according to the working state parameters of the network nodes.

22. A control apparatus, characterized by comprising: a second transceiver and a second processor;

the second transceiver transmits and receives data under the control of the second processor;

the second processor reads a program in the memory to perform the following operations: acquiring working state parameters of network nodes; and updating the network topology and the resource view according to the working state parameters of the network nodes.

23. A communication device, comprising: a processor, a memory, and a program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the network control method according to any one of claims 1 to 18.

24. A readable storage medium, characterized in that the readable storage medium has stored thereon a program which, when executed by a processor, implements the steps of the network control method according to any one of claims 1 to 18.

Technical Field

The embodiment of the invention relates to the technical field of communication, in particular to a network control method and equipment.

Background

The DetNet working group of the Internet Engineering Task Force (IETF) currently focuses on the overall architecture, the data-plane specification, the data traffic information model, and the YANG model; however, no new specification has been proposed for network control, and only the relevant SDN architecture and control of IETF RFC 7426 are followed. The control plane collects the topology of the network system, and the management plane monitors faults and real-time information of the network devices. The control plane performs path computation according to the topology and the management-plane information and generates flow tables; the whole process does not consider resource occupancy, so deterministic performance such as zero packet loss, zero jitter, and low delay cannot be guaranteed.

Disclosure of Invention

An object of the embodiments of the present invention is to provide a network control method and device that solve the problem that deterministic performance such as zero packet loss, zero jitter, and low delay cannot be guaranteed when resource occupancy is not considered.

The embodiment of the invention provides a network control method, which is applied to a network node and comprises the following steps:

sending the working state parameters of the network node to a control device, so that the control device updates the network topology and resource view according to the working state parameters of the network node.

Optionally, the working state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocable bandwidth; inherent buffer; allocable buffer; best-effort buffer; allocated buffer; remaining allocable buffer.

Optionally, sending the working state parameters of the network node to a control device includes:

sending the working state parameters of the network node to the control device through periodic heartbeat messages.
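The heartbeat-based reporting described above can be sketched as follows. This is a minimal Python illustration; the field names (`device_type`, `allocable_bandwidth_mbps`, etc.) are hypothetical stand-ins for the working state parameters listed earlier, not names taken from the specification.

```python
import time

def build_heartbeat(node_id, state):
    """Package a node's working state parameters into a periodic heartbeat."""
    return {"node_id": node_id, "timestamp": time.time(), "state": state}

def apply_heartbeat(view, heartbeat):
    """Control-device side: refresh the topology and resource view
    for the reporting node from its latest heartbeat."""
    view[heartbeat["node_id"]] = heartbeat["state"]
    return view

# One reporting round: the node sends, the control device updates its view.
view = {}
hb = build_heartbeat("node-1", {
    "device_type": "switch",            # hypothetical field names
    "inherent_bandwidth_mbps": 1000,
    "allocable_bandwidth_mbps": 800,
    "allocated_bandwidth_mbps": 200,
})
apply_heartbeat(view, hb)
print(view["node-1"]["allocable_bandwidth_mbps"])  # 800
```

In a real deployment the heartbeat would travel over the control channel on a fixed period; here the "send" and "receive" halves are simply two function calls.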

Optionally, the method further comprises:

after receiving the flow table from the control device, updating the flow table according to the service level of the data flow, and inserting or deleting a forwarding path of the data flow in the flow table of the relevant level to obtain an execution result of the hierarchical flow table;

notifying the control device of an execution result of the hierarchical flow table.

Optionally, the method further comprises:

after receiving the resource reservation information from the control device, reserving or canceling the resource according to the stream identifier to obtain an execution result of the resource reservation;

and notifying the execution result of the resource reservation to the control equipment.

Optionally, the method further comprises:

after receiving a data flow from a data source device, selecting a flow table according to the level of the data flow and performing matching;

performing resource reservation on the network node according to the flow identifier of the data flow.

Optionally, before selecting a flow table according to the level of the data flow and performing matching, the method further includes:

determining, according to the flow identifier and/or the flow type of the data flow, whether the data flow needs to be replicated;

if the data flow needs to be replicated, replicating each packet of the data flow to form multiple data flows, and passing them to flow-table matching;

if replication is not needed, proceeding directly to flow-table matching.
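The replication decision above can be sketched as follows. This is a hedged Python illustration: the replication policy, expressed here as sets of flow identifiers and flow types, is an assumption, since the specification does not fix how the decision is configured.

```python
def needs_replication(flow_id, flow_type, replicated_flows, replicated_types):
    """Decide replication from the flow identifier and/or the flow type."""
    return flow_id in replicated_flows or flow_type in replicated_types

def process_packet(packet, replicated_flows, replicated_types, replica_count=2):
    """Replicate each packet of a flow that requires it, then hand all
    copies to flow-table matching; otherwise match the single packet."""
    if needs_replication(packet["flow_id"], packet["flow_type"],
                         replicated_flows, replicated_types):
        return [dict(packet, replica=i) for i in range(replica_count)]
    return [packet]

pkts = process_packet({"flow_id": "f1", "flow_type": "critical"},
                      replicated_flows={"f1"}, replicated_types=set())
print(len(pkts))  # 2
```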

Optionally, the method further comprises:

determining whether the network node is the last hop;

if the network node is the last hop, determining from the packet sequence number in the flow identifier whether a packet is a duplicate, and deleting the packet if it is a duplicate;

determining the arrival time of the data stream according to the stream type, and setting a sending timer according to the timestamp;

when the sending timer expires, sending the data stream to the next hop.
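The last-hop behavior (duplicate elimination by sequence number, then timed release) might look like this in outline; the heap-based timer queue is an implementation choice for the sketch, not part of the specification.

```python
import heapq
import itertools

class LastHopProcessor:
    """Last-hop handling: drop duplicate packets by (flow id, sequence
    number), then release each remaining packet when its timer expires."""
    def __init__(self):
        self.seen = set()               # (flow_id, seq) already accepted
        self.timers = []                # min-heap of (send_time, tie, packet)
        self._tie = itertools.count()   # tie-breaker so dicts never compare

    def receive(self, packet, send_time):
        key = (packet["flow_id"], packet["seq"])
        if key in self.seen:
            return False                # duplicate packet: delete it
        self.seen.add(key)
        heapq.heappush(self.timers, (send_time, next(self._tie), packet))
        return True

    def due(self, now):
        """Return packets whose send timer has expired, in time order."""
        out = []
        while self.timers and self.timers[0][0] <= now:
            out.append(heapq.heappop(self.timers)[2])
        return out

p = LastHopProcessor()
p.receive({"flow_id": "f1", "seq": 1}, send_time=10)
p.receive({"flow_id": "f1", "seq": 1}, send_time=10)   # duplicate, dropped
print(len(p.due(10)))  # 1
```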

In a second aspect, an embodiment of the present invention provides a network control method, applied to a control device, including:

acquiring working state parameters of network nodes;

and updating the network topology and the resource view according to the working state parameters of the network nodes.

Optionally, the working state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocable bandwidth; inherent buffer; allocable buffer; best-effort buffer; allocated buffer; remaining allocable buffer.

Optionally, the acquiring the operating state parameter of the network node includes:

receiving a periodic heartbeat message sent by the network node, wherein the periodic heartbeat message carries the working state parameters of the network node.

Optionally, the method further comprises:

receiving a first message from an application device, the first message requesting service analysis;

generating a flow table according to the first message;

sending the flow table to the network node.

Optionally, the first message includes one or more of: information of a source end, information of a destination end, information of a data stream, a service application type, and a service application category identifier.

Optionally, generating a flow table according to the first message includes:

the service analysis module identifies, according to the first message, the type of service requested by the application device;

if the requested service type is a resource request, the service analysis module sends a second message to the path calculation module;

the path calculation module acquires, from the topology management module according to the second message, the network topology and resource view and the reserved resources of the network nodes;

the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network nodes, and estimates the end-to-end delay of each path;

the path calculation module sends the set of paths whose delay is smaller than the maximum delay of the data stream to the resource calculation module;

the resource calculation module acquires the network topology and resource view and the reserved resources of the network nodes from the topology management module, performs resource estimation for the paths in the path set, selects a path meeting the resource requirement, and sends the information of the path to the flow table generation module;

the flow table generation module generates a flow table according to the information of the path.
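The module pipeline above can be condensed into one function. This is a sketch under strong simplifications (the delay and resource estimators are passed in as plain callables, and the first feasible path is chosen); it is not the specification's algorithm.

```python
def generate_flow_table(request, paths, estimate_delay, estimate_resources):
    """Filter candidate paths first by the flow's maximum delay, then by
    available resources, and turn the chosen path into per-node entries."""
    candidates = [p for p in paths
                  if estimate_delay(p) < request["max_delay_ms"]]
    feasible = [p for p in candidates
                if estimate_resources(p) >= request["bandwidth_mbps"]]
    if not feasible:
        return None         # reported back to the service analysis module
    path = feasible[0]      # simplification: take the first feasible path
    return [{"node": node, "flow_id": request["flow_id"], "next_hop": nxt}
            for node, nxt in zip(path, path[1:] + [None])]

paths = [["a", "b", "c"], ["a", "c"]]
table = generate_flow_table(
    {"flow_id": "f1", "max_delay_ms": 12, "bandwidth_mbps": 50},
    paths,
    estimate_delay=lambda p: 5 * len(p),      # toy estimators
    estimate_resources=lambda p: 100)
print([e["node"] for e in table])  # ['a', 'c']
```

The `None` return corresponds to the failure branch, where the result is fed back to the application device.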

Optionally, the method further comprises:

if no path meets the resource requirement, the path calculation module notifies the service analysis module of the result;

and the service analysis module feeds the result back to the application equipment.

Optionally, the method further comprises:

the service analysis module receives a third message from the application device, wherein the third message indicates bearer revocation, and the third message carries a data stream identifier;

the service analysis module informs a topology management module to release resources related to the data stream identification, and updates a network topology and a resource view;

the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identification.

Optionally, the path calculation module sending the set of paths whose delay is smaller than the maximum delay of the data stream to the resource calculation module includes:

the path calculation module determines the set of paths whose delay is smaller than the maximum delay of the data stream;

the path calculation module determines, for each path in the set, the difference between the delay of the path and the maximum delay of the data stream;

the path calculation module sorts the paths in ascending order of the difference and sends them to the resource calculation module.
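Interpreting the difference as the margin between the flow's maximum delay and the path's estimated delay (one plausible reading of the steps above), the sorting can be sketched as:

```python
def rank_paths(path_delays, max_delay):
    """path_delays: {path: estimated end-to-end delay}. Keep paths whose
    delay is under max_delay and sort them by the difference between the
    maximum delay and the path delay, smallest difference first."""
    margin = {p: max_delay - d for p, d in path_delays.items() if d < max_delay}
    return sorted(margin, key=margin.get)

print(rank_paths({"A": 8, "B": 5, "C": 12}, 10))  # ['A', 'B']
```

Path C is excluded because its delay exceeds the maximum; A precedes B because its margin (2) is smaller than B's (5).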

Optionally, the service analysis module sending the second message to the path calculation module includes:

the service analysis module maps the service application category identifier, according to an established service model base, to one or more of: service peak packet rate, maximum packet length, end-to-end delay upper bound, packet-loss upper bound, and network bandwidth; and sends these, together with the source end, destination end, data stream identifier, service application type, and service application category identifier, to the path calculation module.
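The service model base lookup might be sketched as a simple table; the category identifier and all parameter values below are invented for illustration.

```python
# Hypothetical service model base; the category id and values are invented.
SERVICE_MODEL_BASE = {
    "industrial-control": {
        "peak_packet_rate_pps": 1000,
        "max_packet_bytes": 256,
        "e2e_delay_upper_ms": 2,
        "loss_rate_upper": 1e-6,
        "bandwidth_mbps": 2,
    },
}

def build_second_message(first_message):
    """Map the service application category identifier to QoS parameters
    and merge them with the fields carried over from the first message."""
    qos = SERVICE_MODEL_BASE[first_message["category_id"]]
    return {**first_message, **qos}

msg = build_second_message({"source": "s1", "dest": "d1",
                            "flow_id": "f1",
                            "category_id": "industrial-control"})
print(msg["e2e_delay_upper_ms"])  # 2
```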

In a third aspect, an embodiment of the present invention provides a network node, including:

and the sending module is used for sending the working state parameters of the network nodes to the control equipment so that the control equipment updates the network topology and the resource view according to the working state parameters of the network nodes.

In a fourth aspect, an embodiment of the present invention provides a network node, including: a first transceiver and a first processor;

the first transceiver transmits and receives data under control of the first processor;

the first processor reads a program in a memory to perform the following operations: and sending the working state parameters of the network nodes to control equipment so that the control equipment updates the network topology and the resource view according to the working state parameters of the network nodes.

In a fifth aspect, an embodiment of the present invention provides a control apparatus, including:

the acquisition module is used for acquiring the working state parameters of the network nodes;

and the updating module is used for updating the network topology and the resource view according to the working state parameters of the network nodes.

In a sixth aspect, an embodiment of the present invention provides a control apparatus, including: a second transceiver and a second processor;

the second transceiver transmits and receives data under the control of the second processor;

the second processor reads a program in the memory to perform the following operations: acquiring working state parameters of network nodes; and updating the network topology and the resource view according to the working state parameters of the network nodes.

In a seventh aspect, an embodiment of the present invention provides a communication device, including: a processor, a memory, and a program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the network control method according to the first aspect or the second aspect.

In an eighth aspect, an embodiment of the present invention provides a readable storage medium, on which a program is stored, wherein the program, when executed by a processor, implements the steps of the network control method according to the first aspect or the second aspect.

In the embodiment of the invention, centralized control gives a clear picture of the topology and resource conditions of the whole network, so that more reasonable path and resource-reservation decisions can be made.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

FIG. 1 is an SDN architecture diagram;

FIG. 2 is a schematic diagram of a TSN within the IEEE 802.1 standard framework;

FIG. 3 is a flowchart of a network control method according to an embodiment of the present invention;

FIG. 4 is a second flowchart of a network control method according to an embodiment of the present invention;

FIG. 5 is a system architecture diagram according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of a network management process according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of a network control flow according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of a resource reservation process according to an embodiment of the present invention;

FIG. 9 is a flow chart illustrating data processing according to an embodiment of the present invention;

FIG. 10 is one of the schematic diagrams of a network node according to an embodiment of the present invention;

FIG. 11 is a second schematic diagram of a network node according to an embodiment of the present invention;

FIG. 12 is one of the schematic diagrams of a control apparatus according to an embodiment of the present invention;

FIG. 13 is a second schematic diagram of a control apparatus according to an embodiment of the present invention;

FIG. 14 is a schematic diagram of a communication device according to an embodiment of the present invention.

Detailed Description

In order to facilitate understanding of the embodiments of the present invention, the following technical points are introduced below:

1) Time-Sensitive Networking (TSN):

TSN provides distributed time synchronization and deterministic communication over standard Ethernet. Standard Ethernet is by nature a non-deterministic network, but industrial applications require determinism: a set of packets must arrive at its destination completely, in real time, and deterministically. The TSN standards therefore keep all network devices time-synchronized and use central control to perform time-slot planning, reservation, and fault-tolerant protection at the data link layer. TSN includes three basic components: time synchronization; communication path selection, reservation, and fault tolerance; and scheduling and traffic shaping.

Time synchronization: the time in the TSN network is transmitted to the Ethernet equipment from a central time source through the network, and the high-precision synchronization of the network equipment and the time of the central clock source is kept through the high-frequency round-trip delay measurement. Namely IEEE 1588.

Communication path selection, reservation and fault tolerance: TSN computes paths through the network based on the network topology, provides explicit path control and bandwidth reservation for data streams, and provides redundant transmission for data streams according to the network topology.

Scheduling and traffic shaping: Time-Aware Shapers (TAS) enable TSN switches to control queued traffic. Ethernet frames are identified and assigned priority-based Virtual Local Area Network (VLAN) tags; each queue is given a time window in a schedule, and the messages of a queue are transmitted out of the egress only within its predetermined window. The other queues are locked during that window, which eliminates the impact of aperiodic data on periodic data. The delay of each switch is therefore deterministic and knowable, and message delay within a TSN network is guaranteed.
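The time-window gating performed by a TAS can be illustrated with a toy gate check; the schedule format and cycle length below are illustrative, not taken from IEEE 802.1Qbv.

```python
def gate_open(schedule, queue, t_us, cycle_us):
    """schedule: {queue: (window_start_us, window_end_us)} within one cycle.
    A queue may transmit only while its gate window is open; all other
    queues are locked during that window."""
    start, end = schedule[queue]
    phase = t_us % cycle_us          # position within the repeating cycle
    return start <= phase < end

schedule = {7: (0, 200), 0: (200, 1000)}   # queue 7 carries critical traffic
print(gate_open(schedule, 7, 1100, 1000))  # True: phase 100 falls in [0, 200)
```

Because the schedule repeats every cycle, the transmission opportunity of each queue, and hence the per-switch delay, is knowable in advance.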

2) Deterministic network (DetNet):

A DetNet network aims to implement, across Layer 2 bridged segments and Layer 3 routed segments, transmission paths that can provide worst-case bounds on delay, packet loss, and jitter, and to control and reduce end-to-end delay. DetNet extends the technology developed by TSN from the data link layer to routing.

The DetNet working group of the Internet Engineering Task Force (IETF) currently focuses on the overall architecture, the data-plane specification, the data traffic information model, and the YANG model; however, no new specification is provided for network control, and only the control method of Software Defined Networking (SDN) in IETF RFC 7426 is adopted.

Referring to FIG. 1, an SDN architecture is illustrated, and the relevant modules and their interworking principles are explained here. SDN divides the network into different planes according to service function; from top to bottom the planes are as follows:

Application Plane: the plane in which the applications and services that define the behavior of the network reside.

Control Plane: determines how one or more network devices forward packets, and sends those decisions to the network devices as flow tables for execution. The control plane interacts mainly with the forwarding plane; it pays little attention to a device's operational plane unless it needs to know the current state and capabilities of a particular port.

Management Plane: responsible for monitoring, configuring, and maintaining the network devices, e.g., making decisions based on the state of the network devices. The management plane interacts mainly with a device's operational plane.

Forwarding Plane: the functional modules of a network device that process packets on the data path according to instructions received from the control plane. Forwarding-plane operations include, but are not limited to, forwarding, dropping, and modifying packets.

Operational Plane: responsible for managing the operational state of the network device in which it resides, e.g., whether the device is active or inactive, the number of available ports, and the state of each port. The operational plane manages network device resources such as ports and memory.

In the original SDN network, therefore, a request to forward a data packet is received from the application plane or the forwarding plane; the control plane performs route computation over the assembled network topology, generates a flow table, and issues it to the device's forwarding plane. The forwarding plane works as follows:

Flow table matching: header fields serve as match fields, including the ingress port, Media Access Control (MAC) address, virtual local area network ID (VLAN ID), Internet Protocol (IP) address, and so on. The entries of the locally stored flow table are matched in priority order, and the highest-priority matching entry is taken as the result. Multi-stage flow tables reduce overhead: flow-table features are extracted and the matching process is decomposed into multiple steps, forming a pipeline and reducing the number of flow-table records. Forwarding rules are organized into different flow tables; rules within the same table are matched by priority, jumping from lower-numbered to higher-numbered tables while statistics are updated, and the accumulated instruction set is modified and executed once matching completes. This multi-flow-table pipeline architecture reduces the number of flow entries, but it increases matching delay and raises the algorithmic complexity of generating and maintaining the flow tables.
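Priority-ordered matching of a single flow table can be sketched as follows (an OpenFlow-style toy model; the entry format is invented for illustration):

```python
def match_flow_table(flow_table, packet_fields):
    """Match header fields against flow entries and return the
    highest-priority entry whose match fields all agree with the packet."""
    best = None
    for entry in flow_table:
        if all(packet_fields.get(k) == v for k, v in entry["match"].items()):
            if best is None or entry["priority"] > best["priority"]:
                best = entry
    return best

table = [
    {"priority": 10, "match": {"in_port": 1}, "actions": ["output:2"]},
    {"priority": 20, "match": {"in_port": 1, "vlan_id": 100},
     "actions": ["output:3"]},
]
pkt = {"in_port": 1, "vlan_id": 100}
print(match_flow_table(table, pkt)["actions"])  # ['output:3']
```

Both entries match the packet, but the more specific entry wins because its priority is higher, mirroring the "highest-priority matching entry" rule above.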

Instruction execution: the instructions of the matched flow entries form a forwarding execution set, initially empty; each entry matched along the pipeline accumulates further actions until no Goto-Table instruction remains, at which point the accumulated instruction set is executed as a whole. Instructions include forwarding, dropping, queuing, field modification, and so on. Forwarding can specify ports, including physical ports, logical ports, and reserved ports; field modification includes group processing of packets via a group table, packet header rewriting, TTL modification, and the like. Different processing combinations introduce different delays.

3) A plurality of end-to-end paths are set up, and the sending end measures each path periodically for packet loss, delay, and jitter; by accumulating these periodic measurements, an estimation model of end-to-end delay and end-to-end packet loss can be built for each path. When the sending end sends a packet, a scheduling module consults the delay and loss estimation models and selects one path as the packet's sending path according to an algorithm such as shortest delay, minimum packet loss, or minimum jitter.
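The per-path estimation model and path selection described in this item can be sketched as a rolling measurement window with shortest-mean-delay selection; the window size and the selection rule are illustrative choices among the algorithms named above.

```python
from collections import deque

class PathEstimator:
    """Rolling per-path delay estimate built from periodic measurements."""
    def __init__(self, window=100):
        self.samples = {}                      # path -> deque of delays (ms)
        self.window = window

    def record(self, path, delay_ms):
        self.samples.setdefault(path, deque(maxlen=self.window)).append(delay_ms)

    def mean_delay(self, path):
        s = self.samples[path]
        return sum(s) / len(s)

    def best_path(self):
        """Shortest estimated delay, one of the selection rules named above."""
        return min(self.samples, key=self.mean_delay)

pe = PathEstimator()
for d in (10, 20):
    pe.record("p1", d)
pe.record("p2", 12)
print(pe.best_path())  # p2 (mean 12 ms vs 15 ms)
```

A loss or jitter model would follow the same pattern, recording per-path loss events or delay variation instead of raw delays.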

4) SDN control equipment can be used to find a currently suitable path for a specific service: a flow table is generated for each node on the path and issued to the switches, and data flows are processed hop by hop according to the flow tables, which makes the end-to-end route of a data flow deterministic and makes its delay as deterministic as possible.

5) The sending end assigns a Quality of Service (QoS) class to each data stream, typically one of 8 classes. On receiving a packet, a switch reads its class and inserts the packet into the corresponding queue. The switch processes high-priority packets first; packets of equal priority are processed in arrival order. Each packet occupies BUFFER resources according to its priority. Because switch BUFFER resources are limited, when a high-priority packet arrives and the BUFFER is already full, the switch drops the lowest-priority packet and allocates the freed BUFFER to the newly arrived high-priority packet. This keeps the delay and jitter of high-priority packets as low and as small as possible.
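The buffer-preemption behaviour above can be sketched as follows; the packet representation and capacity are illustrative assumptions.

```python
# Assumed model: packets are (priority, seq) tuples, higher number =
# higher priority; when the buffer is full, the lowest-priority queued
# packet is evicted for a higher-priority arrival.
def enqueue(buffer, capacity, packet):
    """Returns the dropped packet, or None if nothing was dropped."""
    if len(buffer) < capacity:
        buffer.append(packet)
        return None
    lowest = min(buffer, key=lambda p: p[0])
    if packet[0] > lowest[0]:      # new packet outranks the victim
        buffer.remove(lowest)
        buffer.append(packet)
        return lowest
    return packet                  # buffer full, new packet itself dropped

buf = [(7, 1), (0, 2), (3, 3)]
dropped = enqueue(buf, 3, (5, 4))  # preempts the priority-0 packet
```

Note the trade-off the text points out: when every queued packet is already high priority, only arriving packets can be dropped.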

6) In the prior art, packet loss is mostly repaired by packet-loss feedback from the receiving end and retransmission by the transmitting end, which adds a delay of several Round-Trip Times (RTT); alternatively, Forward Error Correction (FEC) redundancy is added to packets and coding/decoding is performed at both ends, which introduces a certain processing delay.

The prior art has the following disadvantages:

1) TSN technique:

the TSN provides a set of general time-sensitive mechanisms for the MAC layer of the Ethernet protocol, ensuring the time determinism of Ethernet data communication while enabling interoperation between networks using different protocols. Referring to fig. 2, the TSN does not cover the entire network: TSN is only a protocol standard for the second layer of the Ethernet communication protocol model, namely the data link layer (more precisely, the MAC layer). Therefore, the TSN supports only bridged networks and does not support end-to-end data flows that traverse routers.

3) In the prior art, priority processing improves the performance of high-priority data streams. However, when a highly time-sensitive data flow shares link and switch node resources with background flows of the same or higher level, whether its packets are lost to congestion depends heavily on the traffic characteristics of those same-level and higher-level flows. The queuing delay within the end-to-end delay of a packet therefore cannot be determined, since it likewise depends on the flows sharing the switch resources with that packet, and the packet's delay jitter may be relatively large. Moreover, when the queued traffic is all high priority, only arriving packets can be dropped, which is the main cause of congestion packet loss. Therefore, the prior art cannot guarantee that a data flow is free of congestion and packet loss.

4) In the prior art, parameters such as the end-to-end packet loss rate and delay are monitored by the network, and path selection performs delay estimation in the expectation that the end-to-end delay will reach the receiving end as predicted. However, the measured parameters are accumulated values representing performance over some past period, while network conditions change from moment to moment, so such estimates are inaccurate. In addition, existing controllers neither calculate the resources a data stream requires nor perform maximum resource reservation hop by hop. The actual transmission performance of a data stream therefore depends heavily on the characteristics, level, and so on of the background traffic at the time, and it cannot be guaranteed that the delay of the data stream stays below a given value.

5) In the prior art, packet-loss feedback compensation and redundancy coding introduce processing delay that highly time-sensitive data stream applications cannot tolerate; moreover, the prior art still cannot guarantee recovery from link packet loss.

6) In the prior art, dedicated lines are used to guarantee absolutely low delay and near-zero packet loss, but path resources and switch resources cannot be shared dynamically, so time-sensitive and non-time-sensitive services cannot coexist.

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The terms "comprises," "comprising," or any other variation thereof, in the description and claims of this application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, "and/or" in the description and claims denotes at least one of the connected objects; for example, A and/or B covers three cases: A alone, B alone, and both A and B.

In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of "exemplary" or "for example" is intended to present related concepts in a concrete fashion.

The techniques described herein are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and may also be used in various wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-Carrier Frequency-Division Multiple Access (SC-FDMA), and other systems.

The terms "system" and "network" are often used interchangeably. CDMA systems may implement radio technologies such as CDMA2000, Universal Terrestrial Radio Access (UTRA), and so on. UTRA includes Wideband CDMA (WCDMA) and other CDMA variants. TDMA systems may implement radio technologies such as the Global System for Mobile communications (GSM). OFDMA systems may implement radio technologies such as Ultra Mobile Broadband (UMB), evolved UTRA (E-UTRA), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, and so on. UTRA and E-UTRA are parts of the Universal Mobile Telecommunications System (UMTS). LTE and more advanced LTE (e.g., LTE-A) are new UMTS releases that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, and GSM are described in documents from the organization named "3rd Generation Partnership Project" (3GPP). CDMA2000 and UMB are described in documents from the organization named "3rd Generation Partnership Project 2" (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned above, as well as for other systems and radio technologies.

Referring to fig. 3, an embodiment of the present invention provides a network control method, where an execution subject of the method is a network node (alternatively referred to as a forwarding device, a switch, or the like), and the method includes: step 301.

Step 301: and sending the working state parameters of the network nodes to control equipment so that the control equipment updates the network topology and the resource view according to the working state parameters of the network nodes.

Optionally, the network node may send the operating state parameter of the network node to the control device through a periodic heartbeat message.

The operating state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocable buffer; best-effort buffer; allocated buffer; and the remaining allocated buffer.
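The periodic heartbeat carrying these parameters might look as follows; the JSON encoding and the field names are assumptions for illustration, since the message format is not specified here.

```python
import json

# Hypothetical field names for the operating state parameters listed above.
KNOWN_FIELDS = {"device_type", "inherent_bandwidth", "allocable_bandwidth",
                "best_effort_bandwidth", "allocated_bandwidth",
                "remaining_allocated_bandwidth", "inherent_buffer",
                "allocable_buffer", "best_effort_buffer",
                "allocated_buffer", "remaining_allocated_buffer"}

def build_heartbeat(node_id, state):
    # "One or more of": include only recognised parameters.
    payload = {"node": node_id,
               "state": {k: v for k, v in state.items() if k in KNOWN_FIELDS}}
    return json.dumps(payload)

msg = build_heartbeat("sw-1", {"device_type": "switch",
                               "allocable_bandwidth": 800,
                               "uptime": 12345})   # unknown key is dropped
decoded = json.loads(msg)
```

The control device would parse each heartbeat and feed the state into its topology and resource view.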

In some embodiments, the method further comprises: after receiving the flow table from the control device, updating the flow table according to the service level of the data flow, and inserting or deleting a forwarding path of the data flow in the flow table of the relevant level to obtain an execution result of the hierarchical flow table; notifying the control device of an execution result of the hierarchical flow table.

In some embodiments, the method further comprises: after receiving the resource reservation information from the control device, reserving or canceling the resource according to the stream identifier to obtain an execution result of the resource reservation; and notifying the execution result of the resource reservation to the control equipment.
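The node-side reserve/cancel step above can be sketched with a minimal accounting model; the bandwidth and buffer bookkeeping is an assumption, as the document only names the operations.

```python
# Assumed per-node resource table keyed by flow identifier.
class ResourceTable:
    def __init__(self, bandwidth, buffer):
        self.free_bw, self.free_buf = bandwidth, buffer
        self.reservations = {}

    def reserve(self, flow_id, bw, buf):
        # A failed reservation would be reported back to the control device.
        if bw > self.free_bw or buf > self.free_buf:
            return False
        self.free_bw -= bw
        self.free_buf -= buf
        self.reservations[flow_id] = (bw, buf)
        return True

    def cancel(self, flow_id):
        bw, buf = self.reservations.pop(flow_id)
        self.free_bw += bw
        self.free_buf += buf

rt = ResourceTable(bandwidth=1000, buffer=64)
ok = rt.reserve("flow-7", bw=200, buf=8)
rt.cancel("flow-7")                      # cancellation returns the resources
```

Either outcome (success or failure) would then be notified to the control device as the execution result.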

In some embodiments, the method further comprises: after receiving data flow from data source equipment, selecting a flow table according to the grade of the data flow, and matching; performing resource reservation on the network node according to the flow identification of the data flow.

In some embodiments, before selecting a flow table according to the class of the data flow and performing matching, the method further comprises: judging whether the data stream needs to be replicated according to the stream identifier and/or stream type of the data stream; if replication is needed, replicating each packet of the data flow to form a plurality of data flows, which then proceed to flow table matching; and if replication is not needed, proceeding directly to flow table matching.

In some embodiments, the method further comprises: judging whether the network node is the last hop; if it is the last hop, analyzing, according to the packet sequence number in the flow identifier, whether a packet is a duplicate, and deleting it if so; analyzing the arrival time of the data stream according to the stream type, and setting a sending timer according to the timestamp; and when the sending timer fires, sending the data stream to the next hop.
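The last-hop duplicate elimination and timed sending can be sketched as follows; timing is simulated with plain numbers rather than a real timer, and the packet representation is an assumption.

```python
# Assumed model: packets are (seq, send_timestamp) pairs; duplicates from
# the replicated path carry the same sequence number.
def last_hop_process(packets, seen=None):
    """Returns packets to send, duplicates removed, ordered by send time."""
    seen = set() if seen is None else seen
    out = []
    for seq, ts in packets:
        if seq in seen:
            continue                 # duplicate from the replicated path
        seen.add(seq)
        out.append((seq, ts))
    out.sort(key=lambda p: p[1])     # the send timer fires in timestamp order
    return out

sendable = last_hop_process([(1, 30), (2, 10), (1, 31), (3, 20)])
```

Releasing packets strictly at their scheduled times is what eliminates the end-to-end delay jitter mentioned below.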

In the embodiment of the invention, centralized control gives a clear view of the topology and resource situation of the whole network, so more reasonable path and resource reservation decisions can be made. Further, resource reservation at the network nodes ensures that data flows do not lose packets to congestion; replication and duplicate elimination ensure that data flows are not lost to link packet loss, so the end-to-end packet loss rate is kept near zero. Furthermore, through resource reservation and path planning, the worst-case end-to-end delay can be guaranteed not to exceed a preset value, and end-to-end delay jitter is eliminated through packet storage. Furthermore, through resource reservation, bandwidth of common services can be reserved, and high-reliability services can be realized without constructing a private network.

Referring to fig. 4, an embodiment of the present invention provides a network control method, where an execution subject of the method may be a control device, and the method includes: step 401 and step 402.

Step 401: acquiring working state parameters of network nodes;

for example, a periodic heartbeat message sent by the network node is received, where the periodic heartbeat message carries the operating state parameter of the network node.

The operating state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocable buffer; best-effort buffer; allocated buffer; and the remaining allocated buffer.

Step 402: and updating the network topology and the resource view according to the working state parameters of the network nodes.

In some embodiments, the method further comprises: receiving a first message from an application device, the first message requesting service parsing; generating a flow table according to the first message; sending the flow table to the network node.

In some embodiments, the first message includes one or more of: the method comprises the steps of information of a source end, information of a destination end, information of a data stream, a service application type and a service application category identification.

In some embodiments, generating a flow table from the first message comprises:

the service analysis module identifies the service type applied by the application equipment according to the first message; if the applied service type is an applied resource, the service analysis module sends a second message to the path calculation module; the path calculation module acquires a network topology and a resource view and reserved resources of the network nodes from the topology management module according to the second message; the path calculation module carries out path calculation according to the network topology and the resource view and the reserved resources of the network nodes, and carries out end-to-end delay estimation on each path; the path calculation module sends a path set smaller than the maximum delay of the data stream to the resource calculation module; the resource calculation module acquires network topology, resource views and reserved resources of network nodes from the topology management module, performs resource estimation on the paths in the path set, selects paths meeting the resource requirements from the paths, and sends the information of the paths to the flow table generation module; and the flow table generating module generates a flow table according to the information of the path.
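The module pipeline above (service analysis, path computation, resource computation, flow table generation) can be condensed into one sketch; the data structures and the topology representation are illustrative assumptions.

```python
# Assumed condensed pipeline: filter paths by the delay bound, then take
# the first path (smallest delay margin) with enough free bandwidth.
def control_pipeline(request, topology):
    """topology: {path_id: {"delay": ms, "free_bw": Mbps}}."""
    max_delay = request["max_delay"]
    # Path computation: keep paths whose estimated delay meets the bound.
    candidates = [p for p, info in topology.items() if info["delay"] <= max_delay]
    # Resource computation, in ascending order of the delay margin.
    for p in sorted(candidates, key=lambda p: max_delay - topology[p]["delay"]):
        if topology[p]["free_bw"] >= request["bandwidth"]:
            return {"flow": request["flow_id"], "path": p}  # -> flow table
    return None                                             # fed back to the app

topo = {"A": {"delay": 8, "free_bw": 100},
        "B": {"delay": 5, "free_bw": 10}}
table = control_pipeline({"flow_id": "f1", "max_delay": 9, "bandwidth": 50}, topo)
```

Preferring the smallest delay margin matches the document's later point that the path with the minimum difference between the delay requirement and the computed delay is taken as optimal, to reduce jitter.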

It can be understood that reserved resources are counted as occupied, which guarantees that reserved resources cannot be taken by other traffic.

In some embodiments, further comprising: if the path which meets the resource requirement does not exist, the path calculation module informs the service analysis module of the result; and the service analysis module feeds the result back to the application equipment.

In some embodiments, further comprising: the service analysis module receives a third message from the application device, wherein the third message indicates bearer revocation, and the third message carries a data stream identifier; the service analysis module informs a topology management module to release resources related to the data stream identification, and updates a network topology and a resource view; the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identification.

In some embodiments, the path computation module sends the set of paths smaller than the maximum delay of a data flow to the resource computation module as follows: the path computation module determines the set of paths whose delay is smaller than the maximum delay of the data stream; it computes, for each path in the set, the difference between the path's delay and the maximum delay of the data stream; and it sorts the paths by this difference in ascending order and sends them to the resource computation module.

In some embodiments, the service analysis module sends a second message to the path computation module as follows: according to a pre-established service model library, the service analysis module maps the service application class identifier into one or more of the service peak packet rate, the maximum packet length, the end-to-end delay upper limit, the packet loss upper limit, and the network bandwidth, and sends these, together with one or more of the source end, the destination end, the data stream identifier, the service application type, and the service application class identifier, to the path computation module.
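A minimal sketch of the service model library lookup described above; the class numbers and parameter values are invented placeholders, not values from the specification.

```python
# Hypothetical pre-established service model library: class id -> QoS
# parameters (peak packet rate, max packet length, delay/loss bounds,
# bandwidth). All numbers are illustrative.
SERVICE_MODELS = {
    1: {"peak_pps": 1000, "max_pkt_len": 1500,
        "delay_bound_ms": 10, "loss_bound": 1e-5, "bandwidth_mbps": 50},
    2: {"peak_pps": 200, "max_pkt_len": 512,
        "delay_bound_ms": 100, "loss_bound": 1e-3, "bandwidth_mbps": 5},
}

def resolve_service(class_id):
    # The service analysis module would forward these parameters, plus
    # source/destination and flow identity, to the path computation module.
    return SERVICE_MODELS[class_id]

params = resolve_service(1)
```

This is how a one-number application request becomes concrete end-to-end network index requirements.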

In the embodiment of the invention, centralized control gives a clear view of the topology and resource situation of the whole network, so more reasonable path and resource reservation decisions can be made. Further, resource reservation at the network nodes ensures that data flows do not lose packets to congestion; replication and duplicate elimination ensure that data flows are not lost to link packet loss, so the end-to-end packet loss rate is kept near zero. Furthermore, through resource reservation and path planning, the worst-case end-to-end delay can be guaranteed not to exceed a preset value, and end-to-end delay jitter is eliminated through packet storage. Furthermore, through resource reservation, bandwidth of common services can be reserved, and high-reliability services can be realized without constructing a private network.

In the embodiment of the invention, a service application can be converted into end-to-end network index requirements (bandwidth, delay, jitter, and packet loss) over a certain time interval, and the control device performs path computation according to these requirements to generate the flow table. Before path computation, a deterministic network resource view is used, integrating the original SDN network topology view with the network management system; reserved resources are counted as occupied, which guarantees that reserved resources cannot be taken by others. During path computation, the path with the minimum difference between the delay requirement and the computed delay is taken as the optimal path, to reduce network jitter. In the path decision process, the delay and the resources on the path nodes are considered together, guaranteeing validity at the same time.

Referring to fig. 5, the network system is divided into application devices, a control device, and network nodes. The application devices carry the various application requirements and submit them to network control through the northbound interface. The control device builds the latest network topology and resource view of the network, plans, controls, computes, and reserves network paths according to the application requirements, and notifies the application devices and the network node layer of the results; it comprises modules such as link discovery, topology management, service analysis, path computation, resource management, and flow table generation. The network nodes perform classified processing of data flows according to the control requirements and guarantee the resources; they comprise modules for flow identification, classified flow tables, resource reservation, packet replication, packet storage, and duplicate elimination.

The system mainly comprises four processes, namely a network management process, a network control process, resource reservation and data stream processing.

The purpose of the network management flow is to collect the latest network topology and resource view of the system. The purpose of the network control flow is to select a path meeting the requirements according to the application's demands, generate a flow table for the path, and issue it to the switches; each computation of the network control flow requires, and updates, the latest network topology and resource view maintained by the network management flow. The resource reservation flow is the control device applying its resource decisions by reserving resources on each network node. In the data flow processing flow, after a data flow's identity is recognized, a flow table is selected for matching according to the flow's class, a sending timer is set according to the timestamp, and the data flow is sent to the next hop when the timer fires.

The first embodiment is as follows:

referring to fig. 6, a network management flow is illustrated.

Step 1: automatically starting a link discovery module after power-on;

step 2: a control device (alternatively referred to as a controller) uses a Link Layer Discovery Protocol (LLDP) as a Link Discovery Protocol. The link discovery module encapsulates the related information of the control device (such as main capability, management address, device identification, interface identification, etc.) in the LLDP.

Step 3: the control device sends the LLDP packet, through a Packet-out message, to network node 1 (it is understood that a network node may also be referred to as a switch), which is associated with the control device, and network node 1 stores the LLDP packet.

The function of the Packet-out message is to deliver controller-originated data to an OpenFlow switch; it is a message containing a packet-sending command.

Step 4: network node 1 floods the message through all ports. If the neighboring network node 2 also supports OpenFlow forwarding, network node 2 executes its flow table.

Step 5: if network node 2 has no matching flow table, network node 2 sends a request to the control device through a packet_in message. An OpenFlow switch keeps broadcasting the packet to its neighbors; if a non-OpenFlow switch lies on the way, the packet passes through it and reaches another OpenFlow switch, which uploads the first packet to the control device, so the presence of the non-OpenFlow switch becomes known. Other cases are handled in the same way.

The function of the Packet-in message is to send packets arriving at an OpenFlow switch to the controller.


Step 6: the control device collects the packet_in messages and delivers them to the topology management module, which can then draw the network topology and resource view.
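The controller's assembly of a topology view from the collected packet_in reports can be sketched as follows; the (reporting switch, LLDP sender, ingress port) triple is an assumed summary of each report, not the actual OpenFlow message layout.

```python
# Assumed link-discovery summary: each event says "this switch received
# an LLDP frame originated by that neighbor on this port".
def build_topology(packet_in_events):
    """Each event: (reporting_switch, lldp_sender, ingress_port).
    Returns an undirected adjacency map."""
    adj = {}
    for switch, neighbor, _port in packet_in_events:
        adj.setdefault(switch, set()).add(neighbor)
        adj.setdefault(neighbor, set()).add(switch)
    return adj

topo = build_topology([("s2", "s1", 1), ("s3", "s1", 2), ("s3", "s2", 1)])
```

The topology management module would merge this adjacency map with the heartbeat-reported resource parameters to form the full resource view.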

Step 7: after the topology is established, the control device sends periodic heartbeat messages to request the operating state parameters of the switches.

Table 1:

Step 8: after the resource computation matching succeeds, the parameters are updated for the next computation.

Example two:

referring to fig. 7, a network control flow is illustrated.

Step 1: the application device (application layer) sends a request to the service analysis module through the northbound interface.

The request may contain one or more of the following: a source end (core network entry E-NODEB), a destination end (corresponding optional gateway), a data stream ID, a service application type (activation/cancellation), and a service class number (corresponding requirement).

Step 2: the service analysis module identifies the type of the requested service. If it is a resource application, the module maps the service class number, according to a pre-established service model library, into the service peak packet rate, the maximum packet length, the end-to-end delay upper limit, the packet loss upper limit, and the network bandwidth, and sends these to the path computation module together with the source end (core network entry E-NODEB), the destination end (corresponding optional gateway), the data stream ID, the service application type (activation/cancellation), and the service class number (corresponding requirement).

Step 3: after receiving the request, the path computation module obtains the current topology and resource conditions from the topology management module to compute paths.

Step 4: the path computation module performs path computation according to the real-time information from the topology management module and makes an end-to-end delay estimate for each path.

Step 5: the path computation module takes the set of paths whose estimated delay is smaller than the data flow's maximum delay requirement, sorts them in ascending order of the difference, and sends them to the resource computation module (the parameters being the data flow ID, the path IDs (device ID sets), and the end-to-end delay estimates).

Step 6: the resource computation module reads real-time information about the topology and the devices from the topology management module.

Step 7: the resource computation module performs resource estimation point by point in the path order sent by the path computation module.

The first device ID set is selected and each device is checked against its allocable bandwidth and allocable BUFFER; if every device satisfies the requirement, the path is output. If any device fails the check, this path is abandoned and the devices of the next path are compared. Among the remaining paths that also satisfy the requirement, the one with the minimum node overlap with the selected path is chosen as the backup path.
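The point-by-point feasibility check and the minimum-overlap backup choice can be sketched as follows; checking only the allocable buffer, and the path representation as node lists, are simplifying assumptions.

```python
# Every node on a candidate path must have enough allocable buffer.
def feasible(path, node_buffer, need):
    return all(node_buffer[n] >= need for n in path)

def pick_paths(paths, node_buffer, need):
    """paths is already sorted by delay margin; returns (primary, backup)."""
    ok = [p for p in paths if feasible(p, node_buffer, need)]
    if not ok:
        return None, None
    primary = ok[0]
    # Backup: the remaining feasible path sharing the fewest nodes.
    backup = min((p for p in ok[1:]),
                 key=lambda p: len(set(p) & set(primary)),
                 default=None)
    return primary, backup

buffers = {"a": 8, "b": 8, "c": 2, "d": 8, "e": 8}
primary, backup = pick_paths([["a", "b"], ["a", "c"], ["d", "e"]], buffers, need=4)
```

A disjoint backup path is what makes the replication-and-elimination scheme robust against single-link loss.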

Step 8a: if the resource computation module selects a path, it sends the path information to the flow table generation module, which generates a flow table and issues it to the switching devices (here, to improve applicability, the interface from the control device to the switching devices follows the OpenFlow principle, reducing modifications to the devices themselves). Meanwhile, the resource computation module sends the computation result to the topology management module, which updates in real time and sends a success message to the service analysis module.

Step 8b: if no path meets the requirements, the result is notified to the service analysis module.

Step 9: the service analysis module feeds the result back to the application layer.

Step 10: if the application layer issues a bearer cancellation, it sends the data stream ID and the service application type (activation/cancellation) to the service analysis module.

Step 11: the service analysis module notifies the topology management module to release the resources related to the data stream.

Step 12: the topology management module notifies the flow table generation module to delete the flow table entries related to the data stream.

Example three:

referring to fig. 8, a resource reservation procedure is illustrated.

Step 1: the control equipment sends the generated flow tables to each relevant network node one by one;

step 2: after receiving the flow table, the network node updates the multi-level flow table according to the data flow level, and inserts/deletes the forwarding path of the data flow in the flow table of the relevant level;

Step 3: after receiving the resource reservation information, the network node reserves or cancels resources on itself as required;

Step 4: the network node obtains the execution results of the resource reservation and the hierarchical flow table update;

Step 5: the network node notifies the results to the topology management module of the control device, which updates the network topology and resource view.

Example four:

referring to fig. 9, a data processing flow is illustrated.

Step 1: after the data source device starts sending a data stream, the access network node parses the stream number and stream type.

Step 2a: the network node judges whether replication is needed; if so, each packet of the flow is replicated to form two data flows, which then proceed to flow table matching;

Step 2b: if the flow is identified as not requiring replication, processing proceeds directly to the flow table matching stage.

Step 3: a flow table is selected according to the class of the data flow and matched; according to the stream number, the resources reserved on the device, including the buffer area, are used;

Step 4: whether this node is the last hop is judged; if it is the last hop, whether the packet is a duplicate is analyzed, and duplicate packets are deleted;

Step 5: the arrival time is analyzed according to the stream type, and a sending timer is set according to the timestamp;

Step 6: when the sending timer fires, the data stream is sent to the next hop.
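The replication decision of steps 2a/2b above can be sketched as follows; the flag set standing in for the flow identifier / flow type check is an illustrative assumption.

```python
# Assumed model: flows flagged for replication have every packet
# duplicated onto two data flows before flow-table matching.
def maybe_replicate(flow_id, packets, replicated_flows):
    if flow_id in replicated_flows:
        # One copy per path; both copies carry the same sequence numbers
        # so the last hop can eliminate the duplicates (step 4).
        return [list(packets), list(packets)]
    return [list(packets)]

streams = maybe_replicate("f9", [(1, "x"), (2, "y")], replicated_flows={"f9"})
```

Replication here, plus elimination at the last hop, is what drives the end-to-end packet loss rate toward zero despite single-link loss.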

Referring to fig. 10, an embodiment of the present invention provides a network node, where the network node 1000 includes:

a sending module 1001, configured to send the working state parameter of the network node to a control device, so that the control device updates the network topology and the resource view according to the working state parameter of the network node.

The operating state parameters include one or more of: network device type; inherent bandwidth; allocable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocable buffer; best-effort buffer; allocated buffer; and the remaining allocated buffer.

In some embodiments, the sending module 1001 is further configured to: send the operating state parameters of the network node to the control device through periodic heartbeat messages.

In some embodiments, the network node 1000 further comprises:

the first processing module is used for updating the flow table according to the service level of the data flow after receiving the flow table from the control equipment, and inserting or deleting a forwarding path of the data flow in the flow table of the relevant level to obtain an execution result of the grading flow table; notifying the control device of an execution result of the hierarchical flow table.

In some embodiments, the network node 1000 further comprises:

the second processing module is used for reserving or canceling the resources according to the flow identification after receiving the resource reservation information from the control equipment to obtain the execution result of the resource reservation; and notifying the execution result of the resource reservation to the control equipment.

In some embodiments, the network node 1000 further comprises:

the third processing module is used for, after receiving a data flow from the data source device, selecting a flow table according to the class of the data flow and performing matching, and performing resource reservation on the network node according to the flow identifier of the data flow;

in some embodiments, the network node 1000 further comprises:

the fourth processing module is used for judging whether the data stream needs to be replicated according to the stream identifier and/or stream type of the data stream; if replication is needed, replicating each packet of the data flow to form a plurality of data flows, which then proceed to flow table matching; and if replication is not needed, proceeding directly to flow table matching.

In some embodiments, the network node 1000 further comprises:

the fifth processing module is used for judging whether the network node is the last hop; if it is the last hop, analyzing, according to the packet sequence number in the flow identifier, whether a packet is a duplicate, and deleting it if so; analyzing the arrival time of the data stream according to the stream type, and setting a sending timer according to the timestamp; and when the sending timer fires, sending the data stream to the next hop.

The network node provided in the embodiment of the present invention may execute the method embodiment shown in fig. 3; the implementation principles and technical effects are similar and are not described here again.

Referring to fig. 11, an embodiment of the present invention provides a network node, where the network node 1100 includes: a first transceiver 1101 and a first processor 1102;

the first transceiver 1101 transmits and receives data under the control of the first processor 1102;

The first processor 1102 reads a program in the memory to perform the following operations: sending operating state parameters of the network node to a control device, so that the control device updates the network topology and resource view according to the operating state parameters of the network node.

Optionally, the operating state parameters include one or more of: a network device type; an inherent bandwidth; an allocable bandwidth; a best-effort bandwidth; an allocated bandwidth; a remaining allocable bandwidth; an inherent buffer (BUFFER); an allocable buffer; a best-effort buffer; an allocated buffer; and a remaining allocable buffer.

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: sending the operating state parameters of the network node to the control device through periodic heartbeat messages.
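A heartbeat message carrying the operating state parameters might be serialized as follows; the JSON layout is an illustrative assumption, as the specification fixes no encoding:

```python
import json

def build_heartbeat(node_id, operating_state):
    """Serialize one node's operating state parameters into a heartbeat
    message; sent periodically, it lets the control device refresh its
    network topology and resource view."""
    return json.dumps({
        "type": "heartbeat",
        "node_id": node_id,
        "operating_state": operating_state,
    })
```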

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: after receiving a flow table from the control device, updating the flow table according to the service level of the data flow, and inserting or deleting a forwarding path of the data flow in the flow table of the relevant level, to obtain an execution result of the hierarchical flow table; and notifying the control device of the execution result of the hierarchical flow table.

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: after receiving resource reservation information from the control device, reserving or canceling resources according to the flow identifier, to obtain an execution result of the resource reservation; and notifying the control device of the execution result of the resource reservation.

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: after receiving a data flow from the data source device, selecting a flow table according to the level of the data flow and performing flow table matching.

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: judging whether the data flow needs to be replicated according to the flow identifier and/or the flow type of the data flow; if replication is needed, copying each packet of the data flow to form a plurality of data flows and then proceeding to flow table matching; and if replication is not needed, proceeding directly to flow table matching.

In some embodiments, the first processor 1102 reads a program in memory to perform the following operations: judging whether the network node is the last hop; if the network node is the last hop, determining, according to the packet sequence number in the flow identifier, whether a packet is a repeated packet, and deleting the repeated packet if so; determining the arrival time of the data flow according to the flow type, and setting a sending timer according to the timestamp; and when the sending timer expires, sending the data flow to the next hop.

The network node provided in the embodiment of the present invention may execute the method embodiment shown in fig. 3; the implementation principles and technical effects are similar and are not described here again.

Referring to fig. 12, an embodiment of the present invention provides a control device 1200, including:

an obtaining module 1201, configured to obtain operating state parameters of a network node;

an updating module 1202, configured to update the network topology and resource view according to the operating state parameters of the network node.

Optionally, the operating state parameters include one or more of: a network device type; an inherent bandwidth; an allocable bandwidth; a best-effort bandwidth; an allocated bandwidth; a remaining allocable bandwidth; an inherent buffer (BUFFER); an allocable buffer; a best-effort buffer; an allocated buffer; and a remaining allocable buffer.

In some embodiments, the obtaining module 1201 is further configured to: receive a periodic heartbeat message sent by the network node, where the periodic heartbeat message carries the operating state parameters of the network node.
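On the control device side, refreshing the resource view from received heartbeats can be sketched as follows (the class and field names are illustrative assumptions):

```python
class ResourceView:
    """Controller-side view of per-node resources, refreshed whenever a
    heartbeat message arrives from a network node."""

    def __init__(self):
        self.nodes = {}  # node_id -> latest operating state parameters

    def update_from_heartbeat(self, heartbeat):
        """Replace the stored state for a node with the reported one."""
        self.nodes[heartbeat["node_id"]] = heartbeat["operating_state"]

    def remaining_bandwidth(self, node_id):
        """Derive remaining allocable bandwidth for path computation."""
        state = self.nodes[node_id]
        return state["allocable_bandwidth"] - state["allocated_bandwidth"]
```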

In some embodiments, the control device 1200 further comprises:

a sixth processing module, configured to: receive a first message from an application device, where the first message requests service analysis; generate a flow table according to the first message; and send the flow table to the network node.

In some embodiments, the first message includes one or more of: information of a source end; information of a destination end; information of a data flow; a service application type; and a service application type identifier.

In some embodiments, the control device 1200 further comprises: a service analysis module, a path computation module, a resource calculation module, a topology management module, and a flow table generation module;

the service analysis module identifies the service type applied for by the application device according to the first message; if the applied-for service type is a resource application, the service analysis module sends a second message to the path computation module; the path computation module acquires, from the topology management module according to the second message, the network topology and resource view and the reserved resources of the network nodes; the path computation module performs path computation according to the network topology and resource view and the reserved resources of the network nodes, and performs end-to-end delay estimation on each path; the path computation module sends the set of paths whose delay is smaller than the maximum delay of the data flow to the resource calculation module; the resource calculation module acquires the network topology and resource view and the reserved resources of the network nodes from the topology management module, performs resource estimation on the paths in the path set, selects a path meeting the resource requirement, and sends the information of the path to the flow table generation module; and the flow table generation module generates a flow table according to the information of the path.
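The delay filtering and resource selection steps of this pipeline can be sketched as follows; the per-path keys are illustrative assumptions, not fixed by the specification:

```python
def select_path(paths, max_delay, required_bandwidth):
    """Keep the paths whose estimated end-to-end delay is below the
    flow's maximum delay, then return the first one whose remaining
    bandwidth meets the resource requirement."""
    candidate_set = [p for p in paths if p["delay"] < max_delay]
    for path in candidate_set:
        if path["bandwidth"] >= required_bandwidth:
            return path  # this path's info goes to the flow table generator
    return None  # no path meets the requirement; fed back to the application
```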

In some embodiments, if there is no path that meets the resource requirement, the path computation module notifies the service analysis module of the result;

and the service analysis module feeds the result back to the application device.

In some embodiments, the service analysis module receives a third message from the application device, where the third message indicates bearer revocation and carries a data flow identifier;

the service analysis module notifies the topology management module to release the resources related to the data flow identifier, and updates the network topology and resource view;

and the topology management module notifies the flow table generation module to delete the flow table entries related to the data flow identifier.

In some embodiments, the path computation module determines the set of paths whose delay is smaller than the maximum delay of the data flow;

the path computation module determines, for each path in the set, the difference between the delay of the path and the maximum delay of the data flow;

and the path computation module sorts the paths in ascending order of the difference and sends them to the resource calculation module.
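The ordering step above can be sketched as follows, assuming each path is represented as a (name, delay) tuple:

```python
def order_candidate_paths(paths, max_delay):
    """Keep paths whose delay is below the flow's maximum delay, then
    sort them in ascending order of the difference (max_delay - delay)
    before handing them to resource estimation."""
    candidates = [(name, delay) for name, delay in paths if delay < max_delay]
    return sorted(candidates, key=lambda path: max_delay - path[1])
```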

In some embodiments, the service analysis module maps, according to an established service model library, the service application type identifier into one or more of: a service peak packet rate, a maximum packet length, an end-to-end delay upper limit, a packet loss upper limit, and a network bandwidth; and sends these, together with one or more of the source end, the destination end, the data flow identifier, the service application type, and the service application type identifier, to the path computation module.
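The mapping through a service model library can be sketched as follows; the identifiers and QoS values are illustrative assumptions, not taken from the specification:

```python
# Illustrative service model library: type identifier -> QoS parameters.
SERVICE_MODEL_LIBRARY = {
    "industrial-control": {
        "peak_packet_rate_pps": 1000,
        "max_packet_length_bytes": 256,
        "end_to_end_delay_upper_ms": 2,
        "packet_loss_upper": 1e-6,
        "network_bandwidth_mbps": 2,
    },
    "video-stream": {
        "peak_packet_rate_pps": 3000,
        "max_packet_length_bytes": 1500,
        "end_to_end_delay_upper_ms": 50,
        "packet_loss_upper": 1e-3,
        "network_bandwidth_mbps": 25,
    },
}

def resolve_service(type_identifier, library=SERVICE_MODEL_LIBRARY):
    """Map a service application type identifier to its QoS parameters,
    as the service analysis module does before invoking path computation."""
    if type_identifier not in library:
        raise KeyError(f"unknown service application type: {type_identifier}")
    return dict(library[type_identifier])
```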

The control device provided in the embodiment of the present invention may execute the method embodiment shown in fig. 4; the implementation principles and technical effects are similar and are not described here again.

Referring to fig. 13, an embodiment of the present invention provides a control device, including: a second transceiver 1301 and a second processor 1302;

the second transceiver 1301 transmits and receives data under the control of the second processor 1302;

The second processor 1302 reads a program in the memory to perform the following operations: acquiring operating state parameters of a network node; and updating the network topology and resource view according to the operating state parameters of the network node.

Optionally, the operating state parameters include one or more of: a network device type; an inherent bandwidth; an allocable bandwidth; a best-effort bandwidth; an allocated bandwidth; a remaining allocable bandwidth; an inherent buffer (BUFFER); an allocable buffer; a best-effort buffer; an allocated buffer; and a remaining allocable buffer.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: receiving a periodic heartbeat message sent by the network node, where the periodic heartbeat message carries the operating state parameters of the network node.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: receiving a first message from an application device, where the first message requests service analysis; generating a flow table according to the first message; and sending the flow table to the network node.

In some embodiments, the first message includes one or more of: information of a source end; information of a destination end; information of a data flow; a service application type; and a service application type identifier.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: identifying the service type applied for by the application device according to the first message; if the applied-for service type is a resource application, sending a second message to the path computation module through the service analysis module; the path computation module acquires, from the topology management module according to the second message, the network topology and resource view and the reserved resources of the network nodes; the path computation module performs path computation according to the network topology and resource view and the reserved resources of the network nodes, and performs end-to-end delay estimation on each path; the path computation module sends the set of paths whose delay is smaller than the maximum delay of the data flow to the resource calculation module; the resource calculation module acquires the network topology and resource view and the reserved resources of the network nodes from the topology management module, performs resource estimation on the paths in the path set, selects a path meeting the resource requirement, and sends the information of the path to the flow table generation module; and the flow table generation module generates a flow table according to the information of the path.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: if there is no path that meets the resource requirement, notifying the service analysis module of the result through the path computation module; and feeding, by the service analysis module, the result back to the application device.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: receiving, through the service analysis module, a third message from the application device, where the third message indicates bearer revocation and carries a data flow identifier; notifying, through the service analysis module, the topology management module to release the resources related to the data flow identifier, and updating the network topology and resource view; and notifying, by the topology management module, the flow table generation module to delete the flow table entries related to the data flow identifier.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: determining, by the path computation module, the set of paths whose delay is smaller than the maximum delay of the data flow; determining, by the path computation module, for each path in the set, the difference between the delay of the path and the maximum delay of the data flow; and sorting, by the path computation module, the paths in ascending order of the difference and sending them to the resource calculation module.

In some embodiments, the second processor 1302 reads a program in memory to perform the following operations: mapping, through the service analysis module according to an established service model library, the service application type identifier into one or more of: a service peak packet rate, a maximum packet length, an end-to-end delay upper limit, a packet loss upper limit, and a network bandwidth; and sending these, together with one or more of the source end, the destination end, the data flow identifier, the service application type, and the service application type identifier, to the path computation module.

The control device provided in the embodiment of the present invention may execute the method embodiment shown in fig. 4; the implementation principles and technical effects are similar and are not described here again.

Referring to fig. 14, fig. 14 is a structural diagram of a communication device applied in the embodiment of the present invention, as shown in fig. 14, the communication device 1400 includes: a processor 1401, a transceiver 1402, a memory 1403, and a bus interface, wherein:

in one embodiment of the present invention, the communication device 1400 further comprises: a program stored on the memory 1403 and executable on the processor 1401, which when executed by the processor 1401 performs the steps in the embodiments shown in fig. 3-4.

In fig. 14, the bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by processor 1401, and various circuits, represented by memory 1403, linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver 1402 may be a plurality of elements including a transmitter and a receiver providing a means for communicating with various other apparatus over a transmission medium, it being understood that the transceiver 1402 is an optional component.

The processor 1401 is responsible for managing a bus architecture and general processing, and the memory 1403 may store data used by the processor 1401 in performing operations.

The communication device provided in the embodiment of the present invention may execute the method embodiments shown in fig. 3 to fig. 4, which have similar implementation principles and technical effects, and this embodiment is not described herein again.

The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a compact disc, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also reside as discrete components in a core network interface device.

Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.

The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.
