Message matching method, device, network equipment and medium

Document No.: 1821901  Publication date: 2021-11-09

Note: this technology, "Message matching method, device, network equipment and medium" (一种报文匹配方法、装置、网络设备及介质), was created by Wang Yang (王洋) on 2021-06-25. Its main content is as follows: an embodiment of the present application provides a message matching method, apparatus, network device, and medium, relating to the field of communications technology. The method includes: adding N first data messages to N pipelines and setting the stage of each pipeline to the root node of a decision tree; calculating a first hash value for the first of the N pipelines, asynchronously prefetching from memory the first output interface data corresponding to the first hash value and storing it in a cache, and calculating a second hash value for the second pipeline while the first output interface data is being prefetched; once the hash value of every one of the N pipelines has been calculated, obtaining the first output interface data from the cache; and, when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, deleting that message from the first pipeline and, when a second data message is received, adding the second data message to the first pipeline. This speeds up message matching.

1. A message matching method is characterized by comprising the following steps:

adding N first data messages into N pipelines, setting a stage of each pipeline as a root node of a decision tree, wherein each node in the decision tree represents a prefix length and the prefix lengths represented by the nodes are different;

calculating a first hash value of a first pipeline in the N pipelines, asynchronously prefetching first output interface data corresponding to the first hash value from a memory and storing it in a cache, and calculating a second hash value of a second pipeline in the N pipelines while the first output interface data is being prefetched from the memory; repeating this process of asynchronously prefetching the output interface data corresponding to a hash value from the memory into the cache while calculating the hash value of the next pipeline, until the hash value of every one of the N pipelines has been calculated; wherein the hash value of a pipeline is a hash of the destination address of the data message in the pipeline and the prefix length represented by the stage of the pipeline;

when the hash value of every one of the N pipelines has been calculated, acquiring the first output interface data from the cache;

and, when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, deleting the first data message from the first pipeline, and, when a second data message is received, adding the second data message to the first pipeline.

2. The method of claim 1, wherein after the first output interface data is retrieved from the cache, the method further comprises:

and when the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline, updating the stage of the first pipeline to the right child node of the root node, and updating the output interface information in the first pipeline to the first output interface data.

3. The method of claim 1 or 2, wherein after the first output interface data is retrieved from the cache, the method further comprises:

judging whether the right child node of the root node in the decision tree is empty;

if so, determining that the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

if not, determining that the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

4. The method of claim 1, further comprising:

and if the first output interface data corresponding to the first hash value cannot be acquired from the cache, updating the stage of the first pipeline to the left child node of the root node.

5. The method of claim 1, further comprising:

when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, updating the stage of the first pipeline to a matching-complete state;

the deleting the first data message in the first pipeline includes:

after the stage of each of the N pipelines has been updated once, deleting, from every pipeline whose stage is matching-complete, the first data message in that pipeline.

6. A message matching apparatus, comprising:

a setting module, configured to add N first data messages into N pipelines and set a stage of each pipeline as a root node of a decision tree, wherein each node in the decision tree represents a prefix length and the prefix lengths represented by different nodes are different;

a prefetching module, configured to calculate a first hash value of a first pipeline in the N pipelines, asynchronously prefetch first output interface data corresponding to the first hash value from a memory and store it in a cache, calculate a second hash value of a second pipeline in the N pipelines while the first output interface data is being prefetched from the memory, and repeat this process of asynchronously prefetching the output interface data corresponding to a hash value from the memory into the cache while calculating the hash value of the next pipeline, until the hash value of every one of the N pipelines has been calculated; wherein the hash value of a pipeline is a hash of the destination address of the data message in the pipeline and the prefix length represented by the stage of the pipeline;

an obtaining module, configured to obtain the first output interface data from the cache when the hash value of every one of the N pipelines has been calculated;

wherein the setting module is further configured to delete the first data message in the first pipeline from the first pipeline when the first output interface data indicates that it is used for forwarding the first data message, and to add a second data message to the first pipeline when the second data message is received.

7. The apparatus of claim 6, further comprising:

an updating module, configured to update the stage of the first pipeline to the right child node of the root node and update the output interface information in the first pipeline to the first output interface data when the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

8. The apparatus of claim 6 or 7, further comprising: a determination module configured to:

judge whether the right child node of the root node in the decision tree is empty;

if so, determine that the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

if not, determine that the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

9. The apparatus of claim 6, further comprising:

an updating module, configured to update the stage of the first pipeline to the left child node of the root node if the first output interface data corresponding to the first hash value is not obtained from the cache.

10. The apparatus of claim 6, further comprising:

an updating module, configured to update the stage of the first pipeline to a matching-complete state when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

wherein the setting module is specifically configured to, after the stage of each of the N pipelines has been updated once, delete, from every pipeline whose stage is matching-complete, the first data message in that pipeline.

11. A network device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;

a memory for storing a computer program;

a processor, configured to implement the method steps of any one of claims 1 to 5 when executing the program stored in the memory.

12. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 5.

Technical Field

The present application relates to the field of communications technologies, and in particular, to a method, an apparatus, a network device, and a medium for packet matching.

Background

At present, network devices such as switches and routers use a Forwarding Information Base (FIB) to direct the forwarding of Internet Protocol (IP) messages and Named Data Networking (NDN) messages. During message forwarding, the output interface used to forward a message is obtained from the FIB according to the longest-match principle: the FIB entry containing that output interface is the entry whose prefix is the longest prefix in the FIB matching the destination address.

FIB entries are stored in a hash table in key-value form, where the hash value of prefix/prefix-length serves as the key and the output interface corresponding to that prefix/prefix-length serves as the value. To speed up message matching, a dynamic decision tree is introduced. Each node in the decision tree represents a possible prefix length; the root node is the prefix length with the highest hit rate during message matching, and the other possible prefix lengths are placed on the child nodes of the tree in descending order of matching hit rate.

When a message to be forwarded is received, its destination address is matched by searching the decision tree to obtain the corresponding output interface: the hash value of the destination address and the prefix length of the current decision-tree node is calculated, and the hash table in memory is then searched for an output interface corresponding to that hash value. The CPU of the network device must access memory many times to complete the matching of one message, so when many messages to be forwarded arrive, matching is slow and the messages cannot be forwarded in time.

Disclosure of Invention

An object of the embodiments of the present application is to provide a message matching method, apparatus, network device, and medium, so as to speed up message matching and forward messages in time. The specific technical scheme is as follows:

in a first aspect, the present application provides a packet matching method, including:

adding N first data messages into N pipelines, setting a stage of each pipeline as a root node of a decision tree, wherein each node in the decision tree represents a prefix length and the prefix lengths represented by the nodes are different;

calculating a first hash value of a first pipeline in the N pipelines, asynchronously prefetching first output interface data corresponding to the first hash value from a memory and storing it in a cache, and calculating a second hash value of a second pipeline in the N pipelines while the first output interface data is being prefetched from the memory; repeating this process of asynchronously prefetching the output interface data corresponding to a hash value from the memory into the cache while calculating the hash value of the next pipeline, until the hash value of every one of the N pipelines has been calculated; wherein the hash value of a pipeline is a hash of the destination address of the data message in the pipeline and the prefix length represented by the stage of the pipeline;

when the hash value of every one of the N pipelines has been calculated, acquiring the first output interface data from the cache;

and, when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, deleting the first data message from the first pipeline, and, when a second data message is received, adding the second data message to the first pipeline.
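The interleaved compute/prefetch loop described above can be sketched in a few lines. The following is a minimal Python simulation, not the actual implementation: a real forwarding plane would issue an asynchronous CPU cache prefetch (for example, a compiler prefetch intrinsic) and overlap it with the next hash computation, while here the "prefetch" is modeled as a dictionary lookup staged into a local cache. All names and table contents are illustrative.

```python
# Simulated interleaving of per-pipeline hash computation with prefetching
# of the previous pipeline's output-interface data. `fib_memory` stands in
# for the in-memory hash table; `cache` stands in for the CPU cache.

def pipeline_hashes_with_prefetch(pipelines, fib_memory):
    cache = {}
    prev_key = None
    for pl in pipelines:
        # hash of (destination address, prefix length of the current stage)
        key = hash((pl["dst"], pl["stage"]))
        pl["hash"] = key
        if prev_key is not None:
            # "prefetch" the previous pipeline's output interface data
            # while this pipeline's hash is being computed
            cache[prev_key] = fib_memory.get(prev_key)
        prev_key = key
    if prev_key is not None:
        cache[prev_key] = fib_memory.get(prev_key)  # drain the last prefetch
    return cache

fib = {hash(("10.20.0.1", 16)): "interface 3"}
pls = [{"dst": "10.20.0.1", "stage": 16}, {"dst": "10.0.0.1", "stage": 8}]
c = pipeline_hashes_with_prefetch(pls, fib)
print(c[pls[0]["hash"]])  # interface 3
```

Once every hash value has been computed, the loop that consumes the cache can check each pipeline's result, delete matched messages, and refill freed pipelines with newly received messages, as the steps above describe.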

In one possible implementation, after the first egress interface data is obtained from the cache, the method further includes:

and when the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline, updating the stage of the first pipeline to the right child node of the root node, and updating the output interface information in the first pipeline to the first output interface data.

In one possible implementation, after the first egress interface data is obtained from the cache, the method further includes:

judging whether the right child node of the root node in the decision tree is empty;

if so, determining that the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

if not, determining that the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

In one possible implementation, the method further includes:

and if the first output interface data corresponding to the first hash value cannot be acquired from the cache, updating the stage of the first pipeline to the left child node of the root node.

In one possible implementation, the method further includes:

when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, updating the stage of the first pipeline to a matching-complete state;

the deleting the first data message in the first pipeline includes:

after the stage of each of the N pipelines has been updated once, deleting, from every pipeline whose stage is matching-complete, the first data message in that pipeline.

In a second aspect, the present application provides a packet matching apparatus, including:

a setting module, configured to add N first data messages into N pipelines and set a stage of each pipeline as a root node of a decision tree, wherein each node in the decision tree represents a prefix length and the prefix lengths represented by different nodes are different;

a prefetching module, configured to calculate a first hash value of a first pipeline in the N pipelines, asynchronously prefetch first output interface data corresponding to the first hash value from a memory and store it in a cache, calculate a second hash value of a second pipeline in the N pipelines while the first output interface data is being prefetched from the memory, and repeat this process of asynchronously prefetching the output interface data corresponding to a hash value from the memory into the cache while calculating the hash value of the next pipeline, until the hash value of every one of the N pipelines has been calculated; wherein the hash value of a pipeline is a hash of the destination address of the data message in the pipeline and the prefix length represented by the stage of the pipeline;

an obtaining module, configured to obtain the first output interface data from the cache when the hash value of every one of the N pipelines has been calculated;

wherein the setting module is further configured to delete the first data message in the first pipeline from the first pipeline when the first output interface data indicates that it is used for forwarding the first data message, and to add a second data message to the first pipeline when the second data message is received.

In one possible implementation, the apparatus further includes:

an updating module, configured to update the stage of the first pipeline to the right child node of the root node and update the output interface information in the first pipeline to the first output interface data when the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

In one possible implementation, the apparatus further includes: a determination module configured to:

judge whether the right child node of the root node in the decision tree is empty;

if so, determine that the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

if not, determine that the first output interface data does not indicate that it is used for forwarding the first data message in the first pipeline.

In one possible implementation, the apparatus further includes:

an updating module, configured to update the stage of the first pipeline to the left child node of the root node if the first output interface data corresponding to the first hash value is not obtained from the cache.

In one possible implementation, the apparatus further includes:

an updating module, configured to update the stage of the first pipeline to a matching-complete state when the first output interface data indicates that it is used for forwarding the first data message in the first pipeline;

wherein the setting module is specifically configured to, after the stage of each of the N pipelines has been updated once, delete, from every pipeline whose stage is matching-complete, the first data message in that pipeline.

In a third aspect, an embodiment of the present application provides a network device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;

a memory for storing a computer program;

a processor configured to implement the method steps of the first aspect when executing the program stored in the memory.

In a fourth aspect, the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method in the first aspect.

In a fifth aspect, embodiments of the present application further provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the message matching method described in the first aspect.

With the message matching method, apparatus, network device, and medium provided by the embodiments of the present application, N first data messages can be matched through N pipelines. After the N first data messages are added to the N pipelines, the first hash value of the first pipeline can be calculated, the first output interface data corresponding to the first hash value asynchronously prefetched from memory into a cache, and the second hash value of the second pipeline calculated while the prefetch is in flight. Thus, when the first output interface data is needed, it can be obtained directly from the cache without accessing memory, reducing the time required for message matching. Moreover, if the first output interface data indicates that it is used for forwarding the first data message in the first pipeline, that message is deleted from the first pipeline and the second data message is added to the first pipeline for processing. In the prior art, a received second data message can be processed only after the first data messages in all N pipelines have been forwarded; in the embodiments of the present application, a second data message can be added to a pipeline and start being processed as soon as a data message is deleted from that pipeline, which speeds up the matching and forwarding of received data messages.

Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.

Drawings

In order to more clearly illustrate the embodiments of the present application and the technical solutions of the prior art, the following briefly introduces the drawings required for the embodiments and the prior art, and obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.

FIG. 1 is an exemplary diagram of a decision tree provided by an embodiment of the present application;

FIG. 2 is an exemplary diagram of another decision tree provided by an embodiment of the present application;

FIG. 3 is an exemplary diagram of a matrix for generating a decision tree according to an embodiment of the present disclosure;

fig. 4a and fig. 4b are exemplary diagrams of a process for generating a decision tree according to an embodiment of the present application;

Figs. 5a, 5b and 5c are schematic diagrams illustrating another process for generating a decision tree according to an embodiment of the present application;

fig. 6 is a flowchart of a message matching method according to an embodiment of the present application;

fig. 7 is a schematic structural diagram of a message matching apparatus according to an embodiment of the present application;

fig. 8 is a schematic structural diagram of a network device according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below by referring to the accompanying drawings and examples. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

For ease of understanding, the related concepts related to the embodiments of the present application will be first described.

Firstly, an FIB table entry.

The FIB entry is used to guide the forwarding of IPv4/IPv6/NDN messages. The core structure of an FIB entry is: prefix/prefix-length + output interface, where the prefix length indicates how much of the prefix constitutes a valid match.

1. Taking the FIB of IPv4 as an example, the FIB includes the following two entries:

Entry 1: IP prefix: 10.0.0.0/8; output interface: interface 2.

Entry 2: IP prefix: 10.20.0.0/16; output interface: interface 3.

In "10.0.0.0/8", "10.0.0.0" is the prefix and "8" is the prefix length, meaning that if the destination address of a received message matches the first 8 bits of the prefix (the leading "10"), the message is forwarded through interface 2.

Likewise, in "10.20.0.0/16", the prefix is "10.20.0.0" and the prefix length is "16", meaning that if the destination address of a received message matches the first 16 bits of the prefix (the leading "10.20"), the message is forwarded through interface 3.

In addition, the longest-match rule is followed when a message is matched against FIB entries: if the destination address of the message matches multiple entries, the output interface in the matching entry with the longest prefix length is selected.

For example, if the destination IP address of a received message is 10.0.0.1, it matches only Entry 1, so the network device selects output interface 2 in Entry 1 as the forwarding interface of the message.

As another example, if the destination IP address of a received message is 10.20.0.1, it matches both Entry 1 and Entry 2; since the prefix length 16 of Entry 2 is longer than the prefix length 8 of Entry 1, the network device selects output interface 3 in Entry 2 as the forwarding interface of the message.
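The longest-match behavior of the two examples above can be checked with a short sketch. This is a naive linear scan over the example entries, not the hash-based scheme of this application; it is only meant to make the rule concrete.

```python
# Longest-prefix match over the two example IPv4 entries, using the
# standard-library ipaddress module to test prefix containment.
import ipaddress

FIB = [
    ("10.0.0.0/8", "interface 2"),    # Entry 1
    ("10.20.0.0/16", "interface 3"),  # Entry 2
]

def longest_prefix_match(dst):
    best_len, best_if = -1, None
    for cidr, egress in FIB:
        net = ipaddress.ip_network(cidr)
        # keep the matching entry with the longest prefix
        if ipaddress.ip_address(dst) in net and net.prefixlen > best_len:
            best_len, best_if = net.prefixlen, egress
    return best_if

print(longest_prefix_match("10.0.0.1"))   # interface 2
print(longest_prefix_match("10.20.0.1"))  # interface 3
```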

2. Taking the FIB of an NDN network as an example: an NDN network is similar to an IP network but replaces the IP address with a directory-like name that supports text characters.

For example, the FIB includes three entries as follows:

Entry 1: NDN prefix: /Book/Fiction/Science; output interface: interface 1.

Entry 2: NDN prefix: /Book; output interface: interface 2.

Entry 3: NDN prefix: /Shoe; output interface: interface 3.

Here "/" separates the levels of a prefix; the NDN FIB also follows the longest-prefix-match principle.

For example, suppose the destination address of a received message matches both Entry 1 and Entry 2. The prefix length of Entry 1 is 3 and that of Entry 2 is 1, so the network device uses output interface 1 of Entry 1 as the forwarding interface of the message.

As another example, if the destination address of a received message is /Book/Fiction/Antient, it is determined to match only Entry 2, so the network device uses output interface 2 of Entry 2 as the forwarding interface of the message.

An IP network can be seen as a special case of an NDN network: NDN supports arbitrary characters and prefix components of arbitrary length, whereas an IP network segments addresses at the level of 0/1 bits, and IP messages have a fixed maximum prefix length: 128 bits for IPv6 and 32 bits for IPv4.

Second, the HASH FIB.

The prefix information and output interface information in the FIB entries are stored in a hash table in memory in key-value form. Suppose the FIB includes the following entries:

After a message is received, its destination address must be matched against the prefix of each entry, and the output interface corresponding to the longest matching prefix is used as the forwarding interface of the message. However, each entry matched requires one memory access, so message matching takes a long time.
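The per-probe cost described above can be sketched as follows. The FIB table itself is not shown in this excerpt, so the entries below are illustrative, mirroring the /a-style names used in the discussion that follows; the point is that every probe of the hash table corresponds to one memory access.

```python
# Naive matching over a hash FIB: probe each candidate prefix of the
# destination, longest first, counting one "memory access" per probe.
HASH_FIB = {"/a": "interface 2", "/a/b/c/d": "interface 3"}  # illustrative

def naive_lpm(dst_components):
    accesses = 0
    for n in range(len(dst_components), 0, -1):
        prefix = "/" + "/".join(dst_components[:n])
        accesses += 1              # one memory access per probed entry
        if prefix in HASH_FIB:
            return HASH_FIB[prefix], accesses
    return None, accesses

egress, probes = naive_lpm(["a", "b", "c", "d", "f"])
print(egress, probes)  # interface 3 2
```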

To speed up message matching, binary search can be used to find the entry matching the destination address. For a message with destination address /a/b/c/d/f, binary search first tries the prefix /a/b/c, which does not exist in the FIB entries. If the network device then wrongly concludes that no prefix longer than /a/b/c can match, it will look for a shorter prefix, such as /a, in the FIB and forward the message through output interface 2 in Entry 2. In fact, the longest prefix matching the destination address is /a/b/c/d in Entry 3, so the message is forwarded incorrectly.

To solve the problem that the HASH FIB cannot be searched correctly with binary search, virtual entries (Virtual Entry) may be added to the HASH FIB.

Still taking this HASH FIB as an example, the prefixes on the path of Entry 1 are: /a, /a/b, /a/b/c, /a/b/c/d. Among these, /a and /a/b/c/d already exist, so two virtual entries with prefixes /a/b and /a/b/c need to be added for Entry 1.

The prefix length of Entry 2 is 1, so no virtual entry needs to be added for it.

The prefixes on the path of Entry 3 are: /a, /a/b, /a/b/c. Since /a already exists, virtual entries with prefixes /a/b and /a/b/c can be added.

The path of Entry 4 includes the prefix /f, so a virtual entry with prefix /f may be added.

After the virtual entries are added, the resulting HASH FIB includes:

after the virtual Entry is supplemented, the table Entry can be searched in a dichotomy mode. For example, if the destination address of the received message is/a/b/w/x/y/z. The absence of/a/b/w is determined by searching the HASH FIB, so that the prefix matching which is better than/a/b/w can be determined to be absent for/a/b/w/x/y/z, and therefore binary search can be performed directly from/a/b/w in a recursive mode, and the searching speed of the HASH FIB is greatly increased. The number of searches may be increased from N to Log2(N), where N is the maximum prefix length, such as 128 for an IPv6 network.

Third, the optimal decision tree.

To further improve the message matching speed, in the embodiments of the present application all possible prefix lengths can be organized into a decision tree. The tree is built so that prefix lengths with higher matching hit rates are closer to the root, and the prefix length represented by each node's left child is shorter than that represented by its right child.

For example, if most messages received by the network device hit a prefix of length 128 (hereinafter abbreviated as Prefix 128), the generated decision tree is as shown in fig. 1: Prefix 128 is the root node, and all other prefix lengths lie on its left branch. Since the maximum prefix length of an IPv6 message is 128 bits, no prefix longer than 128 exists, and the right child of the root node is empty.

After a message hitting Prefix 128 is received, its output interface can be determined by matching the root node of the decision tree just once.

For another example, if most messages received by the network device hit a prefix of length 127 (i.e., Prefix 127), the generated decision tree is as shown in fig. 2: Prefix 127 is the root node, Prefix 128 is its right child, and the other prefix lengths lie on its left branch.

After a message hitting Prefix 127 is received, it is first matched against the root node Prefix 127, and the match succeeds. Under the longest-prefix-match principle, it must then be matched against the right child Prefix 128; this match fails, and since Prefix 128 has no left or right branch, the message is finally determined to match the root node. The output interface is thus determined after matching the decision tree twice.
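The two walks just described can be sketched as a loop over tree nodes: a hit moves to the right child (a longer match may exist), a miss moves to the left child (only shorter prefixes can match). The tree below follows the Prefix 127 example of fig. 2, with the left subtree of shorter prefix lengths omitted for brevity; names and table contents are illustrative.

```python
# Decision-tree lookup: each node holds a prefix length; the FIB maps
# (prefix bits, prefix length) to an output interface.
class Node:
    def __init__(self, length, left=None, right=None):
        self.length, self.left, self.right = length, left, right

# root Prefix 127 with right child Prefix 128 (fig. 2); left subtree omitted
tree = Node(127, right=Node(128))

def lookup(dst_bits, table, node):
    best = None
    while node is not None:
        key = (dst_bits[:node.length], node.length)
        if key in table:
            best = table[key]   # hit: remember it, try longer prefixes
            node = node.right
        else:
            node = node.left    # miss: try shorter prefixes
    return best

addr = "1" * 128  # destination address as a bit string
table = {(addr[:127], 127): "interface 1"}
print(lookup(addr, table, tree))  # interface 1
```

The walk visits exactly two nodes here (root hit, right child miss), matching the two-step count in the text.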

The decision tree can be generated according to the probability that forwarded messages hit each prefix length. For ease of description, the symbols used in the embodiments of the present application are as follows:

P_X represents the probability of a message hitting Prefix X;

Cost(m, n) represents the expected number of matching steps consumed by the optimal decision tree covering prefix m to prefix n.

Taking IPv6 as an example, the goal of building the optimal decision tree is to solve Cost(1,128), i.e., to find the decision tree from prefix 1 to prefix 128 that minimizes the expected number of steps consumed.

For Cost(1,128), assume that 50 is selected as the root node. A packet then enters the left branch of node 50 with probability P_1 + P_2 + … + P_49, and enters the right branch with probability P_50 + P_51 + … + P_128.

The expected number of steps consumed when 50 is selected as the root node is: 1 + (P_1 + P_2 + … + P_49) × Cost(1,49) + (P_50 + P_51 + … + P_128) × Cost(51,128).

In the above formula, 1 is the step consumed by matching the root node of the decision tree, Cost(1,49) is the expected steps consumed by matching in the left branch, and Cost(51,128) is the expected steps consumed by matching in the right branch.

The root node should be selected so that Cost(1,128) is minimized: the fewer steps matching the decision tree consumes, the faster the overall matching speed of the messages.

Therefore, Cost(1,128) = min over j of (1 + (P_1 + P_2 + … + P_(j-1)) × Cost(1, j-1) + (P_j + P_(j+1) + … + P_128) × Cost(j+1, 128)), where 1 <= j <= 128, and Cost(j+1, 128) is defined as 0 when j+1 > 128.

The values of Cost(1,1), Cost(2,2), …, Cost(n,n) are all fixed at 1.

Then, for Cost (m, n), the formula is:

Cost(m,n) = min over j of (1 + (P_(m-1) + P_m + … + P_(j-1)) / (P_(m-1) + P_m + … + P_n) × Cost(m, j-1) + (P_j + … + P_n) / (P_(m-1) + P_m + … + P_n) × Cost(j+1, n)).

where m <= j <= n; Cost(j+1, n) is defined as 0 when j+1 > n, and likewise Cost(m, j-1) is 0 when j-1 < m. If m-1 = 0, P_(m-1) is taken as 0.

There are two cases when matching messages in the subtree from prefix m to prefix n. In the first, the longest prefix the message has matched so far is m-1, and Cost(m,n) is entered only to confirm whether a match better than prefix m-1 exists. In the second, the longest prefix the message matches is longer than m-1, so a better match must be found among prefixes m to n.

For the first case, suppose the probability of a packet hitting m-1 is high but the probability of hitting m is low. If the probability of hitting m-1 were ignored when calculating Cost(m,n), the node corresponding to prefix m in the decision tree of Cost(m,n) could end up far from its root, so that confirming the longest match m-1 would consume more steps. Therefore, in the embodiment of the application, the probability of a message hitting m-1 is also taken into account when calculating Cost(m,n), which speeds up packet matching.
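The Cost(m, n) recurrence can be sketched in Python with memoization. The probabilities below are the ones used in the worked example later in the text (with a maximum prefix length of 5 rather than 128 to keep the sketch small); because intermediate values here are not truncated to two decimal places, results may differ from the text's figures in the last digit (e.g. about 2.05 rather than 2.04 for Cost(1,5)).

```python
from functools import lru_cache

# P[x]: probability of a message hitting Prefix x (from the worked example).
P = {1: 0.20, 2: 0.05, 3: 0.10, 4: 0.10, 5: 0.55}
MAXLEN = 5

def prob(x):
    return P.get(x, 0.0)           # P_(m-1) with m-1 == 0 is taken as 0

@lru_cache(maxsize=None)
def cost(m, n):
    """Expected steps of the optimal subtree covering prefixes m..n,
    returned together with the split point that attains the minimum."""
    if m > n:
        return 0.0, None            # empty range consumes no steps
    total = sum(prob(x) for x in range(m - 1, n + 1))
    best = None
    for j in range(m, n + 1):       # candidate split point (subtree root)
        left = sum(prob(x) for x in range(m - 1, j)) / total
        right = sum(prob(x) for x in range(j, n + 1)) / total
        c = 1 + left * cost(m, j - 1)[0] + right * cost(j + 1, n)[0]
        if best is None or c < best[0]:
            best = (c, j)
    return best

c, split = cost(1, MAXLEN)
print(split, round(c, 2))           # split point 5, expected steps about 2.05
```

The recovered split points (5 for Cost(1,5), 2 for Cost(1,4), 4 for Cost(3,4)) reproduce the final decision tree of fig. 5c.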

In combination with the definition of Cost (m, n), the optimal decision tree can be dynamically generated by a matrix.

For IPv6 networks, the maximum prefix length is 128 bits, so a 128 × 128 matrix can be defined. Each cell of the matrix holds two pieces of information: the Cost of that cell, and the split point selected when computing that Cost.

For example, point (2,5) records Cost (2,5) and the split point selected when Cost (2,5) is calculated.

To compute the optimal decision tree through a matrix, the Cost(m,n) and split point (Split) are first calculated for the points on the main diagonal of the matrix. The diagonal is then shifted right by one cell as a whole, the Cost and split point of each point on the shifted diagonal are calculated, and the shifting continues until every point to the right of the main diagonal has been computed.

Taking a 5 × 5 matrix as an example, the calculation order is shown in fig. 3.

The first round starts to calculate the Cost and the split point corresponding to the point at the left oblique line shadow from the upper left to the lower right.

And the second round starts to calculate Cost and a splitting point corresponding to the point at the cross line shadow from the upper left to the lower right.

And the third round starts to calculate the Cost and the splitting point corresponding to the point at the vertical line shadow from the upper left to the lower right.

And the fourth round calculates the Cost and the split point corresponding to the point at the right oblique line shadow from the upper left to the lower right.

The fifth round calculates the Cost and split points corresponding to the points at the grid shadow.

And then, traversing from (1,5) of the matrix to obtain a final decision tree.

Assuming that the Split of node (1,5) is 2, the resulting decision tree shape is shown in FIG. 4a, with the root node being prefix2, the left child node corresponding to (1,1) in the matrix, and the right child node corresponding to (3,5) in the matrix. Assuming that (3,5) corresponds to a Split of 4, the optimal decision tree shape is constructed as shown in fig. 4 b.

The method for constructing the optimal decision tree is described below with a specific example, taking a maximum prefix length of 5 and constructing a 5 × 5 matrix.

The network device may periodically calculate the probability of the received packet hitting each prefix length, and generate an optimal decision tree based on the probability, so that the prefix length with higher hit rate is closer to the root node.

Suppose that: the probability of hitting Prefix1 is 20%.

The probability of hitting Prefix2 is 5%.

The probability of hitting Prefix3 is 10%.

The probability of hitting Prefix4 is 10%.

The probability of hitting Prefix5 is 55%.

First, the Cost and split point of each point on the diagonal of the matrix, Cost(1,1), Cost(2,2), …, Cost(5,5), are calculated. These values are fixed at 1 and have no split point, so the calculation results are shown in table 1.

TABLE 1

Next, Cost (1,2), Cost (2,3), Cost (3,4), Cost (4,5) are calculated

In calculating Cost (1,2), 1 or 2 can be selected as the split point,

Cost(1,2) when 1 is selected as the split point is: 1 + (20% + 5%)/(20% + 5%) × Cost(2,2) = 2;

Cost(1,2) when 2 is selected as the split point is: 1 + 20%/(20% + 5%) × Cost(1,1) = 1.8.

Therefore, the split point of Cost (1,2) should be 2, and Cost (1,2) should be 1.8.

In calculating Cost (2,3), 2 or 3 may be selected as the split point,

When 2 is selected as the split point, Cost(2,3) = 1 + (5% + 10%)/(20% + 5% + 10%) × Cost(3,3) ≈ 1.42;

when 3 is selected as the split point, Cost (2,3) ═ 1+ (20% + 5%)/(20% + 5% + 10%). Cost (2,2) ═ 1.71;

therefore, the split point of Cost (2,3) should be 2, and Cost (2,3) should be 1.42.

In calculating Cost (3,4), 3 or 4 may be selected as the split point,

When 3 is selected as the split point, Cost(3,4) = 1 + (10% + 10%)/(5% + 10% + 10%) × Cost(4,4) = 1.8;

When 4 is selected as the split point, Cost(3,4) = 1 + (5% + 10%)/(5% + 10% + 10%) × Cost(3,3) = 1.6.

therefore, the split point of Cost (3,4) should be 4, and Cost (3,4) should be 1.6.

In calculating Cost (4,5), 4 or 5 may be selected as the split point

When 4 is selected as the split point, Cost(4,5) = 1 + (10% + 55%)/(10% + 10% + 55%) × Cost(5,5) ≈ 1.86;

When 5 is selected as the split point, Cost(4,5) = 1 + (10% + 10%)/(10% + 10% + 55%) × Cost(4,4) ≈ 1.26.

Therefore, the split point of Cost (4,5) should be 5, and Cost (4,5) is 1.26.

The matrix of table 1 is updated at this time as the following table 2.

TABLE 2

Next, Cost (1,3), Cost (2,4), Cost (3,5) are calculated

For Cost (1,3), 1,2 or 3 can be chosen as the split point:

When 1 is selected as the split point, Cost(1,3) = 1 + (20% + 5% + 10%)/(20% + 5% + 10%) × Cost(2,3) ≈ 2.42;

When 2 is selected as the split point, Cost(1,3) = 1 + 20%/(20% + 5% + 10%) × Cost(1,1) + (5% + 10%)/(20% + 5% + 10%) × Cost(3,3) = 2;

When 3 is selected as the split point, Cost(1,3) = 1 + (20% + 5%)/(20% + 5% + 10%) × Cost(1,2) ≈ 2.28.

therefore, the split point of Cost (1,3) should be 2, and Cost (1,3) should be 2.

For Cost (2,4), 2,3 or 4 may be chosen as the split point:

When 2 is selected as the split point, Cost(2,4) = 1 + (5% + 10% + 10%)/(20% + 5% + 10% + 10%) × Cost(3,4) = 2.15;

When 3 is selected as the split point, Cost(2,4) = 1 + (20% + 5%)/(20% + 5% + 10% + 10%) × Cost(2,2) + (10% + 10%)/(20% + 5% + 10% + 10%) × Cost(4,4) = 2;

When 4 is selected as the split point, Cost(2,4) = 1 + (20% + 5% + 10%)/(20% + 5% + 10% + 10%) × Cost(2,3) ≈ 2.10.

therefore, the split point of Cost (2,4) should be 3, and Cost (2,4) should be 2.

For Cost (3,5), 3,4 or 5 may be selected as the split point:

When 3 is selected as the split point, Cost(3,5) = 1 + (10% + 10% + 55%)/(5% + 10% + 10% + 55%) × Cost(4,5) ≈ 2.18;

When 4 is selected as the split point, Cost(3,5) = 1 + (5% + 10%)/(5% + 10% + 10% + 55%) × Cost(3,3) + (10% + 55%)/(5% + 10% + 10% + 55%) × Cost(5,5) = 2;

When 5 is selected as the split point, Cost(3,5) = 1 + (5% + 10% + 10%)/(5% + 10% + 10% + 55%) × Cost(3,4) = 1.5.

therefore, the split point of Cost (3,5) should be 5, and Cost (3,5) should be 1.5.

The matrix shown in table 2 is now updated as table 3 below:

TABLE 3

Cost (1,4) and Cost (2,5) are then calculated.

For Cost (1,4), split points of 1,2,3 or 4 may be selected.

When 1 is selected as the split point, Cost(1,4) = 1 + (20% + 5% + 10% + 10%)/(20% + 5% + 10% + 10%) × Cost(2,4) = 3;

When 2 is selected as the split point, Cost(1,4) = 1 + 20%/(20% + 5% + 10% + 10%) × Cost(1,1) + (5% + 10% + 10%)/(20% + 5% + 10% + 10%) × Cost(3,4) ≈ 2.33;

When 3 is selected as the split point, Cost(1,4) = 1 + (20% + 5%)/(20% + 5% + 10% + 10%) × Cost(1,2) + (10% + 10%)/(20% + 5% + 10% + 10%) × Cost(4,4) ≈ 2.44;

When 4 is selected as the split point, Cost(1,4) = 1 + (20% + 5% + 10%)/(20% + 5% + 10% + 10%) × Cost(1,3) ≈ 2.55.

the split point of Cost (1,4) should be 2, and Cost (1,4) is 2.33.

For Cost (2,5), alternative split points are 2,3,4 or 5.

When 2 is selected as the split point, Cost(2,5) = 1 + (5% + 10% + 10% + 55%)/100% × Cost(3,5) = 2.20;

When 3 is selected as the split point, Cost(2,5) = 1 + (20% + 5%)/100% × Cost(2,2) + (10% + 10% + 55%)/100% × Cost(4,5) ≈ 2.195;

When 4 is selected as the split point, Cost(2,5) = 1 + (20% + 5% + 10%)/100% × Cost(2,3) + (10% + 55%)/100% × Cost(5,5) ≈ 2.147;

When 5 is selected as the split point, Cost(2,5) = 1 + (20% + 5% + 10% + 10%)/100% × Cost(2,4) = 1.9.

Therefore, the split point of Cost (2,5) should be 5, and Cost (2,5) should be 1.9.

The matrix shown in table 3 is now updated as table 4 below:

TABLE 4

Finally, Cost (1,5) is calculated, with optional Split points of 1,2,3,4 or 5.

When 1 is selected as the split point, Cost(1,5) = 1 + (20% + 5% + 10% + 10% + 55%)/100% × Cost(2,5) = 2.9;

When 2 is selected as the split point, Cost(1,5) = 1 + 20%/100% × Cost(1,1) + 80%/100% × Cost(3,5) = 2.4;

When 3 is selected as the split point, Cost(1,5) = 1 + 25%/100% × Cost(1,2) + 75%/100% × Cost(4,5) ≈ 2.395;

When 4 is selected as the split point, Cost(1,5) = 1 + 35%/100% × Cost(1,3) + 65%/100% × Cost(5,5) = 2.35;

When 5 is selected as the split point, Cost(1,5) = 1 + 45%/100% × Cost(1,4) ≈ 2.04.

Therefore, the split point of Cost (1,5) should be 5, and Cost (1,5) should be 2.04.

The matrix shown in table 4 is now updated as table 5 below:

TABLE 5

Starting from (1,5) of the matrix, the split points are then traversed layer by layer to generate a decision tree.

First, the split point of (1,5) is 5, and in this case, the tree shape is as shown in fig. 5a, with prefix5 as the root node, the left side branches of the root node being prefix1 to prefix4, and the right side branches being empty.

Then, it can be seen from the matrix that the split point of (1,4) is 2; the shape of the decision tree is shown in fig. 5b, where the left branch of split point 2 is prefix1 and its right branch covers prefix3 to prefix4.

The split point of (3,4) is 4, so the shape of the decision tree is as shown in fig. 5c: the left branch of split point 4 is prefix3 and there is no right branch. Fig. 5c is the generated optimal decision tree.
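The layer-by-layer traversal can be sketched as follows. The SPLIT table holds the split points derived in the worked example, and the nested-dict node representation is merely illustrative.

```python
# Split points recorded in the matrix of the worked example (table 5).
SPLIT = {(1, 5): 5, (1, 4): 2, (1, 1): 1, (3, 4): 4, (3, 3): 3}

def build(m, n):
    """Rebuild the decision tree for the cell (m, n) from its split point."""
    if m > n:
        return None                      # empty branch
    j = SPLIT[(m, n)]                    # root of the subtree covering m..n
    return {"prefix": j,
            "left": build(m, j - 1),     # shorter prefixes
            "right": build(j + 1, n)}    # longer prefixes

tree = build(1, 5)
print(tree["prefix"])                    # 5: Prefix5 is the root, as in fig. 5c
print(tree["left"]["prefix"])            # 2
print(tree["left"]["right"]["prefix"])   # 4
```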

In the embodiment of the present application, the optimal decision tree is cached in the CPU cache as an array structure, while the corresponding FIB is stored in memory in key-value form. During packet matching, memory must be accessed frequently to obtain the output interface corresponding to a prefix length, so the packet matching speed is relatively slow.

To accelerate the matching speed, a data prefetch instruction (Prefetch) can hint to the CPU that certain memory data is about to be accessed, so that the memory management module of the CPU fetches the data to be accessed from memory asynchronously.

However, because the left and right branches of the optimal decision tree are not balanced, the search path lengths of the packets to be forwarded differ. For example, if packet 1 and packet 2 are matched at the same time, packet 1 may determine its interface after one decision tree match while packet 2 needs 5 matches. After packet 1 has been matched once, its forwarding can be completed; in the following rounds only packet 2 is actually being matched, so the prefetch operation no longer overlaps any useful computation, and the problem of slow packet matching remains.

In order to solve the above problem, an embodiment of the present application provides a packet matching method, as shown in fig. 6, where the method includes:

s601, adding N first data messages into N pipelines, and setting the stage (stage) of each pipeline as a root node of a decision tree.

Wherein, the decision trees described below are all optimal decision trees. Each node in the decision tree represents a prefix length. N may be preset based on the processing capability of the CPU, and as an example, N may be 2.

Suppose that the current network device receives two data messages, which are respectively a message 1 and a message 2, and the destination addresses of the message 1 and the message 2 are respectively IPv6Addr1 and IPv6Addr 2. Message 1 may be added to pipeline1 and message 2 may be added to pipeline2, with the stages of pipeline1 and pipeline2 both being root nodes of the decision tree. Since the root node of the decision tree is the prefix length with the maximum probability of being matched, in the embodiment of the application, each message to be forwarded is matched from the root node of the decision tree, so that the message matching speed can be improved.

S602, calculating a first hash value of the first pipeline of the N pipelines, asynchronously prefetching the first output interface data corresponding to the first hash value from memory and storing it in the cache, and calculating a second hash value of the second pipeline of the N pipelines while the first output interface data is being prefetched; this process of asynchronously prefetching the output interface data corresponding to a hash value into the cache while computing the hash value of the next pipeline is repeated until the hash value of every one of the N pipelines has been calculated.

The hash value of each pipeline is the hash of the destination address of the data packet in the pipeline together with the prefix length represented by the pipeline's stage.

The process of prefetching the first output interface data from the memory is asynchronous operation, and calculating the second hash value of the second pipeline of the N pipelines while prefetching the first output interface data from the memory means: after the first hash value of the first pipeline is calculated, the thread of the CPU continues to calculate the hash value of the second pipeline, and meanwhile, the memory management module of the CPU starts to asynchronously prefetch the first output interface data from the memory.

Similarly, after the thread of the CPU calculates the hash value of the second pipeline, the thread of the CPU continues to calculate the hash value of the third pipeline no matter whether the memory management module of the CPU prefetches the first output interface data.

Continuing with the example in the previous step, assuming the prefix length of the root node is PrefixLen1, the network device may calculate the hash of IPv6Addr1 and PrefixLen1 to obtain HashValue1. The prefetch instruction of the CPU of the network device is executed asynchronously, so that the memory management module of the CPU asynchronously obtains the output interface data corresponding to HashValue1 from memory and caches it in the CPU cache.

Executing the prefetch instruction is an asynchronous operation: after the CPU thread has computed HashValue1, it immediately continues to compute the hash of IPv6Addr2 and PrefixLen1 to obtain HashValue2 and then issues the next prefetch instruction. Prefetching does not interrupt the CPU thread as it computes the hash value of each of the N pipelines in turn.
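The interleaving of hash computation and asynchronous prefetching can be illustrated with a small Python model. Here prefetch() merely records that the asynchronous load was issued before any result is consumed; the addresses and the root prefix length are placeholders, not values from the embodiment.

```python
trace = []

def hash_fn(addr, prefix_len):
    trace.append(f"hash({addr})")       # CPU thread work
    return hash((addr, prefix_len))

def prefetch(hash_value):
    trace.append("prefetch")            # asynchronous: does not block the thread

pipelines = [{"addr": "IPv6Addr1"}, {"addr": "IPv6Addr2"}]
root_len = 64                           # assumed prefix length of the root node

hashes = []
for p in pipelines:                     # phase 1: hash + issue prefetch, no reads
    hv = hash_fn(p["addr"], root_len)
    prefetch(hv)
    hashes.append(hv)

for hv in hashes:                       # phase 2: by now the data is in cache
    trace.append("read")

print(trace)
```

The trace shows that every prefetch is issued before the first read, which is exactly the window in which the asynchronous memory access overlaps the remaining hash computations.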

And S603, when the hash value of each pipeline in the N pipelines is calculated, acquiring the first output interface data from the cache.

Because the CPU has already obtained the output interface data corresponding to HashValue1 from memory and cached it, by the time the network device has finished computing the hash values of the N pipelines and needs the output interface data corresponding to HashValue1, that data is already in the CPU cache, so the network device can obtain it directly from the cache.

Similarly, after the HashValue2 is obtained by the network device through calculation, the egress interface data corresponding to the HashValue2 is asynchronously prefetched and cached in the cache of the CPU. Furthermore, when the network device needs to acquire the outgoing interface data corresponding to HashValue2, the network device does not need to access the memory, but can directly acquire the outgoing interface data corresponding to HashValue2 from the cache.

Accordingly, the network device may sequentially obtain the output interface data corresponding to the hash of each pipeline from the cache.

S604, when the first output interface data is characterized to be used for forwarding the first data message in the first pipeline, deleting the first data message in the first pipeline from the first pipeline, and when receiving the second data message, adding the second data message into the first pipeline.

For each pipeline, if the output interface data corresponding to the hash value of the pipeline is characterized as being used for forwarding the data packet in the pipeline, forwarding the data packet in the pipeline through the output interface corresponding to the hash value of the pipeline, and deleting the data packet in the pipeline from the pipeline.

In this embodiment of the present application, after a first data packet in a first pipeline is deleted from the first pipeline, the first pipeline is changed into an idle state, and if a network device receives a second data packet or a second data packet that does not match an output interface exists in the network device, the second data packet is added to the first pipeline, and a stage of the first pipeline is set as a root node of a decision tree.

By adopting the embodiment of the application, the N first data messages can be matched through the N pipelines. After the N first data messages are added to the N pipelines, the first hash value of the first pipeline can be calculated, the first output interface data corresponding to the first hash value is asynchronously prefetched from memory into the cache, and the second hash value of the second pipeline is calculated while the first output interface data is being prefetched. Therefore, when the first output interface data is needed, it can be obtained directly from the cache without accessing memory, reducing the time required for message matching. Moreover, if the first output interface data is characterized as being used for forwarding the first data message in the first pipeline, the first data message is deleted from the first pipeline and a second data message is added to that pipeline for processing. Compared with the prior art, in which a newly received second data message can only be processed after the first data messages in all N pipelines have been forwarded, in the embodiment of the application a second data message can be added to a pipeline and begin processing as soon as any data message is deleted from that pipeline, which speeds up the matching and forwarding of received data messages.

The embodiments of the present application do not limit the storage structure of the HASH FIB in memory. As an example, in one embodiment of the present application, FIB digests are stored in hash buckets (Bucket) of a hash table. Each bucket is a 64-byte storage space that can hold 8 FIB digests, and each FIB digest includes a tag (TAG), calculated as a hash of IPv6Addr and PrefixLen, and a data pointer (DataPointer) indicating the storage location of the actual FIB entry (FIB ENTRY).
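The described bucket layout can be sketched as follows. The table size, the field names, and the use of the same value as both bucket index and tag are simplifying assumptions of this sketch, not details fixed by the embodiment.

```python
BUCKETS = 1024                                   # assumed hash table size

def make_table():
    return [[] for _ in range(BUCKETS)]          # each bucket: list of digests

def fib_insert(table, tag, data_pointer):
    bucket = table[tag % BUCKETS]
    assert len(bucket) < 8, "bucket overflow"    # 8 digests per 64-byte bucket
    bucket.append((tag, data_pointer))           # one FIB digest

def fib_lookup(table, tag):
    for t, ptr in table[tag % BUCKETS]:          # scan the digests in the bucket
        if t == tag:
            return ptr                           # points at the actual FIB entry
    return None

table = make_table()
fib_insert(table, tag=0xABCD, data_pointer="fib_entry_1")
print(fib_lookup(table, 0xABCD))                 # fib_entry_1
print(fib_lookup(table, 0x1234))                 # None: no matching digest
```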

Based on the above storage structure, take the case where the current network device receives two first data packets and the CPU of the network device provides two pipelines. Assume the received Packet1 and Packet2 have destination addresses IPv6Addr1 and IPv6Addr2 respectively; Packet1 is added to pipeline1, Packet2 is added to pipeline2, and the root node of the decision tree is PrefixLen1. The above process can be expressed as:

HashValue1 = Hash(IPv6Addr1, PrefixLen1); // calculate the hash of IPv6Addr1 and PrefixLen1 to obtain HashValue1;

Prefetch(&Bucket[HashValue1]); // hint that the CPU is about to access Bucket[HashValue1], so that the CPU prefetches the FIB digests in Bucket[HashValue1];

HashValue2 = Hash(IPv6Addr2, PrefixLen1); // calculate the hash of IPv6Addr2 and PrefixLen1 to obtain HashValue2;

Prefetch(&Bucket[HashValue2]); // hint that the CPU is about to access Bucket[HashValue2], so that the CPU prefetches the FIB digests in Bucket[HashValue2];

DataPointer1 = Bucket[HashValue1]; // the network device reads Bucket[HashValue1]; since its contents are already in the cache, no memory access is needed and the CPU does not stall waiting for data. After obtaining the FIB digests in Bucket[HashValue1], the device determines whether a digest corresponding to HashValue1 exists; if so, the device can subsequently obtain the output interface data pointed to by DataPointer1.

Prefetch(DataPointer1); // hint that the CPU is about to access DataPointer1, so that the CPU prefetches the corresponding output interface data from memory into the cache.

DataPointer2 = Bucket[HashValue2]; // the network device reads Bucket[HashValue2]; since the FIB digests in Bucket[HashValue2] are already in the cache, no memory access is needed and the CPU does not stall waiting for data. After obtaining the FIB digests in Bucket[HashValue2], the device determines whether a digest corresponding to HashValue2 exists; if so, the device can subsequently obtain the output interface data pointed to by DataPointer2.

Prefetch(DataPointer2); // hint that the CPU is about to access DataPointer2, so that the CPU prefetches the corresponding output interface data from memory into the cache.

Access DataPointer1; // the output interface data corresponding to DataPointer1 is already in the cache, so no memory access is needed and the CPU does not wait.

Access DataPointer2; // the output interface data corresponding to DataPointer2 is already in the cache, so no memory access is needed and the CPU does not wait.

Update IPv6Addr1 Stage & OutputInterface; // update the Stage and output interface of the pipeline to which the IPv6Addr1 message belongs;

Update IPv6Addr2 Stage & OutputInterface; // update the Stage and output interface of the pipeline to which the IPv6Addr2 message belongs.

In this way, whenever the network device needs data from memory, the data has already been fetched and cached in advance, saving the time the network device would otherwise spend accessing memory and thereby increasing the packet matching speed.

In another embodiment of the present application, after the first output interface data is acquired from the cache in step S603, the method further includes:

judging whether the right child node of the root node in the decision tree is empty or not; if yes, determining that the first output interface data is characterized as a first data message used for forwarding a first pipeline; if not, determining that the first output interface data is not characterized as being used for forwarding the first data message in the first pipeline.

For each node in the decision tree, the prefix length of the left child node of the node is smaller than the prefix length of the right child node. If the first outgoing interface data is obtained and the right child node of the root node is empty, it is indicated that no better matching exists in the decision tree, so that the first outgoing interface data can be determined to be characterized as being used for forwarding the first data message in the first pipeline, matching of the message to be forwarded is completed, and the first data message in the first pipeline is forwarded through the first outgoing interface data.

If the right child node of the root node is not null, it indicates that a prefix length longer than the one represented by the root node exists in the HASH FIB. According to the longest-match principle, it must still be determined whether the first data packet in the first pipeline can hit that longer prefix length, so the first output interface data may not be the optimal match at this time; that is, the first output interface data is not yet characterized as being used for forwarding the first data packet in the first pipeline.

In this embodiment of the present application, when the first egress interface data is not characterized to be used for forwarding the first data packet in the first pipeline, the stage of the first pipeline is updated to be the right child node of the root node, and the egress interface information in the first pipeline is updated to be the first egress interface data. Therefore, in the next round of matching process, the first data message in the first pipeline is matched with the updated prefix length represented by the stage.

In the next round of matching process, calculating the hash value of the prefix length represented by the first data message and the updated stage in the first pipeline, and asynchronously acquiring the output interface data corresponding to the hash value. And if the output interface data corresponding to the hash value is not acquired, the first output interface data is characterized as being used for forwarding the first data message in the first pipeline. And if the output interface data corresponding to the hash value is acquired, updating the output interface information in the first pipeline into the output interface data acquired this time.

In another embodiment of the present application, if the first output interface data corresponding to the first hash value is not obtained from the cache, the stage of the first pipeline is updated to the left child node of the root node.

If the first output interface data corresponding to the first hash value is not obtained from the cache, it is indicated that the first output interface data corresponding to the first hash value is not prefetched from the memory, that is, the first data packet in the first pipeline is not matched with the prefix length represented by the root node of the decision tree, and should be matched with the shorter prefix length in the next round of matching process, so that the stage of the first pipeline can be updated to the left child node of the root node.

It can be understood that, in this embodiment of the present application, after the first data packet in each pipeline has been matched once, the stage of the pipeline needs to be updated once. Taking the first pipeline as an example, there are the following specific cases:

In case one, the first output interface data corresponding to the first hash value of the first pipeline is obtained from the cache and is characterized as being used for forwarding the first data packet in the first pipeline; the stage of the first pipeline is then updated to matching complete (FINISH).

In case two, the first output interface data corresponding to the first hash value of the first pipeline is obtained from the cache but is not characterized as being used for forwarding the first data packet in the first pipeline; the stage of the first pipeline is then updated to the right child node of the current stage.

For example, if the current stage of the first pipeline is the root node, the stage of the first pipeline is updated to the right child node of the root node.

In case three, the first output interface data corresponding to the first hash value of the first pipeline is not obtained from the cache; the stage of the first pipeline is then updated to the left child node of the current stage.

For example, if the current stage of the first pipeline is the root node, the stage of the first pipeline is updated to be the left child node of the root node.

In S603 above, when the hash value of every one of the N pipelines has been calculated, the network device may obtain the output interface data corresponding to each pipeline's hash value from the cache in turn, and update each pipeline's stage in turn according to whether that output interface data was obtained.

The method for updating the stage of each pipeline is the same as the method for updating the stage of the first pipeline described in the embodiment of the application.

After the stage of each of the N pipelines completes one update, the first data packet in the pipeline whose stage is matched can be deleted from the pipeline, so that other data packets received by the network device can be added to the pipeline, and the matching and forwarding efficiency of the received data packets can be improved.

It should be noted that, in one case, if after the stage of each of the N pipelines has been updated once, no pipeline's stage has reached matching complete, the next round of matching is performed on the first data packets in the N pipelines. The method of performing the next round of matching is the same as the method of S602-S604 described above.

In another case, if other data packets received by the network device are to be added to the pipelines but no idle pipeline exists at that time, or if the network device has no other data packets waiting to be forwarded, the next round of matching is performed on the data packets already in the N pipelines.

Therefore, in the embodiment of the application, N data packets can be matched through the N pipelines in parallel; whenever any data packet completes matching, it is deleted from its pipeline and another packet to be forwarded is added, which speeds up the matching of packets to be forwarded.
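The delete-and-refill behavior described above can be sketched as a round-robin loop over the pipelines. `run_pipelines` and `match_one_pass` are hypothetical names; `match_one_pass` stands in for one advance operation over the decision tree:

```python
from collections import deque

def run_pipelines(packets, n, match_one_pass):
    """Keep at most n packets in flight. match_one_pass(pkt) performs one
    advance operation and returns True once the packet's stage reaches
    matching complete (FINISH)."""
    pending = deque(packets)
    pipelines = [pending.popleft() if pending else None for _ in range(n)]
    forwarded = []
    while any(p is not None for p in pipelines):
        for i, pkt in enumerate(pipelines):
            if pkt is None:
                continue
            if match_one_pass(pkt):
                forwarded.append(pkt)                     # forward the packet
                pipelines[i] = pending.popleft() if pending else None
            # otherwise the packet stays in its pipeline for the next round
    return forwarded
```

A finished packet frees its pipeline immediately, so a short-lived packet never waits behind a packet that needs many passes in another pipeline.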

The pipeline processing procedure is described below using the decision tree shown in fig. 5c as an example.

Assume the number of pipelines is 2, that is, at most 2 messages to be forwarded are processed simultaneously.

Suppose there are 4 messages to be forwarded in the current network device, sorted by receive time from earliest to latest: Packet1, Packet2, Packet3 and Packet4. Assume the prefix length each message hits is as follows:

Packet1 hits Prefix5;

Packet2 hits Prefix3;

Packet3 hits Prefix5;

Packet4 hits Prefix2.

Assume that an Output interface (Output interface) corresponding to Prefix1 is 1, an Output interface corresponding to Prefix2 is 2, an Output interface corresponding to Prefix3 is 3, an Output interface corresponding to Prefix4 is 4, and an Output interface corresponding to Prefix5 is 5.

First, Packet1 and Packet2 are added to Pipeline1 and Pipeline2 respectively, as shown in table 6:

TABLE 6

After one pass of processing (an advance operation) on the pipelines, the destination address of Packet1 matches Prefix5, and as can be seen from fig. 5c, the right child node of Prefix5 is empty, so Packet1 has completed matching and the Stage of Pipeline1 is updated to matching complete (FINISH). The destination address of Packet2 fails to match Prefix5, so Packet2 next needs to try Prefix2, and the Stage of Pipeline2 is updated to 2. The pipelines at this time are shown in table 7:

TABLE 7

At this point, after one pass of processing on each pipeline is completed, each pipeline is checked for a Stage marked as matching complete. As can be seen from table 7, Packet1 has completed matching, so Packet1 can be deleted from Pipeline1 and forwarded through egress interface 5.

After Packet1 is deleted from Pipeline1, Pipeline1 becomes idle, and Packet3 can be added to Pipeline1. Packet2 remains in Pipeline2 because it has not yet completed matching. The pipelines at this time are shown in table 8:

TABLE 8

After the pipelines are processed once more, each pipeline is as shown in table 9:

TABLE 9

From table 9 it can be seen that Packet3 has completed matching, so Packet3 is deleted from Pipeline1 and Packet4 is added to Pipeline1. The pipelines at this time are shown in table 10:

TABLE 10

After the pipelines are processed once more, each pipeline is as shown in table 11:

TABLE 11

After the pipelines are processed once more, each pipeline is as shown in table 12:

TABLE 12

At this point, Packet2 has completed matching and is deleted from Pipeline2. Since there is no newly received packet to be forwarded in the network device, Pipeline2 becomes empty; after Pipeline1 is processed once more, each pipeline is as shown in table 13.

TABLE 13

After one further pass of processing, the pipelines are as shown in table 14.

TABLE 14

At this time, the processing of Packet1, Packet2, Packet3, and Packet4 is completed.
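The walkthrough above can be reproduced with a small simulation. Fig. 5c is not reproduced here, so the tree shape below is inferred from the sequence of tables, and the per-packet match sets are hypothetical values chosen so that each packet's longest match equals its stated hit:

```python
FINISH = "FINISH"

class Node:
    def __init__(self, prefix_len, left=None, right=None):
        self.prefix_len = prefix_len
        self.left, self.right = left, right

# Tree shape inferred from the table sequence (an assumption, not fig. 5c itself):
# hit -> right child (try a longer prefix), miss -> left child (try a shorter one).
tree = Node(5, left=Node(2, left=Node(1), right=Node(4, left=Node(3))))

# Hypothetical per-packet sets of matching prefix lengths; the maximum of each
# set equals the prefix the packet is stated to hit.
matches = {"Packet1": {5}, "Packet2": {2, 3}, "Packet3": {5}, "Packet4": {2}}
out_iface = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}  # Prefix_i -> output interface i

def advance(pkt, stage):
    """One advance operation: returns (next stage, interface hit on this pass)."""
    if stage.prefix_len in matches[pkt]:      # models a hash-table hit
        iface, nxt = out_iface[stage.prefix_len], stage.right
    else:                                     # miss: fall back to shorter prefixes
        iface, nxt = None, stage.left
    return (FINISH if nxt is None else nxt), iface

def simulate(packets, n=2):
    pending = list(packets)
    pipes = [[pending.pop(0), tree, None] if pending else None for _ in range(n)]
    order = []                                # (packet, forwarding interface)
    while any(pipes):
        for i, p in enumerate(pipes):
            if p is None:
                continue
            pkt, stage, best = p
            stage, iface = advance(pkt, stage)
            best = iface if iface is not None else best
            if stage == FINISH:               # delete from the pipeline, refill it
                order.append((pkt, best))
                pipes[i] = [pending.pop(0), tree, None] if pending else None
            else:
                pipes[i] = [pkt, stage, best]
    return order
```

Under these assumptions the simulation forwards Packet1 through interface 5 first, then Packet3 (interface 5), Packet2 (interface 3), and finally Packet4 (interface 2), matching the order of tables 6 through 14.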

Corresponding to the foregoing method embodiment, an embodiment of the present application further provides a packet matching apparatus, as shown in fig. 7, where the apparatus includes:

a setting module 701, configured to add N first data packets to N pipelines, and set a stage of each pipeline as a root node of a decision tree, where each node in the decision tree represents a prefix length and the prefix lengths represented by each node are different;

a prefetch module 702, configured to calculate a first hash value of a first pipeline of the N pipelines, asynchronously prefetch first output interface data corresponding to the first hash value from the memory and store it in the cache, and calculate a second hash value of a second pipeline of the N pipelines while the first output interface data is being prefetched from the memory; the process of asynchronously prefetching the output interface data corresponding to a hash value from the memory, storing it in the cache, and calculating the next pipeline's hash value while prefetching is repeated until the hash value of every one of the N pipelines has been calculated; a pipeline's hash value is computed over the destination address of the data packet in the pipeline and the prefix length represented by its stage;

an obtaining module 703, configured to obtain the first output interface data from the cache when the hash value of each of the N pipelines has been calculated;

the setting module 701 is further configured to delete the first data packet from the first pipeline when the first output interface data is characterized as being used for forwarding the first data packet in the first pipeline, and to add a second data packet to the first pipeline when the second data packet is received.
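The interleaving performed by the prefetch module can be illustrated as a two-phase loop. Python has no memory-prefetch primitive, so the asynchronous prefetch is modeled here by issuing every lookup before any result is consumed; a native implementation would instead issue a hardware prefetch (for example GCC's `__builtin_prefetch`) while the next hash is computed. All names are illustrative:

```python
def one_round(pipelines, hash_fn, memory):
    """One round over N pipelines: phase 1 computes each pipeline's hash and
    issues its (modeled) prefetch; phase 2 consumes the cached results only
    after every hash is done, so memory latency overlaps the hashing work."""
    cache = {}
    for p in pipelines:                      # phase 1: hash + prefetch, no waiting
        key = hash_fn(p["dst"], p["stage_len"])
        p["key"] = key
        if key in memory:
            cache[key] = memory[key]         # models the asynchronous prefetch
    results = []
    for p in pipelines:                      # phase 2: data is (ideally) in cache
        results.append(cache.get(p["key"]))  # None models a hash-table miss
    return results
```

The point of the two phases is that no pipeline ever stalls on its own memory access: by the time phase 2 reads a result, the fetch issued in phase 1 has had the duration of the remaining hash computations to complete.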

Optionally, the apparatus further comprises:

an updating module, configured to update the stage of the first pipeline to the right child node of the root node and update the output interface information in the first pipeline to the first output interface data when the first output interface data is not characterized as being used for forwarding the first data packet in the first pipeline.

Optionally, the apparatus further comprises: a determination module configured to:

judging whether the right child node of the root node in the decision tree is empty or not;

if so, determining that the first output interface data is characterized as being used for forwarding a first data message in a first pipeline;

if not, determining that the first output interface data is not characterized as being used for forwarding the first data message in the first pipeline.

Optionally, the apparatus further comprises:

an updating module, configured to update the stage of the first pipeline to the left child node of the root node if no first output interface data corresponding to the first hash value is obtained from the cache.

Optionally, the apparatus further comprises:

an updating module, configured to update the stage of the first pipeline to matching complete when the first output interface data is characterized as being used for forwarding the first data packet in the first pipeline;

the setting module 701 is specifically configured to, after the stages of the N pipelines have each been updated once, delete the first data packet in any pipeline whose stage is matching complete from that pipeline.

The embodiment of the present application further provides a network device, as shown in fig. 8, including a processor 801, a communication interface 802, a memory 803 and a communication bus 804, where the processor 801, the communication interface 802 and the memory 803 communicate with one another through the communication bus 804,

a memory 803 for storing a computer program;

the processor 801 is configured to implement the method steps in the above-described method embodiments when executing the program stored in the memory 803.

The communication bus mentioned in the network device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.

The communication interface is used for communication between the network device and other devices.

The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.

The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.

In another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above message matching methods.

In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the message matching methods of the above embodiments.

In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may take, in whole or in part, the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
