Message processing method and device

Document No.: 1601351    Published: 2020-01-07

Reading note: This technology, "Message processing method and device" (一种报文处理方法及其装置), was designed and created by 刘南雁, 范杰, and 赵强 on 2019-09-26. Its main content is as follows: The invention relates to a packet processing method comprising: step S1: receiving a packet and buffering the received packet; step S2: a processing unit processes the packet; step S3: buffering the processed packet for transmission to an external network. The invention provides a preprocessing buffer queue that preprocesses packets in advance, so that packets can be buffered directionally according to factors such as their type and time without affecting packet processing speed, and each queue corresponds directly to a processor set. The processor sets can be dynamically adjusted, within limits, according to the number of packets waiting in the pending queue, which greatly improves processing efficiency.

1. A packet processing method, the method comprising:

step S1: receiving a packet and buffering the received packet;

step S2: processing the packet by a processing unit;

step S3: buffering the processed packet for transmission to an external network.

2. The packet processing method according to claim 1, wherein a first pending queue and a second pending queue are used for ingress buffering; the first pending queue temporarily stores packets, and packets are stored into the second sub-queues of the second pending queue after preprocessing.

3. The packet processing method according to claim 2, wherein each second sub-queue is associated with a processing type of the packets.

4. The packet processing method according to claim 3, wherein step S1 specifically comprises: storing the received packet into the first pending queue; preprocessing the packet; and storing the preprocessed packet into the second pending queue or an exit queue.

5. The packet processing method according to claim 4, wherein preprocessing the packet specifically comprises: determining the type of the packet; if the packet does not require local processing, sending it directly to a port of the network node; otherwise, predicting its processing type based on the input parameters of a classification model and sending it to the corresponding second sub-queue of the second pending queue.

6. The packet processing method according to claim 5, wherein the packets in the first pending queue are preprocessed one by one.

7. A packet processing apparatus configured to perform the packet processing method of any one of claims 1 to 6, the apparatus comprising: a plurality of processing units, a main control unit, a first pending queue, a second pending queue, a fast routing unit, an exit queue, and ports; wherein the second pending queue comprises a plurality of second sub-queues;

the processing units are configured to process packets; the processing units are divided into a plurality of processing unit sets, each second sub-queue corresponds to one processing unit set, and each processing unit set processes the packets in its corresponding second sub-queue; a processing unit set has higher processing capacity for the packet types in its corresponding second sub-queue than for the packet types in the other second sub-queues;

the exit queue performs egress buffering of processed packets, so that processed packets no longer occupy the internal or shared storage space of the processing units;

the first pending queue is also directly connected to the exit queue, so that received packets can be stored directly into the exit queue;

the processing units are divided into a plurality of processing unit sets; the storage-sharing overhead and communication overhead between processing units differ: the communication and sharing overhead between processing units within a set is relatively small, while that between processing units in different sets is relatively large.

8. The packet processing apparatus according to claim 7, wherein the division is dynamic; specifically, the number of processing units in each processing unit set is set according to the number of pending packets in the corresponding second sub-queue, so that the number of processing units matches the number of pending packets.

9. The packet processing apparatus according to claim 8, wherein the length of a second sub-queue in the second pending queue is dynamically adjustable.

[ technical field ]

The present invention belongs to the field of communication technology, and in particular relates to a packet processing method and apparatus.

[ background of the invention ]

Current communication networks contain many complex network devices, such as routers, network managers, switches, firewalls, and various servers. These devices support various network protocols, realizing interconnection and intercommunication between network elements. Establishing a network tunnel over the Video Network is completed under the control of signaling packets. At present, a Video Network terminal receives a signaling packet from a Video Network node server in kernel mode, then passes it to the terminal in user mode by data copying through a socket interface, and the terminal parses the signaling packet in user mode. With the rapid development of network technology, the demand for information content and data volume keeps growing and speeds keep increasing; when data packets are transmitted through the network, instantaneous congestion frequently occurs in network devices. While a packet is in transit, the network must provide not only simple forwarding but also deep-processing functions such as packet checking, comparison, and lookup. This places higher requirements on network devices: congestion at a device may cause packet loss, which degrades network transmission performance, reduces system throughput, and ultimately greatly harms the user experience. Because system resources are limited, resource allocation must be more reasonable and more precise; unoptimized allocation may overload a particular task and cause unnecessary packet loss.
Because hardware resources are limited, resource allocation often needs to be more reasonable and more precise; unoptimized allocation may overload a task and cause unnecessary packet loss, and conflicts may arise when different execution units access the I/O queues at the same time. As the contradiction grows between the pace of hardware development and users' demands on network data, how to improve packet processing speed with limited hardware resources is a problem that must be solved. The present invention provides a preprocessing buffer queue that preprocesses packets in advance, so that packets are buffered directionally according to factors such as their type and time without affecting packet processing speed, with each queue corresponding directly to a processor set. Through this correspondence between queues and sets, the processor sets can be dynamically adjusted, within limits, according to the number of packets waiting in the pending queues, guaranteeing the processing efficiency of each packet type while maintaining processor utilization. Fast port mapping through a fast routing table during processing further greatly improves sending efficiency.

[ summary of the invention ]

In order to solve the above problems in the prior art, the present invention provides a packet processing method, the method comprising:

step S1: receiving a packet and buffering the received packet;

step S2: processing the packet by a processing unit;

step S3: buffering the processed packet for transmission to an external network.

Further, a first pending queue and a second pending queue are used for ingress buffering; the first pending queue temporarily stores packets, which are stored into the second sub-queues of the second pending queue after preprocessing.

Further, each second sub-queue is associated with a processing type of the packets.

Further, step S1 specifically comprises: storing the received packet into the first pending queue; preprocessing the packet; and storing the preprocessed packet into the second pending queue or an exit queue.

Further, preprocessing the packet specifically comprises: determining the type of the packet; if the packet does not require local processing, sending it directly to a port of the network node; otherwise, predicting its processing type based on the input parameters of a classification model and sending it to the corresponding second sub-queue of the second pending queue.

Further, the packets in the first pending queue are preprocessed one by one.

A packet processing apparatus, the apparatus comprising: a plurality of processing units, a main control unit, a first pending queue, a second pending queue, a fast routing unit, an exit queue, and ports; wherein the second pending queue comprises a plurality of second sub-queues;

the processing units are configured to process packets; the processing units are divided into a plurality of processing unit sets, each second sub-queue corresponds to one processing unit set, and each set processes the packets in its corresponding second sub-queue; a processing unit set has higher processing capacity for the packet types in its corresponding second sub-queue than for the packet types in the other second sub-queues;

the exit queue performs egress buffering of processed packets, so that processed packets no longer occupy the internal or shared storage space of the processing units;

the first pending queue is also directly connected to the exit queue, so that received packets can be stored directly into the exit queue;

the processing units are divided into a plurality of processing unit sets; the storage-sharing overhead and communication overhead between processing units differ: the communication and sharing overhead between units within a set is relatively small, while that between units in different sets is relatively large.

Further, the division is dynamic; specifically, the number of processing units in each processing unit set is set according to the number of pending packets in the corresponding second sub-queue, so that the number of processing units matches the number of pending packets.

Further, the length of a second sub-queue in the second pending queue is dynamically adjustable.

The beneficial effects of the invention include: a preprocessing buffer queue is provided to preprocess packets in advance, so that packets are buffered directionally according to factors such as their type and time without affecting packet processing speed, with each queue corresponding directly to a processor set; through this correspondence between queues and sets, the processor sets can be dynamically adjusted, within limits, according to the number of packets waiting in the pending queues, guaranteeing the processing efficiency of each packet type while maintaining processor utilization; and fast port mapping through the fast routing table during processing greatly improves sending efficiency.

[ description of the drawings ]

The accompanying drawings, which are included to provide a further understanding of the invention, are incorporated in and constitute a part of this application, but are not to be construed as limiting the invention. In the drawings:

Fig. 1 is a schematic diagram of a packet processing method according to the present invention.

Fig. 2 is a schematic diagram of a packet processing apparatus according to the present invention.

[ detailed description of the embodiments ]

The present invention will now be described in detail with reference to the drawings and specific embodiments. The exemplary embodiments and descriptions are provided only to illustrate the present invention and are not to be construed as limiting it.

The packet processing method applied in the present invention is now described in detail. As shown in Fig. 1, the method comprises:

step S1: receive a packet and buffer the received packet. Specifically: store the received packet into a first pending queue; preprocess the packet, and store the preprocessed packet into a second pending queue or an exit queue.

The network node receives packets and performs ingress buffering, processing, and egress buffering on them. The network node is provided with a plurality of processing units, a main control unit, a first pending queue, a second pending queue, a fast routing unit, an exit queue, and ports, wherein the second pending queue comprises a plurality of second sub-queues.

Preferably, the main control unit is one or more of the processing units; the main control unit performs control operations such as packet preprocessing.

The fast routing unit sends packets from the exit queue to the ports of the network node; the ports connect to the external network, and packets are sent to the external network through them.

The first pending queue and the second pending queue are used for ingress buffering. The first pending queue temporarily stores packets; after preprocessing, packets are stored into the second sub-queues of the second pending queue. Each second sub-queue is associated with a processing type and stores packets of that type; the second sub-queues are coupled with the processing units so that packets are delivered to the corresponding processor set.
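The queue layout just described can be sketched as follows. This is a minimal illustration, not the patented implementation; all class, attribute, and type names (`NetworkNode`, `first_pending`, the processing-type labels) are assumptions for the example.

```python
from collections import deque

class NetworkNode:
    """Sketch of the node's buffering structure: one first pending queue
    for ingress, one second sub-queue per processing type, and one exit
    queue for egress (names are illustrative)."""

    def __init__(self, processing_types):
        self.first_pending = deque()                       # temporary ingress store
        self.second_pending = {t: deque() for t in processing_types}
        self.exit_queue = deque()                          # egress buffer

    def receive(self, packet):
        # Step S1, first half: buffer the received packet.
        self.first_pending.append(packet)

node = NetworkNode(["compute", "storage", "stream"])
node.receive({"id": 1})
```

Keying the second sub-queues by processing type is what lets each sub-queue later map one-to-one onto a processing unit set.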

The processing units process the packets. The processing units are divided into a plurality of processing unit sets; each second sub-queue corresponds to one processing unit set, which processes the packets in that sub-queue. A processing unit set has higher processing capacity for the packet types in its corresponding second sub-queue than for the packet types in the other second sub-queues; that is, the units in a set can also process other packet types, but handle their corresponding type better. In the prior art, received packets are often simply handled in a k-in-k-out fashion, with the number and length of queues fixed and unable to adapt to packet types or to the temporal pattern of packet arrival, so the resource utilization of packet processing is very low. With the present method, the degree of packet congestion can be monitored directly from the available queue length so as to dynamically adjust the corresponding processing unit set; such dynamic adjustment may be based on a time period and/or on real-time parameters.

The exit queue performs egress buffering of processed packets, so that processed packets no longer occupy the internal or shared storage space of the processing units, avoiding the loss and congestion that might otherwise result.

Preferably, the first pending queue is also directly connected to the exit queue, so that received packets can be stored directly into the exit queue.

Preprocessing a packet specifically comprises: determining the type of the packet; if the packet does not require local processing, sending it directly to a port of the network node; otherwise, predicting its processing type based on the input parameters of a classification model and sending it to the corresponding second sub-queue of the second pending queue. Preferably, the input parameters are the packet's type, size, send time, and so on. The type of a network packet is closely related to its time of occurrence, but prior-art packet processing does not consider this factor and judges only the non-temporal attributes of the packet at the current moment, which limits processing efficiency; taking time information into account greatly improves the efficiency of processing and classification. The processing types include: no processing needed, computation-biased processing, storage-biased processing, streaming processing, a specific type of processing flow, and so on. That is, the processing type is not determined directly from the packet type; the packet type alone determines only whether further processing is needed.
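The preprocessing branch above (forward directly, or predict a processing type from type, size, and send time) might be sketched like this. The packet fields and the `classify` callable are illustrative stand-ins for the trained classification model, not names from the source.

```python
def preprocess(packet, classify, second_pending, exit_queue):
    """Route one packet: no local processing needed -> exit queue;
    otherwise predict its processing type and enqueue it in the
    matching second sub-queue. `classify` stands in for the model."""
    if not packet["needs_processing"]:
        exit_queue.append(packet)          # forwarded unchanged via a port
        return "exit"
    # Predict the processing type from type, size, and send time.
    features = (packet["type"], packet["size"], packet["send_time"])
    ptype = classify(features)
    second_pending[ptype].append(packet)
    return ptype

second = {"compute": [], "storage": []}
exit_q = []
pkt1 = {"needs_processing": False}
pkt2 = {"needs_processing": True, "type": "video", "size": 1500, "send_time": 10}
r1 = preprocess(pkt1, lambda f: "compute", second, exit_q)
r2 = preprocess(pkt2, lambda f: "compute", second, exit_q)
```

The early return for no-processing packets is what keeps preprocessing from slowing down pure forwarding traffic.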

Preferably, the packets in the first pending queue are preprocessed one by one: packet identifiers are obtained in turn, and the packet type is judged from the identifier. Packet types that need no processing are stored directly into the exit queue; for the other packet types, the processing type is predicted, and the packet is stored into the corresponding second sub-queue of the second pending queue according to the predicted processing type. The packet's identifier contains a part indicating its type; this type-indicating part is extensible, and packet types and processing types are not in one-to-one correspondence.

Predicting the processing type based on the input parameters specifically comprises: using one or more of the packet's type, size, and/or send time as input parameters, feeding them to a coarse classification model to obtain a binary result indicating whether the processing classification is the first processing classification or not. Before use, the coarse classification model is trained with sample data containing input parameters and the corresponding coarse classification results, so that it acquires a certain classification capability. Preferably, the output of the coarse classification model is zero or one, representing the first processing classification and the non-first processing classification respectively. When the coarse model's result is not the first processing classification, the input parameters are fed into a fine classification model to obtain a second processing classification, which indicates one of a plurality of processing types; these processing types, together with the first processing classification, add up to the number of second sub-queues. The first processing classification is the processing classification accounting for the largest proportion of the packets. Although packets come in many types, the dominant processing classification is separated out by the coarse division, which is very fast; chaining the coarse and fine classification models in series greatly improves classification efficiency, and the cooperation of the two-stage coarse-fine classification with the largest-proportion processing classification greatly improves the classification effect. Preferably, the first processing classification is the atypical classification: typical packets are classified into specific second sub-queues, while general packets without typical characteristics are not specially classified and are screened directly into a general processing unit set by the coarse classification.
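The coarse-then-fine cascade can be sketched in a few lines. Both models here are caller-supplied stand-ins (simple lambdas in the example) for the trained coarse and fine classifiers; the label strings are assumptions.

```python
def cascade_classify(features, coarse_model, fine_model, first_class="atypical"):
    """Two-stage cascade: a cheap coarse model outputs 0/1 (first
    classification vs. not); only non-first packets pay the cost of
    the fine multi-class model."""
    if coarse_model(features) == 0:    # 0 = first (dominant) processing classification
        return first_class
    return fine_model(features)        # one of the remaining processing types

# Stand-in models: treat small packets as the dominant "atypical" class.
coarse = lambda f: 0 if f[1] < 100 else 1   # f = (type, size, send_time)
fine = lambda f: "compute"
```

The efficiency argument in the text maps directly onto the short-circuit: since the first classification covers the largest share of traffic, most packets never reach `fine_model`.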

Storing the preprocessed packet into the second pending queue or the exit queue specifically comprises: determining the packet type; storing packets that need no processing directly into the exit queue; for the other packet types, predicting the processing type through the classification model and storing the packet into the corresponding second sub-queue of the second pending queue according to the predicted processing type, to await the next step of processing.

step S2: the processing unit processes the packet. Specifically: when a processing unit is in an idle state, it fetches a packet from the second pending queue and processes it. The processing units within a set process packets independently or cooperatively; when idle, a processing unit goes directly to the second sub-queue to fetch the packet to be processed, without being controlled by the main control unit.
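The pull model of step S2 can be sketched sequentially; a real node would run one such loop per processing unit, concurrently, but the point here is only that an idle unit drains its set's sub-queue directly, with no main-control mediation. All names are illustrative.

```python
from collections import deque

def run_unit(sub_queue, process, results):
    """One processing unit's loop: while its set's sub-queue has pending
    packets, pull the next one and process it (no central dispatcher)."""
    while sub_queue:
        packet = sub_queue.popleft()
        results.append(process(packet))

q = deque([1, 2, 3])     # pending packets of this set's processing type
out = []
run_unit(q, lambda p: p * 2, out)
```

Because each unit pulls work itself, load within a set balances automatically: whichever unit goes idle first takes the next packet.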

The network node comprises a plurality of processing units that share storage space and are communicatively connected; the processing units are partly identical and partly different. That is, the processing units appear heterogeneous as a whole but locally consist of many identical units; for example, 4 processing units, two of type A and two of type B.

The processing units are divided into a plurality of processing unit sets. The storage-sharing overhead and communication overhead between processing units differ: the communication and sharing overhead between units within a set is relatively small, while that between units in different sets is relatively large.

The division is dynamic. Specifically, the number of processing units in each processing unit set is set according to the number of pending packets in the corresponding second sub-queue, so that the number of units matches the number of pending packets. Correspondingly, the length of a second sub-queue in the second pending queue can be dynamically adjusted. Preferably, the sub-queue length is adjusted based on the temporal characteristics of the packets and the number of packets currently pending in the sub-queue; the temporal characteristic of a packet describes how arrivals of the sub-queue's packet type at the network node vary over time.
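One way the dynamic division could work is a proportional allocation: give each set a share of the units matching its sub-queue backlog. The proportional rule is an assumption for illustration; the source only requires the unit count to match the pending-packet count.

```python
def repartition(backlogs, total_units):
    """Allocate processing units to each set in proportion to its
    sub-queue backlog, at least one unit per set (sketch only)."""
    total = sum(backlogs.values())
    if total == 0:
        base = total_units // len(backlogs)       # idle node: even split
        return {k: base for k in backlogs}
    alloc = {k: max(1, round(total_units * v / total))
             for k, v in backlogs.items()}
    # Absorb rounding drift in the set with the largest backlog.
    drift = total_units - sum(alloc.values())
    busiest = max(backlogs, key=backlogs.get)
    alloc[busiest] = max(1, alloc[busiest] + drift)
    return alloc

alloc = repartition({"compute": 90, "storage": 10}, 10)
```

Run periodically (or on a real-time congestion signal, as the text allows), this keeps heavily loaded sub-queues paired with larger processing unit sets.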

Different processing units have different architectures and different processing capabilities for different packet types; processing units within the same processing unit set are substantially identical. The corresponding processing units can be organized according to packet type: a set mainly serves the packet type its processors handle best. When that packet type has few packets while other types have a large backlog of pending packets, the processing units can be dynamically re-divided so that units from the set are moved into other processing unit sets. Although a moved unit has weaker processing capability for the other packet types, it can still relieve the backlog of those types. The division principle is that the communication overhead and storage overhead between the newly added unit and the other units should remain within an acceptable range wherever possible, for example within a set threshold.
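The threshold rule at the end of the paragraph might be checked like this before a unit is borrowed by another set. The cost functions, the numeric unit IDs, and the combined-overhead criterion are all assumptions for the sketch.

```python
def can_borrow(unit, target_set, comm_cost, share_cost, threshold):
    """A unit may join another set only if its communication plus
    storage-sharing overhead to every unit already in that set stays
    within the configured threshold (all names illustrative)."""
    return all(comm_cost(unit, u) + share_cost(unit, u) <= threshold
               for u in target_set)

# Stand-in cost model: overhead grows with "distance" between unit IDs.
cost = lambda a, b: abs(a - b)
ok = can_borrow(5, [4, 6], cost, cost, threshold=4)
far = can_borrow(10, [1], cost, cost, threshold=4)
```

A repartition step would call such a check before actually moving a unit, so that borrowing never creates a set whose internal overheads are as large as cross-set overheads.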

Preferably, the dynamic division is performed on top of a basic division. Specifically, the processing units are initially divided into a plurality of processing unit sets according to their processing capabilities and the architecture of the network unit; on this basis, the dynamic division then adjusts the processing units contained in each set according to the number of pending packets in the second pending queue, so that the number of units in each set matches the number of pending packets corresponding to it. The architecture of the network unit includes its communication mode and storage-sharing mode.

Preferably, the processing unit sets include a general processing unit set whose units have equal processing capability for all packet types; during dynamic division, units are preferentially moved into and out of the general set.

Preferably, packets with the atypical processing classification are delivered to the general processing unit set for processing.

Fetching and processing a packet from the buffer specifically comprises: fetching the packet from the second sub-queue of the second pending queue corresponding to the processing unit set to which the processing unit belongs, and processing it.

A processing unit set has higher processing capacity for the packets in its corresponding second sub-queue than for the packets in the other second sub-queues.

step S3: buffer the processed packet for transmission to an external network. Specifically: after a processing unit finishes processing a packet, it stores the packet into the exit queue; packets in the exit queue are sent to different ports by querying the fast routing unit, and thus out to the external network.

Because the network node has multiple ports, packets can be sent in parallel. Thanks to the exit-queue buffering, a packet no longer occupies the processing unit's storage space after processing. The fast routing unit selects the sending port for each packet based on the sending information obtained while processing the packet, so that delivery to the sending target is most efficient.
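A minimal shape for the fast routing unit follows: a table filled in during processing and consulted on egress. The class and method names, and the destination-keyed mapping, are assumptions; the source does not specify the table's key.

```python
class FastRoutingUnit:
    """Sketch of the fast routing table: populated while a packet is
    being processed, then queried to pick the egress port."""

    def __init__(self):
        self.table = {}

    def fill(self, destination, port):
        # Called during processing, once sending information is known.
        self.table[destination] = port

    def select_port(self, packet):
        # Called on egress; None means no route has been filled in yet.
        return self.table.get(packet["destination"])

fru = FastRoutingUnit()
fru.fill("10.0.0.5", "port2")
```

Filling the table during processing (rather than on egress) is what removes the lookup cost from the sending path.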

Preferably, the fast routing unit is populated while the processing unit processes the packet.

Preferably, the sending of packets in the exit queue is out of order: when the port selected for a packet is in an unavailable state, the current packet is skipped so that routing and sending of subsequent packets can continue. To prevent some packets from being skipped forever, the number of skips can be limited, or skipped packets can be placed in a temporary cache whose packets are given the highest processing priority so that they are eventually sent.
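The out-of-order drain with a skip cap can be sketched as below. The packet fields, the `max_skips` default, and parking skipped packets in a caller-drained temporary cache are illustrative choices, not details from the source.

```python
from collections import deque

def drain_exit_queue(exit_queue, port_available, send, max_skips=3):
    """Send packets out of order: a packet whose port is unavailable is
    re-queued so later packets can go first; after max_skips skips it is
    parked in a temporary cache for the caller to drain with top priority."""
    pending = deque(exit_queue)
    temp_cache = []
    skips = {}
    while pending:
        packet = pending.popleft()
        if port_available(packet["port"]):
            send(packet)
        else:
            n = skips[packet["id"]] = skips.get(packet["id"], 0) + 1
            if n >= max_skips:
                temp_cache.append(packet)   # parked; highest priority later
            else:
                pending.append(packet)      # retry after later packets
    return temp_cache

sent = []
pkts = [{"id": 1, "port": "p1"}, {"id": 2, "port": "p2"}, {"id": 3, "port": "p1"}]
parked = drain_exit_queue(pkts, lambda p: p == "p1", sent.append, max_skips=2)
```

The skip counter guarantees termination: every packet is either sent or parked after at most `max_skips` attempts, so one dead port cannot stall the whole exit queue.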

the above description is only a preferred embodiment of the present invention, and all equivalent changes or modifications of the structure, characteristics and principles described in the present invention are included in the scope of the present invention.
