Data information processing method and server

Document No.: 1819815  Publication date: 2021-11-09

Reading note: This invention, "Data information processing method and server", was designed and created by Wu Qingbiao and Ye Zhigang on 2021-08-03. Its main content is as follows: The invention provides a data information processing method and a server, wherein the method comprises the following steps: the network card stores a first data segment in a data segment cache region, the first data segment comprising first encapsulated data and initial data; a first process parses the first encapsulated data to obtain the first vni therein and stores it in a reserved segment cache region; a second process extracts the initial data and the first vni from the data segment cache region and the reserved segment cache region respectively, and stores the nat inner layer data header information and second vni determined therefrom in the reserved segment cache region; and the first process extracts the second vni from the reserved segment cache region and stores the second encapsulated data determined therefrom in the reserved segment cache region, so that the initial data, the nat inner layer data header information and the second encapsulated data together form a second data segment. By opening up a reserved segment cache region to hold this information, the scheme processes the data information while avoiding damage to the first data segment, thereby improving the working efficiency of the server.

1. A data information processing method, applied to a server, wherein the server comprises a network card, a first process and a second process, the network card comprises a buffer, the buffer comprises a data segment buffer and a reserved segment buffer, and the method comprises the following steps:

the network card receives a first data segment and stores the first data segment in the data segment buffer, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises a first vni;

the first process parses the first encapsulated data from the first data segment, obtains the first vni, and stores the first vni in the reserved segment buffer;

the second process extracts the initial data from the data segment buffer, extracts the first vni from the reserved segment buffer, and determines corresponding nat inner layer data header information and a second vni according to the initial data and the first vni;

the second process stores the nat inner layer data header information and the second vni in the reserved segment buffer;

and the first process extracts the second vni from the reserved segment buffer, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment.

2. The method according to claim 1, wherein the step of the second process extracting the initial data from the data segment buffer, extracting the first vni from the reserved segment buffer, and determining corresponding nat inner layer header information and second vni according to the initial data and the first vni includes:

the second process extracts an initial inner-layer data header from the initial data and extracts the first vni from the reserved segment buffer;

the second process searches corresponding nat IP, nat PORT and second vni in the nat conversion table and the vni conversion table respectively according to the initial inner layer data header and the first vni;

and the second process determines nat inner layer data header information according to the nat IP and the nat PORT.

3. The method according to claim 1, wherein the step of the first process parsing the first encapsulated data from the first data segment and obtaining the first vni, and saving the first vni to the reserved segment buffer is preceded by the step of:

the network card sends a pointer of the first data segment to the first process;

the first process accesses the first data segment according to the pointer of the first data segment.

4. The method according to claim 1, wherein the steps of the second process extracting the initial data from the data segment buffer, extracting the first vni from the reserved segment buffer, and determining the corresponding nat inner layer header information and second vni according to the initial data and the first vni are preceded by:

the network card sends a pointer of the initial data to the second process;

the second process accesses the initial data and the first vni according to the pointer of the initial data.

5. The method according to claim 1, wherein the step of the first process extracting the second vni from the reserved segment buffer, determining second encapsulated data according to the second vni, and storing the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment, is followed by the step of:

the network card determines a corresponding sending tunnel according to the second data segment;

and the network card sends the second data segment according to the sending tunnel.

6. The method according to claim 1, wherein the server further includes a third process, and the first encapsulated data further includes a source IP address and a destination IP address; after the step in which the network card receives a first data segment and stores the first data segment in the data segment buffer, wherein the first data segment includes first encapsulated data and initial data and the first encapsulated data includes the first vni, the method includes:

the first process parses the first encapsulated data from the first data segment, obtains the source IP address and the destination IP address, and stores the source IP address and the destination IP address in the reserved segment buffer;

the third process extracts the source IP address and the destination IP address from the reserved segment buffer and judges, according to the source IP address and the destination IP address, whether the data header of the first data segment is legal;

if the data header of the first data segment is legal, the third process processes the initial data;

and if the data header of the first data segment is not legal, the third process discards the initial data.

7. The method according to claim 1, wherein the server further includes a fourth process, the network card further includes a mirror buffer, and the mirror buffer includes a mirror data segment buffer and a mirror reserved segment buffer; after the step in which the network card receives a first data segment and stores the first data segment in the data segment buffer, wherein the first data segment includes first encapsulated data and initial data and the first encapsulated data includes the first vni, the method includes:

the first process parses the first encapsulated data from the first data segment, obtains the first vni, and judges, according to the first vni, whether the user of the first data segment has enabled a virus detection service;

if the user of the first data segment has enabled the virus detection service, the first process stores the first data segment in the mirror data segment buffer and stores the first vni in the mirror reserved segment buffer;

and the fourth process extracts the first data segment and the first vni from the mirror buffer and generates a corresponding virus detection report according to the rules of the virus detection service.

8. A server, characterized by comprising a network card, a first process and a second process, wherein the network card comprises a buffer, and the buffer comprises a data segment buffer and a reserved segment buffer;

the network card receives a first data segment and stores the first data segment in the data segment buffer, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises a first vni;

the first process parses the first encapsulated data from the first data segment, obtains the first vni, and stores the first vni in the reserved segment buffer;

the second process extracts the initial data from the data segment buffer, extracts the first vni from the reserved segment buffer, and determines corresponding nat inner layer data header information and a second vni according to the initial data and the first vni;

the second process stores the nat inner layer data header information and the second vni in the reserved segment buffer;

and the first process extracts the second vni from the reserved segment buffer, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment.

9. The server according to claim 8, wherein the second process is further configured to extract an initial inner layer data header from the initial data and extract the first vni from the reserved segment buffer; and

the second process is further configured to search for the corresponding nat IP and second vni in the nat conversion table and the vni conversion table, respectively, according to the initial inner layer data header and the first vni.

10. The server of claim 8, wherein the network card is further configured to send a pointer to the first data segment to the first process; and

the first process is further configured to access the first data segment according to the pointer of the first data segment.

Technical Field

The invention relates to the technical field of communication, in particular to a data information processing method and a server.

Background

VXLAN (Virtual eXtensible Local Area Network) encapsulates the data packets sent by a virtual machine in UDP, uses the IP/MAC addresses of the physical network as the outer-layer encapsulation, and identifies each virtual network with a parameter carried in the encapsulation, thereby greatly reducing the MAC address table requirements that a layer-2 network must satisfy.

In the same server, multiple processes need to execute different instructions on a data segment held by the network card in order to obtain multiple corresponding pieces of target information, and one process cannot directly obtain the target information produced by another process. As a result, when at least two processes each need target information produced by the same instruction, that instruction has to be executed by each of those processes, that is, the same instruction in the server is executed at least twice, which lowers the working efficiency of the server.

Therefore, it is necessary to provide a method of data information processing and a server that can improve the work efficiency of the server.

Disclosure of Invention

The embodiment of the invention provides a data information processing method and a server. A reserved segment cache region is opened up in the cache region of the network card; a first process strips a first vni out of a first data segment and stores it in the reserved segment cache region, and a second process can directly use the shared first vni in the reserved segment cache region to determine the corresponding nat information and a second vni. This solves the current problem that the second process has to repeat the same operation already executed by the first process in order to obtain the corresponding information, which lowers the working efficiency of the server.

The embodiment of the invention provides a data information processing method, which is applied to a server, wherein the server comprises a network card, a first process and a second process, the network card comprises a cache region, the cache region comprises a data segment cache region and a reserved segment cache region, and the data information processing method comprises the following steps:

the network card receives a first data segment, and stores the first data segment to the data segment cache region, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises first vni;

the first process parses the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni in the reserved segment cache region;

the second process extracts the initial data from the data segment cache region, extracts the first vni from the reserved segment cache region, and determines corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

the second process stores the nat inner layer data header information and the second vni in the reserved segment cache region;

and the first process extracts the second vni from the reserved segment cache region, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment cache region, so that the initial data in the data segment cache region, the nat inner layer data header information in the reserved segment cache region, and the second encapsulated data together form a second data segment.

In an embodiment, the step of the second process extracting the initial data from the data segment buffer, extracting the first vni from the reserved segment buffer, and determining corresponding nat inner layer data header information and second vni according to the initial data and the first vni includes:

the second process extracts an initial inner-layer data header from the initial data and extracts the first vni from the reserved segment buffer;

the second process searches corresponding nat IP, nat PORT and second vni in the nat conversion table and the vni conversion table respectively according to the initial inner layer data header and the first vni;

and the second process determines nat inner layer data header information according to the nat IP and the nat PORT.

In an embodiment, before the step in which the first process parses the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni in the reserved segment buffer, the method includes:

the network card sends a pointer of the first data segment to the first process;

the first process accesses the first data segment according to the pointer of the first data segment.

In an embodiment, before the steps of extracting, by the second process, the initial data from the data segment buffer, extracting the first vni from the reserved segment buffer, and determining, according to the initial data and the first vni, corresponding nat inner layer data header information and second vni, the method includes:

the network card sends a pointer of the initial data to the second process;

and the second process determines the reserved segment buffer according to the pointer of the initial data and accesses the initial data and the first vni.

In an embodiment, after the step of extracting, by the first process, the second vni from the reserved segment buffer, determining second encapsulated data according to the second vni, and storing the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment, the method includes:

the network card determines a corresponding sending tunnel according to the second data segment;

and the network card sends the second data segment according to the sending tunnel.

In an embodiment, the server further includes a third process, and the first encapsulated data further includes a source IP address and a destination IP address; after the step in which the network card receives the first data segment and stores the first data segment in the data segment buffer, where the first data segment includes the first encapsulated data and the initial data and the first encapsulated data includes the first vni, the method includes:

the first process analyzes the first encapsulated data from the first data segment, acquires the source IP address and the destination IP address, and stores the source IP address and the destination IP address to the reserved segment cache region;

the third process extracts the source IP address and the destination IP address from the reserved segment cache region and judges, according to the source IP address and the destination IP address, whether the data header of the first data segment is legal;

if the data header of the first data segment is legal, the third process processes the initial data;

and if the data header of the first data segment is not legal, the third process discards the initial data.

In an embodiment, the server further includes a fourth process, and the network card further includes a mirror cache region, where the mirror cache region includes a mirror data segment cache region and a mirror reserved segment cache region; after the step in which the network card receives the first data segment and stores the first data segment in the data segment cache region, where the first data segment includes the first encapsulated data and the initial data and the first encapsulated data includes the first vni, the method includes:

the first process parses the first encapsulated data from the first data segment, acquires the first vni, and judges, according to the first vni, whether the user of the first data segment has enabled a virus detection service;

if the user of the first data segment has enabled the virus detection service, the first process stores the first data segment in the mirror data segment cache region and stores the first vni in the mirror reserved segment cache region;

and the fourth process extracts the first data segment and the first vni from the mirror cache region and generates a corresponding virus detection report according to the rules of the virus detection service.

The embodiment of the invention provides a server, which comprises a network card, a first process and a second process, wherein the network card comprises a cache region, and the cache region comprises a data segment cache region and a reserved segment cache region;

the network card is used for receiving a first data segment and storing the first data segment to the data segment cache region, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises first vni;

the first process is used for analyzing the first encapsulated data from the first data segment, acquiring the first vni, and storing the first vni to the reserved segment buffer area;

the second process is used for extracting the initial data from the data segment cache region, extracting the first vni from the reserved segment cache region, and determining corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

the second process is further configured to store the nat inner layer data header information and the second vni to the reserved segment cache region;

the first process is further configured to extract the second vni from the reserved segment buffer, determine second encapsulated data according to the second vni, and store the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment.

In an embodiment, the second process is further configured to extract an initial inner layer data header from the initial data, and extract the first vni from the reserved segment buffer; and

the second process is further configured to search for the corresponding nat IP and second vni in the nat conversion table and the vni conversion table, respectively, according to the initial inner layer data header and the first vni.

In an embodiment, the network card is further configured to send a pointer of the first data segment to the first process; and

the first process is further configured to determine the data segment cache region according to the pointer of the first data segment, and access the first data segment.

The invention provides a data information processing method and a server. The cache region in the network card comprises a data segment cache region and a reserved segment cache region. A first process parses a first vni out of a first data segment and stores the first vni in the reserved segment cache region; a second process extracts the first vni from the reserved segment cache region and determines the corresponding nat inner layer data header information and a second vni according to the first vni; the second process stores the nat inner layer data header information and the second vni in the reserved segment cache region; and the first process extracts the second vni from the reserved segment cache region to perform the corresponding operation. In this scheme, a reserved segment cache region is opened up in the cache region of the network card, and the first vni, the nat inner layer data header information and the second vni obtained by the processing of the first process and the second process are temporarily stored there, so that the two processes can share the information produced by each other and no operation has to be executed twice. Moreover, while the first process and the second process acquire the corresponding information from the first data segment and perform the corresponding operations, the integrity of the first data segment is still preserved, so that other processes can obtain the information of the first data segment normally and the server does not need to fetch the first data segment from the outside again for them. In summary, this scheme improves the working efficiency of the server.

Drawings

The invention is further illustrated by the following figures. It should be noted that the drawings in the following description are only for illustrating some embodiments of the invention, and that other drawings may be derived from those drawings by a person skilled in the art without inventive effort.

Fig. 1 is a schematic view of a data information processing system according to an embodiment of the present invention;

fig. 2 is a schematic interval diagram of a cache area in a network card according to an embodiment of the present invention;

fig. 3 is a flowchart illustrating a first data information processing method according to an embodiment of the present invention;

fig. 4 is a schematic interval diagram of a buffer area in another network card according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a first data segment according to an embodiment of the present invention;

fig. 6 is a flowchart illustrating a second method for processing data information according to an embodiment of the present invention;

fig. 7 is a flowchart illustrating a third method for processing data information according to an embodiment of the present invention;

fig. 8 is a flowchart illustrating a fourth method for processing data information according to an embodiment of the present invention;

fig. 9 is a flowchart illustrating a fifth method for processing data information according to an embodiment of the present invention;

fig. 10 is a flowchart illustrating a sixth method for processing data information according to an embodiment of the present invention;

fig. 11 is a flowchart illustrating a seventh data information processing method according to an embodiment of the present invention;

fig. 12 is a schematic signaling interaction diagram of a method for processing data information according to an embodiment of the present invention;

fig. 13 is a schematic structural diagram of a first server according to an embodiment of the present invention;

fig. 14 is a schematic structural diagram of a second server according to an embodiment of the present invention;

fig. 15 is a schematic structural diagram of a third server according to an embodiment of the present invention;

fig. 16 is a schematic structural diagram of a fourth server according to an embodiment of the present invention.

Detailed Description

The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The terms "first", "second", and the like in the present invention are used for distinguishing different objects, not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, article, or apparatus.

Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.

The data information processing method provided by the embodiment of the invention may be executed by the server provided by the embodiment of the invention, or by an electronic device integrated with the server, and the server may be implemented in hardware or in software.

Some basic concepts involved in embodiments of the present invention are first described below.

Network card: a piece of computer hardware designed to allow computers to communicate over a computer network, so that users can connect to each other by cable or wirelessly. Each network card has a unique 48-bit serial number, called the MAC address, which is written in a ROM on the network card. A network card is not an autonomous unit, because it has no power source of its own; it must use the power of the computer into which it is inserted and be controlled by that computer. When the network card receives an erroneous frame, it discards the frame without notifying the computer. When the network card receives a correct frame, it notifies the computer with an interrupt and delivers the frame to the network layer of the protocol stack. When the computer wants to send an IP packet, the protocol stack passes the packet to the network card, which assembles it into a frame and sends it to the local area network.

Process: a process is an entity, and each process has its own address space, which generally includes a text region, a data region and a stack region; the text region stores the code executed by the processor, the data region stores the variables and dynamically allocated memory used during process execution, and the stack region stores the instructions and local variables of active procedure calls. A process is also a "program in execution": a program by itself is a lifeless entity, and it becomes an active entity, called a process, only when the processor of the operating system gives it life.

Cache region: a data storage area shared by multiple hardware devices or program processes that run at different speeds or with different priorities. It smooths the speed difference between a high-speed device and a low-speed device by temporarily storing data; frequently accessed data can be placed in the buffer so that accesses to the low-speed device are reduced and the efficiency of the system is improved.

The embodiment of the invention provides a data information processing method and a server. The details will be described below separately.

Referring to fig. 1, fig. 1 is a schematic view of a scenario of a data information processing system according to an embodiment of the present invention, where the data information processing system may include a network card 100, a first process 200, and a second process 300, where the network card includes a buffer area, and the buffer area includes a data segment buffer area and a reserved segment buffer area.

In this embodiment of the application, the buffer area is located in the network card 100. As shown in fig. 2, the network card 100 configures 2048 bytes of space for the buffer area, where each reference number represents the serial number of the corresponding byte in the buffer area; for example, "0" represents the 0th byte and "2047" represents the 2047th byte. The first 1600 bytes form the data segment buffer area for storing data segments, that is, the 0th byte to the 1599th byte are used for storing data segments; furthermore, a reserved segment buffer area up to 256 bytes long, used for storing partial information of the data segment, may be selected from the interval between the 1600th byte and the 2047th byte. It should be noted that, after the data segment buffer and the reserved segment buffer are determined, if the data segment buffer is known, the reserved segment buffer may be determined according to a preset relative position between the first byte of the data segment buffer and the first byte of the reserved segment buffer.

A preset interval may be reserved between the reserved segment buffer and the data segment buffer to clearly separate the data segment from the partial information extracted from it; for example, as shown in fig. 2, the reserved segment buffer may be the interval between the 1663rd byte and the 1918th byte of the buffer. Alternatively, the reserved segment buffer and the data segment buffer may be arranged adjacently, with the reserved segment buffer determined only according to the preset relative position of the first byte of the data segment buffer and the first byte of the reserved segment buffer. Of course, the space allocated to the reserved segment buffer may be chosen reasonably according to the length of the partial information of the data segment to be stored, as in the sketch below.
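By way of illustration only, the following C sketch expresses the layout just described as constants; the macro and function names are hypothetical, and the concrete offsets (2048-byte buffer, 1600-byte data segment area, a 256-byte reserved segment area starting at byte 1663) are taken from fig. 2 of this embodiment rather than being mandatory values.

```c
#include <stddef.h>
#include <stdint.h>

/* Layout of one network-card buffer as described above (values per this
 * embodiment; a real driver would take them from its own configuration). */
#define BUF_SIZE            2048u   /* bytes 0..2047                       */
#define DATA_SEG_OFFSET     0u      /* data segment buffer: bytes 0..1599  */
#define DATA_SEG_SIZE       1600u
#define RESERVED_SEG_OFFSET 1663u   /* reserved segment buffer: 1663..1918 */
#define RESERVED_SEG_SIZE   256u

/* Given the first byte of the data segment buffer, the reserved segment
 * buffer is found at a preset relative position. */
static inline uint8_t *reserved_seg(uint8_t *data_seg_base)
{
    return data_seg_base + (RESERVED_SEG_OFFSET - DATA_SEG_OFFSET);
}
```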

In this embodiment of the application, the network card 100 is mainly configured to receive a first data segment and store the first data segment in the data segment buffer, where the first data segment includes first encapsulated data and initial data, and the first encapsulated data includes the first vni; the first process is mainly used for parsing the first encapsulated data from the first data segment, acquiring the first vni, and storing the first vni in the reserved segment buffer; the second process is mainly used for extracting the initial data from the data segment buffer, extracting the first vni from the reserved segment buffer, determining the corresponding nat inner layer data header information and second vni according to the initial data and the first vni, and storing the nat inner layer data header information and the second vni in the reserved segment buffer; the first process is further configured to extract the second vni from the reserved segment buffer, determine second encapsulated data according to the second vni, and store the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment.

In this embodiment, the data information processing system may be included in a server, that is, the network card 100, the first process 200, and the second process 300 may all be included in a server. The server may be an independent server, or may be a server network or a server cluster composed of servers, for example, the server includes but is not limited to a computer, a network host, a single network server, a plurality of network server sets, or a cloud server composed of a plurality of servers. The cloud server is composed of a large number of computers or network servers based on cloud computing.

In this embodiment, the network card 100 may communicate both with the outside world and with the processes inside the server. For example, the network card 100 may receive data segments from, or send data segments to, the outside world, and it may also send the required data segment pointers to the processes inside the server. The first process 200 may decapsulate a data segment according to a data segment pointer and may also encapsulate a data segment according to the corresponding information; whether decapsulating or encapsulating, it may store the obtained header-related information in an area different from the area where the data segment is located. The second process 300 may obtain the header-related information from the corresponding area, obtain the corresponding information according to the corresponding mapping rules, and store that information in an area different from the area where the data segment is located.

Further, the server may include a plurality of physical ports and a plurality of virtual ports. The physical ports may be included in the network card 100 and are used to receive a data segment sent by a terminal or a bras (broadband access server), or to send a data segment to a terminal or a bras. The network card 100, the first process 200 and the second process 300 may communicate with one another through the plurality of virtual ports. As shown in fig. 1, for example, when one of the physical ports of the network card 100 receives a data segment, the network card driver may send a "data segment pointer" to the first process 200 or the second process 300 and notify it to process the data segment; the first process 200 and the second process 300 may each send a "decapsulation/encapsulation task completion instruction" and a "data segment information processing task completion instruction" to different virtual ports through the network card driver, so as to indicate that the corresponding tasks have been completed; corresponding messages may be received and sent between different virtual ports through the network card driver; and the first process 200 may also notify the other physical port to send the data segment to the outside of the server.

In the embodiment of the present application, the terminal may be a general-purpose computer device or a special-purpose computer device. In a specific implementation, the terminal may be a desktop, a laptop, a network server, a Personal Digital Assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, and the like, and the embodiment does not limit the type of the terminal.

Those skilled in the art will understand that the application environment shown in fig. 1 is only one application scenario related to the present embodiment, and does not constitute a limitation on the application scenario of the present embodiment, and that other application environments may further include more processes than those shown in fig. 1, for example, only 2 processes are shown in fig. 1, and it is understood that the data information processing system may further include one or more other processes that can access the network card 100, which is not limited herein.

It should be noted that the scenario diagram of data information processing shown in fig. 1 is only an example, and the system and the scenario of data information processing described in the embodiment of the present invention are for more clearly illustrating the technical solution of the embodiment of the present invention, and do not form a limitation on the technical solution provided in the embodiment of the present invention.

The embodiment of the invention provides a data information processing method, which is executed by the above server. The server comprises a network card, a first process and a second process, the network card comprises a cache region, and the cache region comprises a data segment cache region and a reserved segment cache region. The method comprises the following steps: the network card receives a first data segment and stores the first data segment in the data segment cache region, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises a first vni; the first process parses the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni in the reserved segment cache region; the second process extracts the initial data from the data segment cache region, extracts the first vni from the reserved segment cache region, and determines the corresponding nat inner layer data header information and second vni according to the initial data and the first vni; the second process stores the nat inner layer data header information and the second vni in the reserved segment cache region; and the first process extracts the second vni from the reserved segment cache region, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment cache region, so that the initial data located in the data segment cache region, the nat inner layer data header information located in the reserved segment cache region, and the second encapsulated data together form a second data segment.

As shown in fig. 3, which is a schematic flowchart of an embodiment of a method for processing data information in an embodiment of the present invention, the method for processing data information includes:

s101, the network card receives a first data segment, and stores the first data segment to the data segment buffer area, wherein the first data segment comprises first encapsulation data and initial data, and the first encapsulation data comprises first vni.

In this embodiment, the network card may be the network card 100 shown in fig. 1, where the first data segment may be a data segment received by a physical port of the network card 100 and sent from a terminal or a bras (broadband access server).

As shown in fig. 4, the buffer area includes the data segment buffer area and the reserved segment buffer area, and the data segment buffer area may be located before the reserved segment buffer area, that is, in the front portion of the buffer area. Further, the first data segment may include the first encapsulated data and the initial data, with the first encapsulated data preceding the initial data, and the first vni is included in the first encapsulated data. It should be noted that, in fig. 4, the division of the interval lengths of the first vni, the first encapsulated data and the initial data is only for convenience of drawing, and the proportional relationship of these interval lengths is not limited.

Specifically, as shown in fig. 5, the first data segment may include the first encapsulated data and the initial data, which is described in detail as follows:

In order of increasing distance from the initial data, the first encapsulated data may sequentially include a VXLAN header 501, an Outer UDP header 502, an Outer IP header 503 and an Outer Ethernet header 504; further, in order of decreasing distance from the initial data, the VXLAN header 501 includes VXLAN Flags 505 and a VNI 506. The VNI here is the first vni mentioned above: the first vni (VNI) is a VXLAN network identifier used to identify the tenant to which the first data segment belongs, one tenant may have one or more vnis, and tenants with different vnis cannot communicate with each other directly at layer 2. VXLAN Flags 505 is an 8-bit flag field in the format "RRRRIRRR": when the "I" bit is 1, the first vni (VNI) is valid; when it is 0, the first vni (VNI) is invalid; and the "R" bits are unused and set to 0. In addition, a Reserved field 507 is included between VXLAN Flags 505 and the VNI 506, and between the VNI 506 and the initial data; it is unused and set to 0.

In order of increasing distance from the first encapsulated data, the initial data may sequentially include an Inner Ethernet header 508, an Inner IP header 509, an Inner TCP header 601 and a Payload 602. The Inner Ethernet header comprises the MAC address of the sending end and the MAC address of the lan interface of the second process, the Inner IP header comprises the IP address of the sending end and the IP address of the receiving end, and the Inner TCP header comprises the port number of the sending end. The sending end and the receiving end correspond to the server, the terminal and the bras according to the actual situation of receiving and sending the first data segment; the Payload may include instruction information or data information.
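For readability, the frame layout just described can be summarized in a hedged C sketch; the field widths follow the standard VXLAN header format together with ordinary Ethernet/IP/TCP header sizes, and the struct and field names are illustrative rather than part of this embodiment.

```c
#include <stdint.h>

/* 8-byte VXLAN header: flags "RRRRIRRR" (I = 1 means the VNI is valid),
 * 24 bits reserved, 24-bit VNI, 8 bits reserved. */
struct vxlan_hdr {
    uint8_t flags;          /* VXLAN Flags 505     */
    uint8_t reserved1[3];   /* Reserved 507        */
    uint8_t vni[3];         /* VNI 506 (first vni) */
    uint8_t reserved2;      /* Reserved 507        */
};

/* First data segment, outermost header first:
 *   Outer Ethernet header 504 (14 bytes)
 *   Outer IP header       503 (20 bytes)
 *   Outer UDP header      502 ( 8 bytes)
 *   VXLAN header          501 ( 8 bytes)   -- first encapsulated data ends here
 *   Inner Ethernet header 508 (14 bytes)
 *   Inner IP header       509 (20 bytes)
 *   Inner TCP header      601 (20 bytes)   -- initial inner data header ends here
 *   Payload               602
 */
```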

S102, the first process analyzes the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni to the reserved segment buffer area.

The first process may access the first data segment, parse the first encapsulated data in the first data segment, and obtain the first vni in the first encapsulated data from the parsing result. Specifically, the first process may obtain the VXLAN Flags information in the first encapsulated data; for VXLAN Flags in the format "RRRRIRRR", if the "I" bit is 1, step S102 is executed, and if the "I" bit is 0, step S102 is not executed. A possible sketch of this check is shown below.
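A minimal sketch of this check in C is given below, assuming the fixed outer Ethernet/IP/UDP/VXLAN stacking described above; the function and constant names are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define OUTER_HDRS_LEN  (14u + 20u + 8u)  /* outer Ethernet + IP + UDP   */
#define VXLAN_I_BIT     0x08u             /* "I" bit of flags "RRRRIRRR" */

/* Returns 0 on success, -1 if the first vni is flagged invalid. */
static int parse_and_save_vni(const uint8_t *first_data_segment,
                              uint8_t *reserved_seg /* >= 3 bytes */)
{
    const uint8_t *vxlan = first_data_segment + OUTER_HDRS_LEN;

    if ((vxlan[0] & VXLAN_I_BIT) == 0)  /* "I" bit is 0: first vni invalid */
        return -1;

    /* The 24-bit VNI sits at bytes 4..6 of the VXLAN header. */
    memcpy(reserved_seg, vxlan + 4, 3);
    return 0;
}
```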

It can be understood that, at this time, the initial data is stored in the data segment buffer and the first vni is stored in the reserved segment buffer. As analyzed above, a preset interval may be reserved between the reserved segment buffer and the data segment buffer to distinguish the data segment from the partial information extracted from it; therefore a preset interval may likewise exist between the first vni and the initial data. When the first vni is obtained at a later stage, on the premise that the reserved segment buffer is determined according to the preset relative position of the first byte of the data segment buffer and the first byte of the reserved segment buffer, this embodiment may additionally verify the first vni, for example by further judging whether there is a preset interval between the "first vni determined through the above steps" and the data segment buffer, so as to confirm that the "determined first vni" is the true first vni.

It can be understood that, after the first process stores the first vni in the reserved segment buffer, the first process may send a relevant instruction, such as a "decapsulation data segment task completion instruction," to the network card, so as to inform the network card that the first process has completed relevant operations, such as decapsulation of a data segment, and thus, the network card performs a next operation. Meanwhile, the first process may obtain the pointer of the initial data at this time, and the first process may also send the pointer of the initial data to the network card.

S103, the second process extracts the initial data from the data segment cache region, extracts the first vni from the reserved segment cache region, and determines corresponding nat inner layer data header information and second vni according to the initial data and the first vni.

As can be seen from the above description, the first vni may be understood as a VXLAN network identifier of a sender sending the first data segment to the network card, where the sender may be a terminal or a bras. For example, when the terminal sends the first data segment to the network card, the first vni may be referred to as lan vni, where the first vni (lan vni) corresponds to an Outer UDP header, an Outer IP header, and an Outer Ethernet header in the first encapsulated data, and the second vni may be determined according to the first vni.

When the terminal sends the first data segment to the network card, the Inner IP header in the initial data contains a private network IP address, and the nat IP information corresponding to the private network IP address, together with the corresponding nat PORT information, can be obtained from a mapping relation established according to the nat protocol. As discussed above, an Inner TCP header is further included between the Inner IP header and the Payload in the initial data; the Inner Ethernet header, the Inner IP header and the Inner TCP header together may be referred to as the encapsulated data in the initial data, abbreviated as the initial inner data header. For other protocol types, an Inner UDP header, an Inner ICMP header or the like may lie between the Inner IP header and the Payload instead. Further, the initial inner layer data header may be updated into the nat inner layer data header according to the nat IP information and the nat PORT information.

It should be noted that, here, after or at the same time when the second process extracts the first vni from the reserved segment buffer, the first vni may also be stripped from the reserved segment buffer, that is, the first vni is removed, so as to avoid an influence on subsequent storage of new information in the reserved segment buffer.

S104, the second process stores the nat inner layer data header information and the second vni in the reserved segment cache region.

Specifically, the second process may also access the first data segment and "strip" the initial inner data header from it, where, as in the example discussed above, the initial inner data header includes the Inner Ethernet header, the Inner IP header and the Inner TCP header. It should be noted that, at this time, the second process "strips" the initial inner data header merely by pointing a pointer at the Payload in the initial data, rather than actually removing the initial inner data header from the first data segment (see the sketch below). It can be understood that, at this time, the Payload in the initial data is stored in the data segment buffer, and the nat information and the second vni are stored in the reserved segment buffer. Similarly, a preset interval may be reserved between the second vni, the nat information and the initial data, so that when the second vni and the nat inner layer data header information are acquired at a later stage, on the premise that the reserved segment buffer is determined according to the preset relative position of the first byte of the data segment buffer and the first byte of the reserved segment buffer, this embodiment may further verify the second vni and the nat information.
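The pointer-based "stripping" can be sketched as follows, assuming an inner header of fixed length (14-byte Inner Ethernet, 20-byte Inner IP and 20-byte Inner TCP header without options); the names are illustrative.

```c
#include <stdint.h>

#define INNER_HDRS_LEN (14u + 20u + 20u)  /* Inner Ethernet + IP + TCP (no options) */

/* "Strip" the initial inner data header by advancing a pointer to the
 * Payload; the bytes of the first data segment are left untouched, so the
 * segment stays intact for other processes. */
static inline const uint8_t *payload_of(const uint8_t *initial_data)
{
    return initial_data + INNER_HDRS_LEN;
}
```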

It can be understood that, after the second process completes the step S104, the second process may send a relevant instruction, such as a "data segment information task processing completion instruction", to the network card, so as to inform the network card that the second process has completed relevant operations, such as a data segment information task processing, so that the network card performs a next operation. Meanwhile, the second process may obtain the pointer of Payload in the initial data at this time, and the second process may also send the pointer of Payload in the initial data to the network card.

S105, the first process extracts the second vni from the reserved segment cache region, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment cache region, so that the initial data in the data segment cache region, the nat inner layer data header information in the reserved segment cache region, and the second encapsulated data together form a second data segment.

The second vni may determine a new outbound tunnel ID, and the new corresponding Outer UDP header, Outer IP header and Outer Ethernet header are obtained according to the second vni and the outbound tunnel ID. Specifically, the first process may configure the new corresponding Outer UDP header, Outer IP header and Outer Ethernet header for the second vni, and the second vni also forms a new corresponding VXLAN header; the new Outer UDP header, Outer IP header, Outer Ethernet header and VXLAN header together constitute the second encapsulated data. Specifically, the second encapsulated data may be stored in the reserved segment buffer after the nat inner layer data header information, as in the sketch below.
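As a hedged illustration of this step, the sketch below writes only the VXLAN-header part of the second encapsulated data behind the nat inner layer data header information in the reserved segment buffer; the new outer Ethernet/IP/UDP headers would be filled in the same way, and all names are assumptions of the sketch.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct second_encap_ctx {
    uint8_t *reserved_seg;  /* base of the reserved segment buffer          */
    size_t   nat_hdr_len;   /* bytes already used by the nat inner header   */
};

/* Append a new 8-byte VXLAN header carrying the second vni. */
static void write_second_vxlan_hdr(struct second_encap_ctx *ctx,
                                   const uint8_t second_vni[3])
{
    uint8_t *dst = ctx->reserved_seg + ctx->nat_hdr_len;

    memset(dst, 0, 8);
    dst[0] = 0x08;                   /* flags "RRRRIRRR" with I = 1 (vni valid) */
    memcpy(dst + 4, second_vni, 3);  /* 24-bit second vni                       */
}
```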

It should be noted that, after or at the same time when the first process extracts the second vni from the reserved segment buffer, the second vni may also be stripped from the reserved segment buffer, that is, the second vni is removed, so as to avoid an influence on subsequent storage of new information in the reserved segment buffer.

According to the above analysis, it can be known that the nat inner layer data header information is the updated initial inner layer data header, that is, the initial inner layer data header has been converted into the nat inner layer data header information, and at this time, the pointer already points to Payload in the initial data; it should be noted that the fact that "the initial data located in the data segment buffer, the nat inner layer header information located in the reserved segment buffer, and the second encapsulation data collectively form a second data segment" mentioned in the step S105 actually means that: and the Payload in the initial data in the data segment cache region, the nat inner layer data header information in the reserved segment cache region and the second encapsulation data jointly form the second data segment.

It is understood that after the steps S101-S105, the first data segment still exists in the data segment buffer completely, i.e. the whole method step does not modify or destroy the first data segment. Therefore, the method of this embodiment may ensure the integrity of the first data segment while the first process and the second process acquire the corresponding information in the first data segment and perform the corresponding operation, so that other processes may normally acquire or process the information of the first data segment.

In this embodiment, as shown in fig. 6, before the step S102, the following steps may be included:

s201, the network card sends a pointer of the first data segment to the first process.

It can be understood that the first data segment is located in the data segment buffer area in the network card, so that, after the physical port of the network card receives the first data segment and stores the first data segment in the data segment buffer area, the network card may send a pointer of the first data segment to the first process, so as to notify the address of the data segment buffer area to the first process.

S202, the first process accesses the first data segment according to the pointer of the first data segment.

It can be understood that, when the first process acquires the pointer of the first data segment, that is, acquires the address of the data segment buffer, the first process may acquire the first data segment and perform the relevant operation of step S102 on the first data segment.

In this embodiment, as shown in fig. 7, before the step S103, the following steps may be included:

s301, the network card sends the pointer of the initial data to the second process.

As can be seen from the above analysis, after step S102 is executed, the first process may send the pointer of the initial data to the network card, and the network card then sends the pointer of the initial data to the second process. In this way, the second process can be quickly located at the start position of the initial data; at this time, only the initial data and the content behind it are visible to the second process, which avoids interference from the content in front of the initial data and improves the working efficiency of the second process.

S302, the second process accesses the initial data and the first vni according to the pointer of the initial data.

It can be understood that, after the second process obtains the pointer of the initial data, it may obtain the initial data and search backwards from the interval where the initial data is located. According to the above analysis, the reserved segment buffer is determined by the preset relative position of the first byte of the data segment buffer and the first byte of the reserved segment buffer, so the reserved segment buffer may also be determined indirectly from the pointer of the initial data (one possible way is sketched below), and the first vni is thus obtained; the related operation of step S103 is then performed on the initial data and the first vni.
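One possible way of making this indirect determination is sketched below, under the assumptions that the first encapsulated data has a fixed 50-byte length (14-byte outer Ethernet, 20-byte outer IP, 8-byte outer UDP and 8-byte VXLAN header) and that the buffer offsets of fig. 2 apply; the names and the fixed length are assumptions of the sketch, not requirements of the embodiment.

```c
#include <stdint.h>

/* Offsets from fig. 2 of this embodiment (assumed fixed here). */
#define DATA_SEG_OFFSET      0u
#define RESERVED_SEG_OFFSET  1663u
/* Length of the first encapsulated data: outer Ethernet (14) + outer IP (20)
 * + outer UDP (8) + VXLAN header (8). Assumed fixed for this sketch. */
#define FIRST_ENCAP_LEN      50u

/* The second process only receives a pointer to the initial data; the
 * reserved segment buffer is recovered from the preset relative positions. */
static inline uint8_t *reserved_seg_from_initial(uint8_t *initial_data)
{
    uint8_t *data_seg_base = initial_data - FIRST_ENCAP_LEN;
    return data_seg_base + (RESERVED_SEG_OFFSET - DATA_SEG_OFFSET);
}
```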

In this embodiment, as shown in fig. 8, the step S103 may include the following steps:

and S1031, the second process extracts an initial inner-layer data header from the initial data and extracts the first vni from the reserved segment cache region.

As can be seen from the above analysis, the first vni is saved to the reserved segment buffer by the first process, i.e. the second process can directly extract the first vni from the reserved segment buffer. It can be understood that, in this embodiment, the first vni obtained through parsing and obtaining by the first process may be stored in the reserved segment cache region, and may be directly extracted by the second process for use, that is, the step of "parsing and obtaining the first vni from the first data segment" also performed by the second process may be avoided, so that the work efficiency of the server is improved.

The initial inner layer data header here is the one contained in the initial data described in step S103, and the second process extracts the first vni from the reserved segment buffer according to the pointer of the initial data.

And S1032, the second process searches corresponding nat IP, nat PORT and second vni in the nat conversion table and the vni conversion table respectively according to the initial inner layer data header and the first vni.

The nat conversion table may be a private network IP to nat IP conversion table, and each nat IP may correspond to one private network IP; that is, the corresponding nat IP and nat PORT may be obtained from the nat conversion table according to the private network IP obtained from the initial inner layer data header.

Wherein each first vni in the vni conversion table may correspond to one second vni. Specifically, when the terminal sends the first data segment to the network card, the first vni may be referred to as lan vni, and the first vni (lan vni) corresponds to an Outer UDP header, an Outer IP header, and an Outer Ethernet header in the first encapsulated data; the second vni obtained through the vni conversion table may be referred to as wan vni, and further, new corresponding Outer UDP header, Outer IP header, and Outer Ethernet header may also be determined according to the second vni (wan vni).

S1033, the second process determines the nat inner layer data header information according to the nat IP and the nat PORT.

Specifically, as discussed above, the initial inner-layer data header may include an original Inner Ethernet header, an original Inner IP header, and an original Inner TCP header. Taking the case in which the terminal sends the first data segment to the server as an example: the MAC address of the sending end and the MAC address of the lan interface of the second process in the original Inner Ethernet header may be updated to the MAC address of the wan interface of the second process and the MAC address of the next-hop device, respectively; the IP address of the sending end in the original Inner IP header may be modified to the nat IP; and the PORT of the sending end in the original Inner TCP header may be modified to the nat PORT. The initial inner-layer data header is thereby updated to the nat inner-layer data header.
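
A minimal C sketch of this rewrite, assuming ordinary Ethernet/IPv4/TCP inner headers (the structure and field names are illustrative, and checksum updates are only noted in a comment), might be:

#include <stdint.h>
#include <string.h>

struct inner_eth { uint8_t dst_mac[6]; uint8_t src_mac[6]; uint16_t ethertype; };
struct inner_ip  { uint32_t src_ip; uint32_t dst_ip; };   /* only the rewritten fields shown */
struct inner_tcp { uint16_t src_port; uint16_t dst_port; };

struct nat_rewrite {
    uint8_t  wan_mac[6];       /* MAC address of the wan interface of the second process */
    uint8_t  next_hop_mac[6];  /* MAC address of the next-hop device                      */
    uint32_t nat_ip;
    uint16_t nat_port;
};

/* Turn the initial inner-layer data header into the nat inner-layer data header. */
static void apply_nat(struct inner_eth *eth, struct inner_ip *ip,
                      struct inner_tcp *tcp, const struct nat_rewrite *r)
{
    memcpy(eth->src_mac, r->wan_mac, 6);       /* sending-end MAC   -> wan interface MAC */
    memcpy(eth->dst_mac, r->next_hop_mac, 6);  /* lan interface MAC -> next-hop MAC      */
    ip->src_ip    = r->nat_ip;                 /* sending-end IP    -> nat IP            */
    tcp->src_port = r->nat_port;               /* sending-end PORT  -> nat PORT          */
    /* A real implementation would also recompute the IP and TCP checksums. */
}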

It is understood that the step S105 may include the following steps:

Step one, the network card sends a pointer of the Payload in the initial data to the first process.

As can be seen from the above analysis, in step S104 the second process obtains the pointer of the Payload in the initial data and sends it to the network card, and the network card may then forward this pointer to the first process. In this way, the first process can quickly locate the start position of the Payload in the initial data. At this time, only the Payload and the content located after it are visible to the first process, which avoids interference from the content located before the Payload and improves the working efficiency of the first process.

Step two, the first process accesses the second vni according to the pointer of the Payload in the initial data.

It can be understood that, after the first process obtains the pointer of the Payload in the initial data, it searches backwards from the section where the Payload is located. As analyzed above, the reserved segment buffer is determined according to the preset relative position between the first byte of the data segment buffer and the first byte of the reserved segment buffer, so the reserved segment buffer can also be located indirectly from the pointer of the Payload, and the second vni can be obtained from it. The related operations of step S105 are then performed on the initial data and the second vni.

In this embodiment, as shown in fig. 9, after the step S105, the following steps may be included:

S106, the network card determines a corresponding sending tunnel according to the second data segment.

The second encapsulated data in the second data segment can determine the IP addresses at both ends of a tunnel and the source MAC address of the tunnel, but one end of different tunnels may correspond to the same source MAC address and IP address; therefore, the second vni in the second data segment is further identified by the VXLAN network to determine the sending tunnel.

S107, the network card sends the second data segment according to the sending tunnel.

It is understood that the sending tunnel is the path for transmitting the second data segment. For example, when the terminal sends the first data segment to the network card, the first vni is converted into the second vni; one end of the sending tunnel determined by the second data segment is a physical port of the network card, and the other end is a physical port of the bras, so the second data segment can be transmitted from the network card to the bras.
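
Since the outer addresses alone do not identify the tunnel uniquely, the selection can be keyed by the second vni; the following C sketch assumes a simple tunnel table, with all structure names and fields introduced here for illustration only.

#include <stdint.h>
#include <stddef.h>

struct tunnel {
    uint32_t wan_vni;     /* second vni carried in the second data segment    */
    uint32_t local_ip;    /* IP of the local tunnel end (network card side)   */
    uint32_t remote_ip;   /* IP of the remote tunnel end (e.g. the bras)      */
    uint16_t phys_port;   /* physical port of the network card                */
};

/* Pick the sending tunnel for a second data segment by its wan vni. */
static const struct tunnel *select_tunnel(const struct tunnel *tbl, size_t n,
                                          uint32_t wan_vni)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i].wan_vni == wan_vni)
            return &tbl[i];
    return NULL;  /* no tunnel configured for this vni */
}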

In this embodiment, as shown in fig. 10, the server further includes a third process, and the first encapsulated data further includes a source IP address and a destination IP address; after the step S101, the following steps may further be included:

S401, the first process analyzes the first encapsulated data from the first data segment, obtains the source IP address and the destination IP address, and stores the source IP address and the destination IP address to the reserved segment cache region.

Specifically, as shown in fig. 5, the Outer IP header in the first data segment includes an IP SA 603 and an IP DA 604, where the IP SA is the source IP address, i.e., the IP address of the source VTEP of the VXLAN tunnel, and the IP DA is the destination IP address, i.e., the IP address of the destination VTEP of the VXLAN tunnel.

The first process may access the first data segment, parse the first encapsulated data, and obtain the Outer IP header from the parsing result so as to obtain the source IP address and the destination IP address therein. At this time, the initial data is stored in the data segment buffer, while the source IP address and the destination IP address are stored in the reserved segment buffer. As analyzed above, a preset interval may be reserved between the reserved segment buffer and the data segment buffer to distinguish the data segment from the information parsed out of it, so corresponding preset spaces also exist between the source IP address, the destination IP address and the initial data. When the source IP address and the destination IP address are read at a later stage, on the premise that the reserved segment buffer is determined according to the preset relative position between the first byte of the data segment buffer and the first byte of the reserved segment buffer, this embodiment may further verify the two addresses, for example by checking whether the preset interval exists between the "source IP address and destination IP address determined through the above steps" and the data segment buffer, so as to judge whether the determined addresses are the real source IP address and destination IP address.
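
Under the same hypothetical layout as the earlier sketch (data segment buffer, a preset gap, then the reserved segment buffer), this verification can be reduced to a pointer comparison; the constants and names remain assumptions for illustration.

#include <stdint.h>
#include <stdbool.h>

#define DATA_SEG_SIZE 2048u  /* assumed size of the data segment buffer   */
#define RESERVED_GAP    64u  /* assumed preset interval between the areas */

/* Returns true when the location where the source / destination IP addresses
 * were read keeps the preset interval from the data segment buffer, i.e. it
 * really lies inside the reserved segment buffer. */
static bool addresses_keep_preset_interval(const uint8_t *data_seg_base,
                                           const uint8_t *addr_location)
{
    const uint8_t *reserved_base = data_seg_base + DATA_SEG_SIZE + RESERVED_GAP;
    return addr_location >= reserved_base;
}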

Similarly, after finishing the step S401, the first process may send a relevant instruction, such as a "decapsulation data segment task completion instruction", to the network card to inform the network card that the relevant operation of decapsulating the data segment has been completed, so that the network card can perform the next operation. Meanwhile, the first process obtains the pointer of the initial data at this time and may also send this pointer to the network card.

S402, the third process extracts the source IP address and the destination IP address from the reserved segment cache region, and judges whether the first data segment is a legal data header according to the source IP address and the destination IP address.

Similarly, after receiving the pointer of the initial data, the network card may also send the pointer of the initial data to the third process, so that the third process can quickly locate the initial data according to the pointer of the initial data, thereby improving the working efficiency of the third process.

As can be seen from the above description, the source IP address and the destination IP address are the IP address of the source VTEP and the IP address of the destination VTEP of the VXLAN tunnel, respectively, that is, the third process can determine multiple VXLAN tunnels capable of transmitting the first data segment according to the source IP address and the destination IP address. Further, the third process may include a VXLAN tunnel table listing legal conditions of data segments passing through each VXLAN tunnel. Therefore, according to the VXLAN tunnel corresponding to the first data segment, the VXLAN tunnel table is searched, and whether the first data segment is legal or not can be judged.
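
As an illustration of this check, the following C sketch (all structure and function names are assumptions introduced here) models the VXLAN tunnel table as a list of source-VTEP / destination-VTEP pairs with a legality flag.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct vxlan_tunnel_rule {
    uint32_t src_vtep_ip;  /* source IP address (source VTEP of the tunnel)       */
    uint32_t dst_vtep_ip;  /* destination IP address (destination VTEP)           */
    bool     legal;        /* legal condition of data segments over this tunnel   */
};

/* Returns true when the VXLAN tunnel table marks the tunnel carrying the
 * first data segment as legal; unknown tunnels are treated as not legal. */
static bool first_segment_is_legal(const struct vxlan_tunnel_rule *tbl, size_t n,
                                   uint32_t src_ip, uint32_t dst_ip)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i].src_vtep_ip == src_ip && tbl[i].dst_vtep_ip == dst_ip)
            return tbl[i].legal;
    return false;
}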

S403, if the first data segment is a legal data header, the third process processes the initial data.

Specifically, when the first data segment is a legal data header, the third process may read the initial data, modify the initial data, and transmit the initial data.

S404, if the first data segment is not a legal data header, the third process discards the initial data.

Specifically, when the first data segment is not a legal data header, the third process may ignore the initial data and continue to perform operations related to other data segments, and the like.

The steps S401 to S404 may be executed before the step S101; that is, whether the first data segment is a legal data header is determined first, and the step S101 is executed only when the first data segment is a legal data header; otherwise, the step S101 is not executed.

In this embodiment, as shown in fig. 11, the server further includes a fourth process, and the network card further includes a mirror image cache region, where the mirror image cache region includes a mirror image data segment cache region and a mirror image reserved segment cache region. After the step in which "the network card receives a first data segment and stores the first data segment to the data segment cache region, the first data segment includes first encapsulated data and initial data, and the first encapsulated data includes first vni", the following steps may be included:

S501, the first process analyzes the first encapsulated data from the first data segment and obtains the first vni, and judges, according to the first vni, whether the user of the first data segment has opened the virus detection service.

The first process can access the first data segment, parse the first encapsulated data, and obtain the first vni in the first encapsulated data from the parsing result. Further, the first process stores a mapping from a plurality of vni to the results of "whether the virus detection service is opened"; that is, each vni corresponds to a "yes" or "no" result for opening the virus detection service. After obtaining the first vni, the first process can look up the corresponding result in this mapping, so as to determine whether the user of the first data segment has opened the virus detection service.
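
A minimal C sketch of such a mapping, with all names and the lookup strategy assumed for illustration, might look as follows.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct vni_service_flag { uint32_t vni; bool virus_detection_opened; };

/* Returns whether the user identified by first_vni has opened the virus
 * detection service; an unknown vni is treated as "not opened". */
static bool user_opened_virus_detection(const struct vni_service_flag *map, size_t n,
                                        uint32_t first_vni)
{
    for (size_t i = 0; i < n; i++)
        if (map[i].vni == first_vni)
            return map[i].virus_detection_opened;
    return false;
}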

S502, if the user of the first data segment opens the virus detection service, the first process stores the first data segment to the mirror image data segment cache region and stores the first vni to the mirror image reserved segment cache region.

Specifically, when the user of the first data segment opens the virus detection service, the first process may copy the first data segment first, and then store the copied first data segment in the mirror image data segment cache region; similarly, in step S501, after obtaining the first vni, the first process may temporarily store the first vni, and when the user of the first data segment opens the virus detection service, the first process may store the temporarily stored first vni in the mirror image reserved segment buffer.

When the first data segment and the first vni are both saved in the mirror cache region, it can be understood that the first process has mirrored the first data segment and the first vni, so that both are also available in the mirror cache region. In this way, when multiple processes need to access the first data segment or the first vni at the same time, they can obtain them from the cache region and the mirror image cache region respectively. For example, after the network card receives the first data segment, if the embodiment of the present invention is adopted, the second process may perform steps S103 to S104 once "the first data segment is saved to the mirror image data segment cache region and the first vni is saved to the mirror image reserved segment cache region", while at the same time the fourth process performs step S503.

Therefore, the fourth process does not need to wait for the second process to finish steps S103 to S104 before executing step S503; that is, the time for the second process to execute steps S103 to S104 and the time for the fourth process to execute step S503 are not accumulated but overlap in parallel. Because these two periods run in parallel rather than in series, the time from when the network card receives the first data segment until the corresponding data segment is sent out is reduced, thereby improving the efficiency of the server.
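
The mirroring itself can be thought of as a plain copy into the mirror buffers; the following C sketch assumes a simple mirror cache structure whose sizes and names are illustrative, not taken from the original document.

#include <stdint.h>
#include <string.h>
#include <stddef.h>

struct mirror_cache {
    uint8_t  data_segment[2048];  /* mirror image data segment cache region     */
    size_t   data_len;
    uint32_t first_vni;           /* mirror image reserved segment cache region */
};

/* Copy the first data segment and the first vni into the mirror cache so that
 * the fourth process can work on the copy while the second process continues
 * on the original buffers in parallel. */
static void mirror_first_segment(struct mirror_cache *m,
                                 const uint8_t *first_segment, size_t len,
                                 uint32_t first_vni)
{
    if (len > sizeof(m->data_segment))
        len = sizeof(m->data_segment);  /* defensive truncation in this sketch */
    memcpy(m->data_segment, first_segment, len);
    m->data_len  = len;
    m->first_vni = first_vni;
}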

S503, the fourth process extracts the first data segment and the first vni from the mirror image cache region, and generates a corresponding virus detection report according to a rule of a virus detection service.

The rules of the virus detection service can be understood as follows: the fourth process defines a plurality of virus types and may extract and parse the first data segment; if the data parsed from the first data segment hits at least one of the plurality of virus types, the fourth process generates a virus detection report, where the virus detection report includes the user information determined according to the first vni and the virus type corresponding to that user.
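
A hedged C sketch of this rule, in which the matching routine is only a placeholder and every identifier is an assumption introduced here, could be:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct virus_report { uint32_t first_vni; int virus_type; };

/* Placeholder matcher: a real implementation would inspect the parsed payload
 * against the signature of the given virus type. */
static bool matches_virus_type(const uint8_t *data, size_t len, int virus_type)
{
    (void)data; (void)len; (void)virus_type;
    return false;
}

/* Returns true and fills *report when any of the defined virus types hits;
 * the first vni identifies the user in the report. */
static bool detect_virus(const uint8_t *data, size_t len, uint32_t first_vni,
                         const int *virus_types, size_t type_count,
                         struct virus_report *report)
{
    for (size_t i = 0; i < type_count; i++) {
        if (matches_virus_type(data, len, virus_types[i])) {
            report->first_vni  = first_vni;
            report->virus_type = virus_types[i];
            return true;
        }
    }
    return false;
}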

In this embodiment, as shown in fig. 12, a schematic diagram of signaling interaction of a data information processing method in an embodiment of the present invention is shown, where the schematic diagram of signaling interaction of the data information processing method includes the following steps:

S1, the network card receives the first data segment and stores the first data segment to the data segment buffer area;

S2, the network card sends the pointer of the first data segment to the first process;

S3, the first process parses the first encapsulated data from the first data segment and obtains the first vni in the first encapsulated data;

S4, the first process stores the first vni to the reserved segment buffer area;

S5, the first process sends a "decapsulation data segment task completion instruction" to the network card;

S6, the network card sends the pointer of the initial data in the first data segment to the second process;

S7, the second process extracts the initial data from the data segment cache region and the first vni from the reserved segment cache region;

S8, the second process determines the corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

S9, the second process stores the nat inner layer data header information to the reserved segment buffer area;

S10, the second process saves the second vni to the reserved segment buffer area;

S11, the second process sends a "task completion instruction for processing data segment information" to the network card;

S12, the network card sends the pointer of the Payload in the initial data in the first data segment to the first process;

S13, the first process extracts the second vni from the reserved segment buffer area and determines the second encapsulated data according to the second vni;

S14, the first process stores the second encapsulated data to the reserved segment buffer area, so that the initial data in the data segment buffer area, the nat inner layer data header information in the reserved segment buffer area and the second encapsulated data jointly form the second data segment;

S15, the first process sends a "package data task completion instruction" to the network card.
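
For reference, the ordering of these fifteen interactions can be summarized in a single runnable C sketch; the function below merely prints the sequence, and none of its identifiers come from the original document.

#include <stdio.h>

/* Prints one signaling step; stands in for the real interaction. */
static void step(const char *actor, const char *action)
{
    printf("%-13s %s\n", actor, action);
}

int main(void)
{
    step("network card", "S1  save the first data segment to the data segment buffer");
    step("network card", "S2  send the pointer of the first data segment to the first process");
    step("process 1",    "S3  parse the first encapsulated data and obtain the first vni");
    step("process 1",    "S4  save the first vni to the reserved segment buffer");
    step("process 1",    "S5  send the decapsulation task completion instruction");
    step("network card", "S6  send the pointer of the initial data to the second process");
    step("process 2",    "S7  extract the initial data and the first vni");
    step("process 2",    "S8  determine the nat inner layer data header information and the second vni");
    step("process 2",    "S9  save the nat inner layer data header information to the reserved segment buffer");
    step("process 2",    "S10 save the second vni to the reserved segment buffer");
    step("process 2",    "S11 send the data segment information processing completion instruction");
    step("network card", "S12 send the pointer of the Payload in the initial data to the first process");
    step("process 1",    "S13 extract the second vni and determine the second encapsulated data");
    step("process 1",    "S14 save the second encapsulated data, completing the second data segment");
    step("process 1",    "S15 send the encapsulation task completion instruction");
    return 0;
}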

In order to better implement the method for processing data information in the embodiment of the present invention, based on the method for processing data information, a server is further provided in the embodiment of the present invention, as shown in fig. 13, the server 400 includes a network card 401, a first process 402, and a second process 403, the network card 401 includes a buffer area, and the buffer area includes a data segment buffer area and a reserved segment buffer area;

the network card 401 is configured to receive a first data segment, and store the first data segment to the data segment cache area, where the first data segment includes first encapsulated data and initial data, and the first encapsulated data includes first vni;

the first process 402 is configured to parse the first encapsulated data from the first data segment, obtain the first vni, and store the first vni to the reserved segment buffer;

the second process 403 is configured to extract the initial data from the data segment buffer, extract the first vni from the reserved segment buffer, and determine corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

the second process 403 is further configured to store the nat inner layer data header information and the second vni into the reserved segment cache region;

the first process 402 is further configured to extract the second vni from the reserved segment buffer, determine second encapsulated data according to the second vni, and store the second encapsulated data in the reserved segment buffer, so that the initial data located in the data segment buffer, the nat inner layer data header information located in the reserved segment buffer, and the second encapsulated data together form a second data segment.

In some embodiments of the present application, the second process 403 is further configured to extract an initial inner layer data header from the initial data, and extract the first vni from the reserved segment buffer; and

is further configured to search for the corresponding nat IP and second vni in the nat conversion table and the vni conversion table respectively according to the initial inner layer data header and the first vni.

In some embodiments of the present application, the network card 401 is further configured to send a pointer of the first data segment to the first process 402; and

the first process 402 is also configured to access the first data segment according to the pointer of the first data segment.

In some embodiments of the present application, the network card 401 is further configured to send a pointer of the initial data to the second process; and

the second process 403 is further configured to access the initial data and the first vni according to the pointer of the initial data.

In some embodiments of the present application, the network card 401 is further configured to determine a corresponding sending tunnel according to the second data segment; and

the network card 401 is further configured to send the second data segment according to the sending tunnel.

In some embodiments of the present application, as shown in fig. 14, the server further includes a third process 404, the first encapsulated data further includes a source IP address and a destination IP address, and the first process 402 is further configured to parse the first encapsulated data from the first data segment to obtain the source IP address and the destination IP address, and store the source IP address and the destination IP address in the reserved segment cache area; and

the third process 404 is specifically configured to:

extracting the source IP address and the destination IP address from the reserved segment cache region, and judging whether the first data segment is a legal data header according to the source IP address and the destination IP address;

if the first data segment is a legal data header, the third process processes the initial data;

and if the first data segment is not a legal data header, the third process discards the initial data.

In some embodiments of the present application, as shown in fig. 15, the server further includes a fourth process 405, the network card further includes a mirror image cache region, the mirror image cache region includes a mirror image data segment cache region and a mirror image reserved segment cache region, the first process 402 is further configured to analyze the first encapsulated data from the first data segment and obtain the first vni, and determine whether a user of the first data segment opens a virus detection service according to the first vni; and

if the user of the first data segment opens the virus detection service, the first process 402 is further configured to store the first data segment to the mirror image data segment cache region, and store the first vni to a mirror image reserved segment cache region;

the fourth process 405 is specifically configured to:

and extracting the first data segment and the first vni from the mirror image cache region, and generating a corresponding virus detection report according to a rule of virus detection service.

The invention provides a data information processing method and a server. A cache region in a network card includes a data segment cache region and a reserved segment cache region. A first process parses a first vni from a first data segment and stores the first vni to the reserved segment cache region; a second process extracts the first vni from the reserved segment cache region and determines corresponding nat inner layer data header information and a second vni according to it; the second process stores the nat inner layer data header information and the second vni to the reserved segment cache region; and the first process extracts the second vni from the reserved segment cache region to perform the corresponding operation. In this scheme, a reserved segment cache region is opened up in the cache region of the network card, and the first vni, the nat inner layer data header information and the second vni obtained by the first process and the second process are temporarily stored there, so that the two processes can share the information obtained by each other and avoid partially repeating each other's operations. Moreover, while the first process and the second process obtain the corresponding information from the first data segment and execute the corresponding operations, the integrity of the first data segment is preserved, so that other processes can still obtain the information of the first data segment normally and the server does not need to obtain the first data segment from the outside again for those processes. In summary, this scheme improves the working efficiency of the server.

An embodiment of the present invention further provides a server, as shown in fig. 16, which shows a schematic structural diagram of the server according to the embodiment of the present invention, specifically:

the server may include components such as a processor 801 of one or more processing cores, memory 802 of one or more computer-readable storage media, a power supply 803, and an input unit 804. Those skilled in the art will appreciate that the server architecture shown in FIG. 16 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:

the processor 801 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 802 and calling data stored in the memory 802, thereby performing overall monitoring of the server. Alternatively, processor 801 may include one or more processing cores; the Processor 801 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, preferably the processor 801 may integrate an application processor, which handles primarily the operating system, user interfaces, application programs, etc., and a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 801.

The memory 802 may be used to store software programs and modules, and the processor 801 executes various functional applications and data processing by running the software programs and modules stored in the memory 802. The memory 802 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data created according to the use of the server, and the like. Further, the memory 802 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid state storage device. Accordingly, the memory 802 may further include a memory controller to provide the processor 801 with access to the memory 802.

The server further includes a power supply 803 for supplying power to the various components. Preferably, the power supply 803 may be logically connected to the processor 801 via a power management system, so that charging, discharging and power consumption management functions are realized through the power management system. The power supply 803 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other similar components.

The server may further include an input unit 804, and the input unit 804 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.

Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 801 in the server loads an executable file corresponding to one or more processes of an application program into the memory 802 according to the following instructions, and the processor 801 runs the application program stored in the memory 802, so as to implement various functions, where the processor 801 may send an instruction to the network card, the first process, and the second process in the server, so that the network card, the first process, and the second process sequentially execute the following steps:

the network card receives a first data segment, and stores the first data segment to the data segment cache region, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises first vni;

the first process analyzes the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni to the reserved segment buffer area;

the second process extracts the initial data from the data segment cache region, extracts the first vni from the reserved segment cache region, and determines corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

the second process stores the nat inner layer data header information and the second vni to the reserved section cache region;

and the first process extracts the second vni from the reserved segment cache region, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment cache region, so that the initial data in the data segment cache region, the nat inner layer data header information in the reserved segment cache region, and the second encapsulated data together form a second data segment.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, an embodiment of the present invention provides a computer-readable storage medium, which may include: Read Only Memory (ROM), Random Access Memory (RAM), a magnetic disk or an optical disk, and the like. A computer program is stored on the computer-readable storage medium, and the computer program is loaded by a processor to send instructions to the network card, the first process and the second process in a server, so that the network card, the first process and the second process sequentially execute the following steps:

the network card receives a first data segment, and stores the first data segment to the data segment cache region, wherein the first data segment comprises first encapsulated data and initial data, and the first encapsulated data comprises first vni;

the first process analyzes the first encapsulated data from the first data segment, acquires the first vni, and stores the first vni to the reserved segment buffer area;

the second process extracts the initial data from the data segment cache region, extracts the first vni from the reserved segment cache region, and determines corresponding nat inner layer data header information and second vni according to the initial data and the first vni;

the second process stores the nat inner layer data header information and the second vni to the reserved section cache region;

and the first process extracts the second vni from the reserved segment cache region, determines second encapsulated data according to the second vni, and stores the second encapsulated data in the reserved segment cache region, so that the initial data in the data segment cache region, the nat inner layer data header information in the reserved segment cache region, and the second encapsulated data together form a second data segment.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed descriptions of other embodiments, and are not described herein again.

In a specific implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily to be implemented as one or several entities, and the specific implementation of each unit or structure may refer to the foregoing method embodiment, which is not described herein again.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

The method and the server for processing data information provided by the embodiment of the present invention are described in detail above, and a specific example is applied in the text to explain the principle and the embodiment of the present invention, and the description of the above embodiment is only used to help understand the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
