Message processing method and device applied to data center, electronic equipment and medium

Document No.: 1941443    Publication date: 2021-12-07

Reading note: This technology, "Message processing method and device applied to data center, electronic equipment and medium," was designed and created by 董玢, 李力, and 李旭谦 on 2020-07-16. Its main content: The disclosure provides a message processing method and a message processing apparatus applied to a data center, an electronic device, and a computer-readable storage medium. The method includes: acquiring an incoming message from an external network, wherein the incoming message comprises a destination address; determining a functional gateway cluster in the intranet for processing the incoming message according to a public network routing table of the data center; and sending the incoming message to the functional gateway cluster so that the functional gateway cluster processes the incoming message.

1. A message processing method applied to a data center, the method comprising the following steps:

acquiring an incoming message from an external network, wherein the incoming message comprises a first destination address;

determining a first functional gateway cluster in the intranet for processing the incoming message according to a public network routing table of the data center and the first destination address; and

sending the incoming message to the first functional gateway cluster so that the first functional gateway cluster processes the incoming message.

2. The method of claim 1, further comprising:

acquiring a first tunnel message from an intranet, and decapsulating the first tunnel message to obtain a decapsulated message, wherein the decapsulated message comprises a source address;

determining an outer network core switch in the outer network for processing the decapsulated message according to the public network routing table of the data center; and

sending the decapsulated message to the outer network core switch so that the outer network core switch processes the decapsulated message.

3. The method of claim 2, wherein the decapsulated message comprises a second destination address, the method further comprising:

receiving the decapsulated message from the outer network core switch when the second destination address is an address in the inner network;

determining a second functional gateway cluster in the intranet for processing the decapsulated message according to the public network routing table of the data center and the second destination address;

performing tunnel encapsulation on the decapsulated message to obtain a second tunnel message; and

sending the second tunnel message to the second functional gateway cluster so that the second functional gateway cluster processes the second tunnel message.

4. The method of claim 1 or 2, further comprising:

performing rate-limit processing on the incoming message or the decapsulated message according to a preset rate-limit policy.

5. The method of claim 1, wherein the first functional gateway cluster comprises a plurality of functional gateways configured to implement the same public network service,

wherein the public network service comprises at least one of: a load balancing service, a network address translation service, and an elastic public network IP service.

6. A message processing apparatus applied to a data center, the apparatus comprising:

a first acquisition module, configured to acquire an incoming message from an external network, wherein the incoming message comprises a first destination address;

a first determining module, configured to determine, according to a public network routing table of the data center and the first destination address, a first functional gateway cluster in an intranet for processing the incoming message; and

a first sending module, configured to send the incoming message to the first functional gateway cluster, so that the first functional gateway cluster processes the incoming message.

7. The apparatus of claim 6, further comprising:

a second acquisition module, configured to acquire a first tunnel message from an intranet and decapsulate the first tunnel message to obtain a decapsulated message, wherein the decapsulated message comprises a source address;

a second determining module, configured to determine, according to the public network routing table of the data center, an outer network core switch in the outer network for processing the decapsulated message; and

a second sending module, configured to send the decapsulated message to the outer network core switch so that the outer network core switch processes the decapsulated message.

8. The apparatus of claim 7, wherein the decapsulated message comprises a second destination address, the apparatus further comprising:

a receiving module, configured to receive the decapsulated message from the outer network core switch when the second destination address is an address in the inner network;

a third determining module, configured to determine, according to the public network routing table of the data center and the second destination address, a second functional gateway cluster in the intranet for processing the decapsulated message;

a third obtaining module, configured to perform tunnel encapsulation on the decapsulated message to obtain a second tunnel message; and

a third sending module, configured to send the second tunnel message to the second functional gateway cluster, so that the second functional gateway cluster processes the second tunnel message.

9. An electronic device, comprising:

one or more processors;

storage means for storing one or more programs;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-4.

10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 4.

Technical Field

The present disclosure relates to the field of cloud services and data processing, and in particular, to a message processing method and apparatus applied to a data center, an electronic device, and a computer-readable storage medium.

Background

With the rapid development of internet cloud technology, more and more people use cloud services. Owing to user diversity, functional diversity, and similar factors, the network topology in data centers of current public cloud scenarios responds somewhat sluggishly to a changing data environment. A higher-performance hardware or software environment with better-integrated information is needed to simplify the network topology within the data center and to normalize users' data access in practical cloud services.

In the process of implementing the concept of the present disclosure, the inventor found that the related art has at least the following problems: the public network IP allocated to a user is an IP allocated by each gateway according to the gateway's functional characteristics, and because each gateway attracts only the IP traffic it is concerned with, route traction in the machine room is complicated and cannot be centralized, and IP conflicts may even occur.

Disclosure of Invention

In view of the above, the present disclosure provides a message processing method and apparatus applied to a data center, an electronic device, and a computer-readable storage medium.

One aspect of the present disclosure provides a message processing method applied to a data center, including: acquiring an incoming message from an external network, wherein the incoming message comprises a first destination address; determining a first functional gateway cluster in the intranet for processing the incoming message according to a public network routing table of the data center and the first destination address; and sending the incoming message to the first functional gateway cluster so that the first functional gateway cluster processes the incoming message.
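The lookup in this aspect, matching the first destination address against the data center's public network routing table to choose a functional gateway cluster, can be sketched as a longest-prefix match. The table entries and cluster names below are illustrative assumptions, not the disclosed configuration.

```python
import ipaddress
from typing import Optional

# Hypothetical public network routing table mapping public IP prefixes to
# functional gateway cluster names; entries are illustrative only.
PUBLIC_ROUTE_TABLE = {
    ipaddress.ip_network("203.0.113.0/25"): "lb-gw-cluster",
    ipaddress.ip_network("203.0.113.128/26"): "nat-gw-cluster",
    ipaddress.ip_network("203.0.113.192/26"): "eip-gw-cluster",
}

def select_gateway_cluster(dst_ip: str) -> Optional[str]:
    """Longest-prefix match of the first destination address against the
    data center's public network routing table."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in PUBLIC_ROUTE_TABLE if addr in net]
    if not matches:
        return None  # no functional gateway cluster serves this address
    best = max(matches, key=lambda net: net.prefixlen)
    return PUBLIC_ROUTE_TABLE[best]
```

Because all public IP segments live in one table, the distributor can make this decision without consulting the individual gateway clusters.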

According to an embodiment of the disclosure, the method further comprises: acquiring a first tunnel message from an intranet, and decapsulating the first tunnel message to obtain a decapsulated message, wherein the decapsulated message comprises a source address; determining an outer network core switch in the outer network for processing the decapsulated message according to the public network routing table of the data center; and sending the decapsulated message to the outer network core switch so that the outer network core switch processes the decapsulated message.
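A minimal sketch of the decapsulation step, assuming a toy 10-byte tunnel header (outer source IP, outer destination IP, inner length). The header layout is an assumption; a real deployment would use a standard encapsulation such as VXLAN or GRE.

```python
import struct

# Assumed toy tunnel header: outer source IP (4 bytes), outer destination
# IP (4 bytes), inner message length (2 bytes).  Not the disclosed format.
TUNNEL_HDR = struct.Struct("!4s4sH")

def decapsulate(tunnel_msg: bytes) -> bytes:
    """Strip the outer tunnel header and return the inner (decapsulated)
    message, whose own headers still carry the original source address."""
    _outer_src, _outer_dst, inner_len = TUNNEL_HDR.unpack_from(tunnel_msg)
    return tunnel_msg[TUNNEL_HDR.size:TUNNEL_HDR.size + inner_len]
```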

According to another embodiment of the disclosure, the method further comprises: receiving the decapsulated message from the outer network core switch when a second destination address of the decapsulated message is an address in the inner network; determining a second functional gateway cluster in the intranet for processing the decapsulated message according to the public network routing table of the data center and the second destination address; performing tunnel encapsulation on the decapsulated message to obtain a second tunnel message; and sending the second tunnel message to the second functional gateway cluster so that the second functional gateway cluster processes the second tunnel message.
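The re-encapsulation step of this hairpin path can be sketched with the same kind of assumed toy header; this is a sketch, not the disclosed tunnel format.

```python
import struct

# Assumed toy tunnel header (outer source IP, outer destination IP, inner
# length); illustrative only.
TUNNEL_HDR = struct.Struct("!4s4sH")

def encapsulate(inner: bytes, outer_src: bytes, outer_dst: bytes) -> bytes:
    """Wrap the decapsulated message in a new outer header, producing the
    second tunnel message sent to the second functional gateway cluster."""
    return TUNNEL_HDR.pack(outer_src, outer_dst, len(inner)) + inner
```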

According to a further embodiment of the disclosure, the method further comprises: performing rate-limit processing on the incoming message or the decapsulated message according to a preset rate-limit policy.
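The disclosure does not specify the rate-limit mechanism; a token bucket is one common realization, sketched below under that assumption (rates in bytes per second, values illustrative).

```python
import time
from typing import Optional

class TokenBucket:
    """Token-bucket sketch of a preset rate-limit policy; one plausible
    mechanism, not the one mandated by the disclosure."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes          # start with a full bucket
        self.last = time.monotonic()

    def allow(self, pkt_len: int, now: Optional[float] = None) -> bool:
        """Return True if a message of pkt_len bytes may pass now."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False  # over the limit: drop or queue the message
```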

Optionally, the first functional gateway cluster includes a plurality of functional gateways configured to implement the same public network service, and the public network service includes at least one of the following: a load balancing service, a network address translation service, and an elastic public network IP service.

Another aspect of the present disclosure provides a message processing apparatus applied to a data center, comprising: a first acquisition module, configured to acquire an incoming message from an external network, wherein the incoming message comprises a first destination address; a first determining module, configured to determine, according to a public network routing table of the data center and the first destination address, a first functional gateway cluster in an intranet for processing the incoming message; and a first sending module, configured to send the incoming message to the first functional gateway cluster, so that the first functional gateway cluster processes the incoming message.

According to an embodiment of the present disclosure, the apparatus further comprises: a second acquisition module, configured to acquire a first tunnel message from an intranet and decapsulate the first tunnel message to obtain a decapsulated message, wherein the decapsulated message comprises a source address; a second determining module, configured to determine, according to the public network routing table of the data center, an outer network core switch in the outer network for processing the decapsulated message; and a second sending module, configured to send the decapsulated message to the outer network core switch so that the outer network core switch processes the decapsulated message.

According to another embodiment of the present disclosure, the apparatus further comprises: a receiving module, configured to receive the decapsulated message from the outer network core switch when the second destination address is an address in the inner network; a third determining module, configured to determine, according to the public network routing table of the data center and the second destination address, a second functional gateway cluster in the intranet for processing the decapsulated message; a third obtaining module, configured to perform tunnel encapsulation on the decapsulated message to obtain a second tunnel message; and a third sending module, configured to send the second tunnel message to the second functional gateway cluster, so that the second functional gateway cluster processes the second tunnel message.

Another aspect of the present disclosure provides an electronic device including: one or more processors; storage means for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the message processing method applied to the data center described above.

Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions, which when executed by a processor, cause the processor to perform the above-mentioned message processing method applied to a data center.

Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the above message processing method applied to a data center.

According to the embodiments of the present disclosure, the method acquires an incoming message from an external network, wherein the incoming message comprises a first destination address; determines, according to a public network routing table of the data center and the first destination address, a first functional gateway cluster in the intranet for processing the incoming message; and sends the incoming message to the first functional gateway cluster so that the first functional gateway cluster processes the incoming message. Because the public network routing table of the data center uniformly publishes the public network IPs and their functions, the technical problems of complicated routing and IP conflicts that arise when each gateway publishes its own IPs are at least partially overcome, thereby achieving the technical effects of simplified route publishing for the data center and a more flexible network topology within the data center.

Drawings

The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:

fig. 1 schematically shows a system architecture of a message processing method and apparatus applied to a data center according to an embodiment of the present disclosure;

FIG. 2 schematically illustrates the structure of a Tofino chip within a Barefoot switch;

fig. 3 schematically shows a flowchart of a message processing method applied to a data center according to an embodiment of the present disclosure;

FIG. 4 schematically illustrates a flow chart of an inbound message handling method applied to a data center according to an embodiment of the present disclosure;

FIG. 5 schematically illustrates a flow chart of an outbound message processing method applied to a data center according to an embodiment of the present disclosure;

fig. 6 schematically shows an overall flowchart of a method for processing ingress and egress messages when a public network IP is accessed inside the data center according to an embodiment of the present disclosure;

fig. 7 schematically shows a split structure diagram of a functional gateway cluster in a message processing method applied to a data center according to an embodiment of the present disclosure;

fig. 8 is a block diagram schematically illustrating a structure of an incoming packet processing apparatus applied to a data center according to an embodiment of the present disclosure;

fig. 9 schematically shows a block diagram of an electronic device suitable for implementing a message processing method applied to a data center according to an embodiment of the present disclosure.

Detailed Description

Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.

All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.

Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).

The public network IP on a public cloud is an elastic public network IP: through a single public network IP, the public network service to be bound can be elastically selected, avoiding frequent changes of the public network IP address. In a data center of a public cloud scenario, gateway servers are divided into separate gateway clusters according to function, and each cluster bears one or more public-network-related services, including EIP (elastic IP address), SLB (server load balancing service), and NAT (network address translation service). In the process of realizing the concept of the present disclosure, the inventor found that, due to the elasticity of the public cloud public network IP, a fixed public network IP segment cannot be mapped at deployment time to the gateway service cluster providing a specific service, so routes cannot be published per whole segment. Therefore, the embodiments of the present disclosure provide a method and an apparatus for unified publishing of the data center's public network IP segment routes, so as to simplify the route publishing of the data center.
Meanwhile, because the public network IP segments are published uniformly, the lower-layer gateway clusters do not need to publish public network IP segments separately, so no routing conflicts arise in the data center. The structure of the gateway clusters need not be considered either: a back-end gateway cluster can be located at any reachable node of the underlay intranet in the machine room, where "underlay" refers to the basic forwarding structure of the current data center network and "underlay intranet" refers to the physical base-layer network. In addition, the gateway clusters can be split into different clusters by public network service function, and each function can be further split into multiple clusters by client, so that the lower-layer gateway clusters can each concentrate on their own service while gaining more flexible capacity-expansion capability.
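The two-level split just described, first by public network service function and then optionally by client, can be sketched as a keyed lookup. The service and cluster names below are hypothetical.

```python
from typing import Optional

# Hypothetical cluster registry: (service, client) -> cluster name, with
# None as the client key for a shared per-service cluster.
CLUSTERS = {
    ("slb", "tenant-a"): "slb-gw-a",
    ("slb", "tenant-b"): "slb-gw-b",
    ("nat", None): "nat-gw-shared",
}

def pick_cluster(service: str, client: str) -> Optional[str]:
    """Prefer a client-specific cluster for the service, then fall back
    to the shared cluster for that service."""
    return CLUSTERS.get((service, client)) or CLUSTERS.get((service, None))
```

Because only the distributor publishes public routes, adding a per-client cluster here requires no new route announcements from the gateways themselves.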

The inventor also found that, even if an x86 server is deployed in front of each public-network-service gateway cluster and the public network IP segment routes are published through DPDK (Intel's open-source high-performance network forwarding suite), the approach is limited by the processing capability of the CPU in the x86 server. Data center egress traffic is large, and the throughput of the whole cluster is usually improved by adding servers; however, x86 servers have a high unit cost, so expansion sharply increases server cost. Moreover, since expansion always relies on distributing flows across individual servers, when an attack occurs or a single flow is too large, a CPU on one server may still be exhausted and service interrupted, a problem expansion cannot solve. At the same time, limited by its own CPU, a server cannot meet the requirement of line-rate forwarding beyond 100 Gbps.

Therefore, the embodiments of the present disclosure also use a programmable switch, which has dozens of 100 Gbps network ports, a forwarding capability up to the terabit level, processing performance that is not significantly reduced by long packets, a lower cost, and the ability to integrate more network functions without reducing processing capability.

The embodiments of the present disclosure provide a message processing method and apparatus applied to a data center. The method includes: acquiring an incoming message from an external network, wherein the incoming message comprises a first destination address; determining a first functional gateway cluster in the intranet for processing the incoming message according to a public network routing table of the data center and the first destination address; and sending the incoming message to the first functional gateway cluster so that the first functional gateway cluster processes the incoming message.

Fig. 1 schematically shows a system architecture of a message processing method and apparatus applied to a data center according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.

As shown in fig. 1, the system architecture according to this embodiment may include an extranet portion and an intranet portion connected by a device with a built-in distributor; the intranet portion includes user terminals, gateway clusters, and corresponding switches, and the extranet portion mainly includes internet terminals. The device with the built-in distributor may be, for example, a server or a switch. An OVS (virtual switch) realizes isolation among users on the cloud; different users are connected to the switch serving them and finally to an inner network core switch. Because intranet users cannot directly access the external network, the intranet portion also includes gateway servers implemented as gateway clusters, including but not limited to the EIP-GW (gateway realizing the elastic public network service), LB-GW (gateway realizing the load balancing service), and NAT-GW (gateway realizing the network address translation service) shown in the figure. Each gateway cluster is connected to the switch serving it and finally to the inner network core switch, so as to translate the user-side intranet addresses into public network IPs; data from the internet terminals is forwarded in a targeted manner through the outer network core switch. The inner network core switch and the outer network core switch are connected through the distributor, realizing the connection of the inner and outer networks, so that intranet users can access the external internet through the translated public network IP. The connections between these components may include various connection types, such as wired and/or wireless communication links.

It should be noted that the message processing method applied to a data center provided by the embodiments of the present disclosure may generally be executed by the distributor. Accordingly, the message processing apparatus provided by the embodiments of the present disclosure may generally be disposed at the distributor. The method may also be implemented by a software/hardware structure that differs from but resembles the distributor, or by a functional module with a built-in distributor. Accordingly, the apparatus may also be disposed at a location different from the distributor but with a low information error rate when communicating with the distributor in practice, or connected to other architectures to achieve such a low error rate. Furthermore, the method may realize communication between an external network and an internal network of different terminals, between two external networks, or between two internal networks. Accordingly, the apparatus may be disposed between an outer network core switch and an inner network core switch, between two outer network core switches, or between two inner network core switches.

It should be understood that the number of terminals, networks, other serving devices, etc. in fig. 1 is merely illustrative. There may be any number of terminals, networks, and other serving devices, as desired for an implementation.

Fig. 2 schematically shows a chip, inside a white box switch, that implements the distributor function within the system architecture of the message processing method and apparatus applied to a data center according to the embodiments of the present disclosure. It should be noted that fig. 2 is only an example of a chip structure realizing the distributor function to which the embodiments of the present disclosure can be applied, to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot adopt other structures.

As shown in fig. 2, the chip structure according to this embodiment may include a parser (packet header parser) and a subsequent storage logic unit, which includes a memory and an arithmetic logic unit. The parser performs a primary parse of the data, and the functional characteristics corresponding to each piece of data are stored in the subsequent storage logic unit in the form of a subject-function table structure, ready for the next operation. The chip structure is not limited to that shown in fig. 2; for example, multiple connected hardware or software modules with a parsing function and hardware or software modules with storage and execution functions may be connected, again via wired and/or wireless communication links and the like.

The chip implementing the distributor function may also be, for example, another programmable chip. Whatever its structure, combined or single, the programmable chip must include a header parsing unit and a storage logic unit to determine and store the function table structure.

Users can forward and process data through the chip. The internet terminal and the user terminal devices to which the data is sent may be various electronic devices with display screens that support web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.

The internet terminal and the intranet user terminal may also be servers providing various services; the chip with the distributor function realizes information interaction and transmission among such servers.

It should be understood that the number of parsing header units and storage logic units, etc. in fig. 2 is merely illustrative. There may be any number of parsing header units and storage logic units, as desired for implementation.

Fig. 3 schematically shows a flowchart of a message processing method applied to a data center according to an embodiment of the present disclosure, including an incoming-message processing flow, an outgoing-message processing flow, and an overall ingress/egress processing flow when a public network IP is accessed inside the data center. Each of these flows is described in detail below.

It should be noted that the distributor is called a programmable traffic distributor. It first stores the public network IPs and their affiliated functions issued through the upper-layer API, and then completes data distribution according to them. The upper-layer API refers to a client program or software such as a web page or an app; issuing a public network IP through the upper-layer API means that a user purchases the public network IP through such a client. Each public network IP has a corresponding affiliated function, which includes the destination IP corresponding to the public network IP, the tunnel encapsulation format, and/or the controller configuration, and the like. The public network IPs and their functions are configured into a public network routing table, called the "table" in this embodiment; the table is then issued through the upper-layer API to the underlying software (the corresponding controller) for storage, according to the controller configuration corresponding to each public network IP. Data distribution processing includes realizing, in the distributor, the processing of ingress and egress data streams based on the table.
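An entry of the table described above might look like the following sketch. The field names follow the description (destination IP, tunnel encapsulation format, controller configuration), but the concrete schema and values are assumptions.

```python
from typing import Optional

# Illustrative table keyed by purchased public IP; each value mirrors the
# affiliated function described above.  Schema and values are assumed.
PUBLIC_IP_TABLE = {
    "203.0.113.10": {
        "function": "slb",               # public network service
        "target_ip": "10.0.0.21",        # destination behind the gateway
        "tunnel": "vxlan",               # tunnel encapsulation format
        "controller": "slb-controller",  # controller configuration owner
    },
}

def lookup_function(public_ip: str) -> Optional[dict]:
    """Return the affiliated function of a public IP, as the distributor
    consults the table before distributing a message."""
    return PUBLIC_IP_TABLE.get(public_ip)
```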

In this way, the public network IPs obtained from different channels are stored in one table, realizing unified publishing of public network IPs by the data center. This solves the problems of complicated routing and possible IP conflicts when each gateway publishes its own IPs, simplifies the route publishing of the data center, and makes the network topology in the data center more flexible. Users can purchase the data center's public network IPs through any client, such as a web page or other software.

In practical applications, the distributor needs to be built into a server or a switch. As an optional embodiment, a white-box switch is selected: a packet-header parser is developed on the programmable hardware of the white-box switch (i.e., the programmable distributor), the match-action table structure is defined, and the table is written into the storage logic unit of the distributor chip, see fig. 1. The switch carries an x86 chip running a linux operating system, on which a controller and dynamic routing software are deployed. The controller receives the public network ip addresses and their functions issued by the upper-layer api, constructs the corresponding configuration, and writes it into the table of the switch chip. The routing software publishes a dynamic routing protocol; its uplink port connects to the uplink port of the distributor to supply incoming messages from the external network to the distributor, and its downlink port connects to the downlink port of the distributor to supply outgoing messages from the internal network to the distributor and to attract traffic destined for the tunnel ip from the internal network.
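The controller-side behavior just described can be sketched as follows (the callback names and entry fields are assumptions for illustration; the real controller installs match-action entries into the switch chip's storage logic):

```python
import ipaddress

def on_api_publish(table, public_prefix, dest_ip, tunnel_format, controller):
    """Hypothetical controller callback: the upper-layer api publishes a public
    network prefix with its affiliated function, and the controller installs a
    match-action entry (match = the prefix, action = encapsulate toward dest_ip)
    into the table held by the switch chip."""
    table[ipaddress.ip_network(public_prefix)] = {
        "dest_ip": dest_ip,
        "tunnel_format": tunnel_format,
        "controller": controller,
    }

def on_api_withdraw(table, public_prefix):
    """Remove the entry when the public ip is no longer published."""
    table.pop(ipaddress.ip_network(public_prefix), None)
```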

In the above scheme, a programmable switch works together with the distributor. A programmable switch provides tens of 100 G network ports and forwarding capacity up to the terabit level, and its processing performance does not degrade noticeably for long data packets; its price is lower; and more network functions can be integrated without reducing processing capacity.

It should be noted that another type of switch may be chosen instead of the white-box switch, and the switch may even be replaced with an x86 server or another server; however, a server architecture cannot provide terabit-level bandwidth or line-rate processing of small packets and costs more, so the advantages of the programmable switch are lost when a server is used.

Fig. 4 schematically shows a flowchart of an inbound message processing method applied to a data center according to an embodiment of the present disclosure.

As shown in fig. 4, the method includes operations S101 to S103.

In operation S101, an incoming packet from an external network is obtained, where the incoming packet includes a first destination address;

in operation S102, a first functional gateway cluster in the intranet for processing the incoming message is determined according to the public network routing table of the data center and the first destination address.

In operation S103, the ingress packet is sent to the first functional gateway cluster, so that the first functional gateway cluster processes the ingress packet.

According to the embodiment of the disclosure, before the incoming message is sent to the first functional gateway cluster, the incoming message may be rate-limited according to a preset rate-limiting policy. According to an embodiment of the present disclosure, the intranet may be connected to the extranet, but its access range is smaller: the extranet may be regarded as a wide area network and the intranet as a local area network. The intranet cannot access the extranet directly; a public network ip must be obtained through the gateway cluster to enable communication between the two. An incoming message is an interaction unit in network communication and contains the complete interaction information, a source ip, a destination ip, and other fields. The first destination address is the destination ip address in the incoming message. The public network routing table is the table mentioned above. The first functional gateway cluster is a functional gateway cluster that can provide the public network ip for communication between the external network and the internal network. The first functional gateway cluster processing the incoming message means forwarding the message information to the destination terminal in the intranet.

The method shown in fig. 4 will be further described with reference to fig. 1 and the incoming message processing flow in fig. 3, in conjunction with specific embodiments.

As shown in fig. 1, in this architecture the internet belongs to the external network and is connected to the external network core switch, while the OVS and each functional gateway cluster belong to the internal network and are connected to the internal network core switch; the gateway clusters provide the public network ip through which the intranet OVS interacts with the external network. The distributor holds a table that enables traffic steering and distribution, and all public network segments published by the data center, together with their functions, are stored in this table.

As shown in the incoming message processing flow of fig. 3, when no rate limiting is applied to the incoming message: when a new message (i.e., an incoming message) from the internet passes through the distributor, the destination ip address in the incoming message is matched against the public network ip segments in the distributor's table. If the match fails, the message is forwarded directly to a default server or switch; if the match succeeds, the message is encapsulated to obtain a tunnel message, the tunnel message is forwarded to the corresponding destination gateway cluster according to the destination ip address in the incoming message, and finally the tunnel message is forwarded to the destination terminal.

When the incoming message is rate-limited according to a preset rate-limiting policy: if the destination ip address in the incoming message successfully matches a public network ip segment in the distributor's table, the message is first rate-limited according to the preset policy, then encapsulated to obtain a tunnel message, which is forwarded to the corresponding destination gateway cluster and finally to the destination terminal.
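The two ingress variants above (with and without rate limiting) can be condensed into one sketch; the packet representation and the action names are assumptions for illustration, not the chip's actual pipeline:

```python
import ipaddress

def match_segment(table, ip):
    """Match an ip against the public network segments in the table."""
    addr = ipaddress.ip_address(ip)
    hits = [net for net in table if addr in net]
    return table[max(hits, key=lambda net: net.prefixlen)] if hits else None

def process_ingress(pkt, table, rate_limit=None):
    """Ingress flow: match the destination ip, optionally rate-limit, then
    encapsulate into a tunnel message bound for the destination gateway cluster."""
    entry = match_segment(table, pkt["dst_ip"])
    if entry is None:
        return ("forward_default", pkt)          # unmatched: default server/switch
    if rate_limit is not None and not rate_limit(pkt):
        return ("drop", pkt)                     # rejected by the rate-limiting policy
    tunnel_pkt = {"tunnel": entry["tunnel_format"], "inner": pkt}
    return ("forward_gateway_cluster", tunnel_pkt)
```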

It should be noted that the rate-limiting policy includes, but is not limited to, setting a rate-limiting condition for a specific public network ip itself, or performing a corresponding rate-limiting operation according to a rate-limiting condition once the traffic of a public network ip exceeds a set threshold.
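One concrete rate-limiting condition of the kind described, a per-public-ip token bucket with a byte rate and a burst size, could look like the following (purely illustrative; the embodiment does not prescribe a specific algorithm):

```python
import time

class TokenBucket:
    """Illustrative per-public-ip rate limiter: up to `rate` bytes/s with a
    burst of `burst` bytes. The `now` parameter allows injecting a clock."""
    def __init__(self, rate, burst, now=time.monotonic):
        self.rate = rate
        self.burst = burst
        self.now = now
        self.tokens = burst
        self.last = now()

    def allow(self, nbytes):
        """Return True if a message of `nbytes` may pass under the policy."""
        t = self.now()
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```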

Through this implementation, traffic from the external network is ultimately forwarded to the destination terminal in the internal network via the public network ip. Based on the functional attributes recorded when the public network ip is distributed, a single unified table directly determines whether the destination ip address of a message falls within this network's scope, i.e., whether the message belongs to a user of this network, without the message passing through a gateway cluster. This makes it convenient to decide whether to apply rate limiting to the user, saves traffic, and increases the flexibility with which gateway clusters can be split.

Fig. 5 schematically shows a flowchart of an outbound message processing method applied to a data center according to an embodiment of the present disclosure.

As shown in fig. 5, the method includes operations S201 to S204.

In operation S201, a first tunnel message from an intranet is obtained and decapsulated to obtain a decapsulated message, where the decapsulated message includes a source address;

in operation S202, according to the public network routing table of the data center, an external network core switch in the external network for processing the decapsulated packet is determined.

In operation S203, the decapsulated message is subjected to rate limiting processing according to a preset rate limiting policy.

According to the embodiment of the present disclosure, operation S204 may also be performed directly, without rate-limiting the decapsulated message.

In operation S204, the decapsulated message is sent to an outer network core switch, so that the outer network core switch processes the decapsulated message.

According to the embodiment of the present disclosure, the first tunnel message is a data message after tunnel encapsulation; the tunnel encapsulation may use different encapsulation formats to distinguish the object to which a data message belongs. Decapsulating the tunnel message yields the original data message (i.e., the decapsulated message), which contains the complete interaction information, a source ip, a destination ip, and other fields. The source address is the source ip of the decapsulated message. The external network core switch is the switch through which the accessed external network is reached; which external network core switch to use is determined from the source ip and the table. After the external network core switch processes the decapsulated message, the message information is forwarded to the destination terminal in the external network.

The method shown in fig. 5 will be further described with reference to fig. 1 and the outgoing message processing flow on the left side of fig. 3, in conjunction with a specific embodiment.

As shown in fig. 1, when a user in an intranet needs to access the internet (external network) through a public network ip, the user first converts the intranet ip into a public network ip through a functional gateway cluster connected in the same intranet, and then accesses the external internet through that public network ip. Since public network ip addresses are published uniformly by the data center, any public network ip obtained by an intranet user within the data center's scope necessarily exists in the table. When a user's request message (i.e., an outgoing message) passes through the distributor, the message is not necessarily within the scope governed by this device (the distributor), owing to the diversity of users; messages that are within this scope have been encapsulated into tunnel messages of a specific format, as in the embodiment of fig. 4.

As shown in the outgoing message processing flow on the left side of fig. 3, when operation S203 is not performed: if the message is not a matching tunnel message, it is unrelated to this device's jurisdiction and is forwarded directly to a default server or switch; if it is a matching tunnel message, the outgoing message is decapsulated to obtain a decapsulated message, which is then forwarded to the corresponding external network core switch according to its source ip address, finally reaching the destination address to be accessed.

When operation S203 is performed: once the tunnel message matches, it is decapsulated to obtain a decapsulated message, and the source ip address in the decapsulated message is matched against the public network ip segments in the distributor's table. If the match succeeds, the message is rate-limited according to the preset rate-limiting policy and then forwarded to the corresponding external network core switch; if the match fails, the message is forwarded directly to the corresponding external network core switch, finally reaching the destination address to be accessed.
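Both egress variants can be sketched together; as before, the packet model, the action names, and the treatment of non-tunnel traffic are assumptions for illustration:

```python
import ipaddress

def process_egress(tunnel_pkt, table, rate_limit=None):
    """Egress flow: decapsulate the tunnel message, match its source ip against
    the public network segments, optionally rate-limit on a match, and forward
    the decapsulated message to the external network core switch."""
    if "tunnel" not in tunnel_pkt:
        return ("forward_default", tunnel_pkt)   # not a tunnel message we govern
    pkt = tunnel_pkt["inner"]                    # the decapsulated message
    addr = ipaddress.ip_address(pkt["src_ip"])
    matched = any(addr in net for net in table)
    if matched and rate_limit is not None and not rate_limit(pkt):
        return ("drop", pkt)
    # Matched or not, the decapsulated message goes to the corresponding
    # external network core switch, per the flow described above.
    return ("forward_core_switch", pkt)
```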

Through this implementation, a user in the internal network can ultimately access a destination address in the external network through the public network ip. Based on the functional attributes recorded when the public network ip is distributed, a single unified table directly determines whether the source ip address of a message is within the data center's jurisdiction, i.e., whether the message comes from a user of this network, without involving a gateway cluster. This makes it easy to judge whether the user needs rate limiting, saves processing steps, and increases the flexibility with which gateway clusters can be split.

Fig. 6 schematically shows an overall flowchart of a method for ingress and egress message processing when accessing a public network ip inside a data center according to an embodiment of the present disclosure.

As shown in fig. 6, the method includes operations S301 to S309.

In operation S301, a first tunnel message from an intranet is obtained and decapsulated to obtain a decapsulated message, where the decapsulated message includes a source address and a second destination address.

In operation S302, an external network core switch in the external network for processing the decapsulated packet is determined according to the public network routing table of the data center.

In operation S303, the decapsulated message is subjected to rate-limiting processing according to a preset rate-limiting policy.

In operation S304, the decapsulated message is sent to an outer network core switch, so that the outer network core switch processes the decapsulated message.

In operation S305, in a case that the second destination address is an address in the intranet, receiving the decapsulated packet from the outer network core switch again;

in operation S306, a second functional gateway cluster in the intranet for processing the decapsulated message is determined according to the public network routing table of the data center and the second destination address.

In operation S307, the decapsulated message is subjected to rate-limiting processing according to a preset rate-limiting policy.

In operation S308, the decapsulated packet is tunnel-encapsulated to obtain a second tunnel packet.

In operation S309, the second tunnel packet is sent to the second functional gateway cluster, so that the second functional gateway cluster processes the second tunnel packet.

According to an embodiment of the present disclosure, the method for processing ingress and egress packets when accessing the public network ip inside the data center may not include operations S303 and S307.

The second destination address refers to a destination ip of the decapsulated message, and the second destination address is an address in the intranet, that is, the address indicates that a final destination to which the message is forwarded belongs to a user in the intranet under the control of the data center. The second functional gateway cluster is a functional gateway cluster of the public network ip which can provide communication between the original intranet to which the message belongs and the target intranet to which the message is forwarded. The second tunnel message is a tunnel message obtained by encapsulating the decapsulated message again.

The method shown in fig. 6 will be further described with reference to fig. 1 and the overall incoming-and-outgoing processing flow in fig. 3, in conjunction with a specific embodiment.

As shown in fig. 1, when a public network ip is accessed inside the data center, that is, when two terminals in the same intranet or in different intranets under the jurisdiction of the same data center access each other, communication between the intranet terminals is still carried over public network ip, so the intranet ip of each terminal must still be converted into a public network ip through a functional gateway cluster. Because both intranet terminals are governed by the same data center, and public network ip addresses are published uniformly by the data center, the public network ip obtained by each terminal is in the table. The request message sent by the user (i.e., the outgoing message) is within the data center's scope and is itself a tunnel message, so there is no need to judge whether it is a tunnel message.

As shown in the overall incoming-and-outgoing processing flow of fig. 3, without operations S303 and S307: the tunnel message (i.e., the outgoing message) is decapsulated to obtain a decapsulated message; after the source ip address in the decapsulated message matches a public network ip segment in the distributor's table, the decapsulated message is forwarded to the corresponding external network core switch for further processing. During this processing, if the destination ip address in the decapsulated message is found to be within a public network ip segment published by the data center, the decapsulated message (now acting as an incoming message) is pulled back to the distributor; after the destination ip address successfully matches a public network ip segment in the distributor's table, the decapsulated message is encapsulated into a tunnel message, which is forwarded to the corresponding destination gateway cluster and finally to the destination terminal.

When operations S303 and S307 are performed: first, after the source ip address in the decapsulated message matches a public network ip segment in the distributor's table, the message is rate-limited according to the preset rate-limiting policy and then forwarded to the corresponding external network core switch for further processing; second, after the destination ip address in the decapsulated message successfully matches a public network ip segment in the distributor's table, the message is rate-limited according to the preset policy, encapsulated into a tunnel message, forwarded to the corresponding destination gateway cluster, and finally forwarded to the destination terminal.
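The hairpin behavior, egress traffic that is pulled back as ingress when its destination is also a published public ip, can be sketched as follows; the packet model and the fixed tunnel format are illustrative assumptions:

```python
import ipaddress

def in_table(table, ip):
    """True if the ip falls in any public network segment published in the table."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in table)

def hairpin(tunnel_pkt, table):
    """Sketch of intra-data-center access over public ip (operations S301-S309,
    without the optional rate limiting of S303/S307)."""
    pkt = tunnel_pkt["inner"]                    # S301: decapsulate the first tunnel message
    if not in_table(table, pkt["dst_ip"]):
        return ("forward_core_switch", pkt)      # S304: ordinary egress to the extranet
    # S305-S309: the destination ip is also published by this data center, so the
    # decapsulated message is pulled back to the distributor, re-encapsulated as a
    # second tunnel message, and sent to the second functional gateway cluster.
    return ("forward_gateway_cluster", {"tunnel": "vxlan", "inner": pkt})
```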

Through this implementation, two terminals in the same intranet or in different intranets of the same data center can access each other through public network ip. Based on the functional attributes recorded when the public network ip is distributed, a single unified table directly determines whether both the source and destination ip addresses of a message are within the data center's jurisdiction, i.e., whether both directions of the message belong to this network, without involving a functional gateway cluster. This makes it easy to judge whether the user needs rate limiting, avoids the complicated inter-gateway routing that arises when the incoming and outgoing directions of a message belong to different functional gateway clusters, saves traffic, and increases the flexibility with which gateway clusters can be split.

Fig. 7 schematically shows a split structure diagram of a functional gateway cluster in a message processing method applied to a data center according to an embodiment of the present disclosure.

Based on the specific embodiments shown in figs. 4 to 6, the first functional gateway cluster includes a plurality of functional gateways configured to implement the same public network service, where the public network service includes at least one of the following: a load balancing service, a network address translation service, and an elastic public network ip service.

The split structures shown in fig. 7 are further described with reference to fig. 1 and fig. 7, in conjunction with specific embodiments.

As shown in fig. 7, a functional gateway cluster includes, but is not limited to, one or more of an EIP-GW (a gateway implementing the elastic public network service), an LB-GW (a gateway implementing the load balancing service), and a NAT-GW (a gateway implementing the network address translation service). A service may be implemented by multiple EIP-GWs, multiple LB-GWs, multiple NAT-GWs, or a combination of any two or more of them. For example, in 401, network address translation is implemented by multiple NAT-GWs in the same or different NAT clusters. In 402, an EIP cluster (containing multiple EIP-GWs) and an LB cluster (containing multiple LB-GWs) are combined, so that the elastic public network service and the load balancing service are provided at the same time. In 403, a combination of EIP-GWs, LB-GWs, and NAT-GWs provides the elastic public network, load balancing, and network address translation services simultaneously, where the network address translation service may be implemented in different gateway clusters.
A functional gateway cluster obtained by such splitting can be further subdivided into sub-clusters according to clients, as shown in 404: the EIP gateway cluster implementing the elastic public network service is further split by user into a gateway sub-cluster matching the functions required by user 1 and a gateway sub-cluster matching the functions required by user 2; or the LB gateway cluster implementing the load balancing service is further split by user into a gateway sub-cluster matching the functions required by user 1 and a gateway sub-cluster matching the functions required by user 3; or a gateway cluster composed of EIP-GWs and LB-GWs, which provides the elastic public network and load balancing services at the same time, is further split by user into gateway sub-clusters matching the functions required by user 1, user 2, and user 3 respectively; or, when user 2 also needs the network address translation service, a sub-cluster of the NAT gateway cluster can be assigned to it directly; and so on.
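The split layouts of fig. 7 can be described declaratively; the labels follow the figure, but the gateway names and cluster membership below are illustrative assumptions:

```python
# Hypothetical declarative description of the gateway-cluster splits in fig. 7.
clusters = {
    "401":       {"services": {"NAT"},       "gateways": ["nat-gw-1", "nat-gw-2"]},
    "402":       {"services": {"EIP", "LB"}, "gateways": ["eip-gw-1", "lb-gw-1"]},
    "403":       {"services": {"EIP", "LB", "NAT"},
                  "gateways": ["eip-gw-2", "lb-gw-2", "nat-gw-3"]},
    "404/user1": {"services": {"EIP", "LB"}, "gateways": ["gw-u1"]},
    "404/user2": {"services": {"EIP", "NAT"}, "gateways": ["gw-u2"]},
}

def clusters_for(service):
    """All clusters (or sub-clusters) able to provide a given public network service."""
    return sorted(name for name, c in clusters.items() if service in c["services"])
```

Because the table, not the gateway architecture, records which service each public ip belongs to, reorganizing this structure only requires updating the corresponding table entries.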

In the above embodiments, if public network ip addresses were distributed by the functional gateway clusters themselves, such splitting would be difficult or even impossible. In the embodiment of the present disclosure, however, the public network ip and its affiliated function are clearly defined in the table and can be changed flexibly according to the user's selection or purchase: only the corresponding function in the table needs to be changed, so the method is not affected by the functional gateway cluster architecture, and flexible use of the elastic public network is possible no matter how the gateway clusters change. With this implementation, when processing messages in the data center, the functional gateway clusters can be expanded without limit, or their structure can be changed arbitrarily as required, so that the underlying functional gateway clusters can each focus on their own service while retaining more flexible expansion capability.

Fig. 8 is a block diagram schematically illustrating a structure of an inbound message processing device applied to a data center according to an embodiment of the present disclosure.

As shown in fig. 8, the inbound message processing apparatus 500 includes a first obtaining module 501, a first determining module 502, and a first sending module 503.

A first obtaining module 501, configured to obtain an incoming packet from an external network, where the incoming packet includes a first destination address.

A first determining module 502, configured to determine, according to the public network routing table of the data center and the first destination address, a first functional gateway cluster in the intranet for processing the incoming message.

A first sending module 503, configured to send the incoming packet to the first functional gateway cluster, so that the first functional gateway cluster processes the incoming packet.

The inbound message processing apparatus 500 may further include a second obtaining module, a second determining module, and a second sending module.

A second obtaining module, configured to obtain a first tunnel message from the intranet and decapsulate it to obtain a decapsulated message, where the decapsulated message includes a source address.

A second determining module, configured to determine, according to the public network routing table of the data center, an external network core switch in the external network for processing the decapsulated message.

A second sending module, configured to send the decapsulated message to the external network core switch, so that the external network core switch processes the decapsulated message.

The inbound message processing apparatus 500 may further include a receiving module, a third determining module, a third obtaining module, and a third sending module.

A receiving module, configured to receive the decapsulated message from the external network core switch again when the second destination address is an address in the intranet.

A third determining module, configured to determine, according to the public network routing table of the data center and the second destination address, a second functional gateway cluster in the intranet for processing the decapsulated message.

A third obtaining module, configured to perform tunnel encapsulation on the decapsulated message to obtain a second tunnel message.

A third sending module, configured to send the second tunnel packet to the second functional gateway cluster, so that the second functional gateway cluster processes the second tunnel packet.

Any number of the modules or units according to embodiments of the present disclosure, or at least part of the functionality of any number thereof, may be implemented in one module. Any one or more of the modules or units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules or units according to the embodiments of the present disclosure may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging the circuits, or in any one of three implementations of software, hardware and firmware, or in any suitable combination of any of them. Alternatively, one or more of the modules or units according to embodiments of the present disclosure may be implemented at least partly as computer program modules, which, when executed, may perform corresponding functions.

For example, any plurality of the first obtaining module 501, the first determining module 502 and the first sending module 503 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first obtaining module 501, the first determining module 502, and the first sending module 503 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or implemented by a suitable combination of any of them. Alternatively, at least one of the first obtaining module 501, the first determining module 502, the first sending module 503 may be at least partly implemented as a computer program module, which when executed may perform a corresponding function.

For another example, the packet-header parser and the storage logic unit in the distributor shown in fig. 2 may be combined and implemented in one unit, or either of them may be split into multiple units. Alternatively, at least part of the functionality of one or more of these units may be combined with at least part of the functionality of other units and implemented in one unit. According to an embodiment of the present disclosure, at least one of the packet-header parser and the storage logic unit in the distributor may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable manner of integrating or packaging a circuit, or may be implemented in any one of the three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the packet-header parser and the storage logic unit in the distributor may be at least partly implemented as a computer program module which, when executed, may perform the corresponding function.

It should be noted that the message processing apparatus portion in the embodiment of the present disclosure corresponds to the message processing method portion in the embodiment of the present disclosure, and the description of the message processing apparatus portion specifically refers to the message processing method portion, which is not described herein again.

Fig. 9 schematically shows a block diagram of an electronic device adapted to implement the message processing method applied to the data center described above according to an embodiment of the present disclosure. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 9, an electronic device 600 according to an embodiment of the present disclosure includes a processor 601 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. Processor 601 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.

In the RAM 603, various programs and data necessary for the operation of the system 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM 602 and RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.

According to an embodiment of the present disclosure, the system 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The system 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as required.

According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.

The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.

According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 602 and/or RAM 603 described above and/or one or more memories other than the ROM 602 and RAM 603.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.

The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.
