Distributed load balancer health management using a data center network manager

Document No.: 1967117    Publication date: 2021-12-14

Abstract: This technology, Distributed load balancer health management using a data center network manager, was created by Manish Chandra Agrawal, Samar Sharma, Shyam Kapadia, and Lukas Krattiger, filed 2020-04-24. The technology of the present disclosure relates to a load balancing system. The load balancing system is configured to receive, at a controller, health monitoring metric values from a plurality of leaf switches. The load balancing system is further configured to determine, based on the health monitoring metric values, that a server has failed, and to modify a load balancing configuration of the network fabric. The load balancing system is further configured to transmit the load balancing configuration to each leaf switch in the network fabric and to update a table in each leaf switch to reflect the available servers.

1. A method, comprising:

receiving, at a controller, health monitoring metric values from a plurality of load balancer leaf switches in a network fabric, wherein the health monitoring metric values from each of the plurality of load balancer leaf switches are associated with a local server managed by the leaf switch;

determining that a server in the network fabric has failed based on the health monitoring metric values; and

modifying a load balancing configuration of the network fabric.

2. The method of claim 1, wherein a particular load balancer leaf switch probes only one or more local servers to which the particular load balancer leaf switch is connected.

3. The method of claim 1 or 2, further comprising subscribing to a messaging service associated with the health monitoring metric values of the one or more servers.

4. The method of any of the preceding claims, wherein the plurality of load balancer leaf switches continuously publish health monitoring metric values for their local servers.

5. The method of any of the preceding claims, further comprising sending the load balancing configuration to each of the plurality of load balancer leaf switches in the network fabric.

6. The method of any of the preceding claims, wherein sending the load balancing configuration to each of the plurality of load balancer leaf switches in the network fabric comprises:

in each load balancer leaf switch, updating an entry in a Static Random Access Memory (SRAM) table, the entry corresponding to a Ternary Content Addressable Memory (TCAM) table entry for a server in the network fabric that has failed.

7. The method of claim 6, wherein updating the Static Random Access Memory (SRAM) table in each load balancer leaf switch results in load balancing of client traffic across the available active servers.

8. The method of claim 7, wherein the available active servers can be placed in a standby state by user configuration.

9. The method of any of the preceding claims, wherein the health monitoring metric values associated with local servers managed by the load balancer leaf switch comprise health monitoring metric values of services hosted on the local servers.

10. A system, comprising:

one or more processors; and

a computer-readable storage medium having instructions stored therein, which when executed by the one or more processors, cause the one or more processors to perform operations comprising:

receiving, at a controller, health monitoring metric values from a plurality of load balancer leaf switches in a network fabric, wherein the health monitoring metric values from each of the plurality of load balancer leaf switches are associated with local services managed by the leaf switch;

determining that one or more servers in the network fabric have failed based on the health monitoring metric values;

modifying a load balancing configuration of the network fabric; and

sending the load balancing configuration to each of the plurality of load balancer leaf switches in the network fabric.

11. The system of claim 10, wherein the controller is an application installed on a server in communication with the leaf switch over a management network.

12. The system of claim 11, wherein the management network tracks the performance of the entire external network.

13. The system of any of claims 10 to 12, wherein each of the plurality of load balancer leaf switches has a Ternary Content Addressable Memory (TCAM) table and a Static Random Access Memory (SRAM) table.

14. The system of any of claims 10 to 13, wherein a load balancer leaf switch connected to a server in the network fabric that has failed issues a notification to the controller.

15. The system of claim 14, wherein the controller sends a message to each of the plurality of load balancer leaf switches to modify, in a Static Random Access Memory (SRAM) table corresponding to a Ternary Content Addressable Memory (TCAM) table, the SRAM entry of the server in the network fabric that has failed.

16. A computer-readable medium carrying one or more sequences of instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of:

receiving, at a controller, a health monitoring metric value from one or more load balancer leaf switches in a network fabric;

determining that one or more service nodes in the fabric have failed;

modifying a load balancing configuration of the one or more service nodes in the network fabric based on the health monitoring metric values;

sending the load balancing configuration to each of the one or more load balancer leaf switches in the network fabric; and

updating, in each leaf switch, a Static Random Access Memory (SRAM) table entry corresponding to a Ternary Content Addressable Memory (TCAM) table entry for a server in the network fabric that has failed.

17. The computer-readable medium of claim 16, wherein a particular load balancer leaf switch only probes one or more local service nodes to which the particular load balancer leaf switch is connected.

18. The computer-readable medium of claim 16 or claim 17, wherein the controller subscribes to health monitoring metric values for the one or more service nodes.

19. The computer-readable medium of any of claims 16-18, wherein the one or more load balancer leaf switches continuously publish health monitoring metric values for the one or more service nodes.

20. The computer-readable medium of any of claims 16 to 19, wherein updating a Ternary Content Addressable Memory (TCAM) table and a Static Random Access Memory (SRAM) table in each load balancer leaf switch results in load balancing of client traffic across the available active service nodes.

Technical Field

The subject matter of the present disclosure relates generally to the field of data center networks, and more particularly to load balancing within a distributed data center network.

Background

A typical data center network contains a myriad of network elements including servers, load balancers, routers, switches, and the like. Load balancing devices may be used to distribute workload among multiple nodes. The load balancing device may include a health monitoring application to determine a status associated with a node, such as availability of the node for receiving workload. The load balancing device may determine the status by periodically probing the nodes.

In a distributed data center, servers and Virtual Machines (VMs) may be distributed throughout the fabric and attached to different leaf switches. Distributed load balancing enables load balancing across servers distributed throughout the fabric. Each leaf switch must probe every server in order to know the state of the servers connected to all other leaf switches, which results in a large amount of control traffic being injected into the fabric.

Summary

In known computing systems, a workload server cluster may be provided as physical servers or virtual machines to deliver desired services to end users or clients. Load balancing is required to implement the above-described functions in a standalone switching fabric. The techniques of this disclosure address the need in the art for a more efficient approach to distributed load balancer management, in which client requests are distributed across multiple application servers. The techniques of this disclosure allow a controller to probe each leaf switch, while each leaf switch probes only its local server or servers. The controller monitors, tracks, and reports the health of the servers.

Various embodiments of the subject technology address these and other technical problems by providing a controller that performs load balancing by subscribing to health monitoring metric values provided by leaf switches. The controller operates through an application running on a server. The controller may communicate with the switching fabric through a management network within a larger network framework.

Detailed Description

Various embodiments of the present disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.

It is desirable to be able to monitor the health of servers or applications connected to another leaf switch in the fabric without probing each server. When each leaf switch in the fabric probes every server in the fabric, regardless of whether the server is attached to that leaf switch, a large amount of control traffic is injected into the fabric, consuming a large amount of bandwidth.

Various embodiments relate to a load balancing device configured to detect, monitor, track, and report the health of a server or of an application running on the server. The load balancing device receives the health monitoring metric values, determines that a server is down, and modifies the load balancing configuration of the network fabric. The health monitoring metric values are obtained by the controller querying each leaf switch in the fabric, and each leaf switch in the fabric probes only the servers to which it is connected.
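
To make this division of labor concrete, the following Python sketch shows each leaf switch probing only its directly attached servers while the controller simply collects one report per leaf. The names icmp_probe, LeafSwitch, and Controller are illustrative assumptions, not elements defined in this disclosure.

    # Sketch: leaves probe only locally attached servers; the controller
    # aggregates one report per leaf instead of probing every server itself.
    import subprocess

    def icmp_probe(ip, timeout_s=1):
        """Return True if the server answers a single ICMP echo request."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), ip],
            capture_output=True,
        )
        return result.returncode == 0

    class LeafSwitch:
        def __init__(self, name, local_servers):
            self.name = name
            self.local_servers = local_servers  # only directly attached servers

        def probe_local_servers(self):
            # No probe ever targets a server attached to another leaf.
            return {ip: icmp_probe(ip) for ip in self.local_servers}

    class Controller:
        def collect(self, leaves):
            # One query per leaf, not one probe per (leaf, server) pair.
            return {leaf.name: leaf.probe_local_servers() for leaf in leaves}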

Including the load balancing function in the controller reduces traffic in the access layer of the fabric, which may increase available capacity and reduce packet loss rates in the data center.

Fig. 1 is a simplified schematic diagram illustrating an example data center network 100 in which systems and/or methods described herein may be implemented. As shown in fig. 1, an exemplary data center network 100 may include leaf switches 110 connected to one or more host servers 120, each of which may run a set of virtual machines 130. Each host server 120 communicates on a particular subnet, which results in that subnet being installed on the attached leaf switch 110.

A virtual extensible local area network (VXLAN) or other encapsulation may be used to implement overlay network 140. In some embodiments, data packets are communicated from one end device to another end device over the data center network 100. The network in fig. 1 has leaf switches 110 and spine switches 160, as well as service nodes (e.g., host servers 120, VMs, containers, microservices, application units, etc.). Each spine switch 160 is connected to all leaf switches 110, and the host servers 120 are connected to the leaf switches 110.

The data center network 100 shown in fig. 1 may contain mappings such that spine switch 160 knows to which leaf switch 110 each host server 120 is attached. It should be understood that leaf switches 110 and spine switches 160 may have switching or routing capabilities. In one example, the spine switch 160 acts as a route reflector in the network fabric of the data center network 100.

The distributed Internet Protocol (IP) anycast gateway may be at the leaf layer or the access layer. The architecture is based on a leaf-spine topology. There are border leaf switches 150 that connect the fabric to the external network. A spine switch 160 with border functions may also be used.

Tables are maintained in the forwarding logic of the leaf switches 110. In some embodiments, these tables are similar to the forwarding tables maintained in legacy networks. Encapsulation allows a network administrator to move a host server 120 from one leaf switch 110 to another leaf switch 110. In various embodiments, only the tables of the leaf switches 110 know the identity details of the host servers 120.

Border leaf switches 150 may connect different data centers to the IP backbone, allowing layer 2 traffic to be sent and received between one data center and another. To this end, in the data center, the leaf routers each act as a VXLAN Tunnel Endpoint (VTEP). The VTEPs create and terminate VXLAN segments, and each VTEP maps host servers 120 to VXLAN segments and performs VXLAN encapsulation and decapsulation.

In fig. 1, to support overlay network 140, leaf switch 110 is configured as a VTEP that creates and terminates VXLAN segments defining overlay network 140. For example, leaf switch 110 performs VXLAN encapsulation and decapsulation and maps host server 120 to VXLAN segments. Typically, the subnet is a layer 3 structure and the VXLAN segment is a layer 2 structure.
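
As background for how a VTEP maps a frame into a segment, the 8-byte VXLAN header defined in RFC 7348 (8 bits of flags, 24 reserved bits, a 24-bit VNI, and 8 reserved bits) can be modeled as below. This is an illustrative sketch only; the VNI value is an arbitrary assumption and the outer headers are omitted.

    # Sketch: building the 8-byte VXLAN header that a VTEP prepends to an
    # inner Ethernet frame (outer Ethernet/IP/UDP headers, UDP port 4789,
    # are omitted for brevity).
    import struct

    def vxlan_header(vni):
        flags = 0x08                        # I flag set: the VNI field is valid
        word1 = flags << 24                 # flags followed by 24 reserved bits
        word2 = (vni & 0xFFFFFF) << 8       # 24-bit VNI followed by 8 reserved bits
        return struct.pack("!II", word1, word2)

    def encapsulate(inner_frame, vni):
        return vxlan_header(vni) + inner_frame

    print(vxlan_header(5000).hex())         # 0800000000138800 -> VNI 5000 (0x001388)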

Host servers 120 may communicate on different subnets and may be assigned network addresses on those subnets. The larger network may be any type of fabric (such as a switched fabric) that employs the Border Gateway Protocol (BGP) as a control plane to advertise IP reachability within the larger network. The fabric can achieve optimal layer 2 and layer 3 forwarding by distributing IP reachability information over the control plane, which enables a distributed IP anycast gateway at the leaf or access layer.

As used herein, the term "subnet" refers to a logical grouping of connected network elements that share a range of contiguous IP addresses. A "host server" 120 is any end device in a data center network. In the data center network, a host may be a server, a client, or both. A client, in turn, is a computer with software that enables it to send requests for specific services to a server 120.

Fig. 2 is a simplified schematic diagram illustrating a controller 200 in a data center network in accordance with various aspects of the subject technology. The controller 200 may be implemented as an application 210 on a server, on a separate machine, as a service hosted outside of the data center network, or in some other configuration. The application 210 manages the individual components of the network within a larger network management framework and performs several key functions. The application 210 on the controller 200 identifies, configures, monitors, updates, and troubleshoots routers, switches, and other network components by collecting health metric values. The health metric values may include the number of connections to a service, the number of data packets sent to or by the service, the response time of the service to requests, and the network bandwidth used by the service.
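
As a purely illustrative data shape for these health metric values, a per-service record might look like the following sketch. The field names and the simple failure heuristic are assumptions, not the format used by any particular controller.

    # Sketch: one health-metric record per service, as reported by a leaf switch.
    from dataclasses import dataclass

    @dataclass
    class HealthMetrics:
        service: str
        connections: int            # number of connections to the service
        packets_to_service: int     # data packets sent to the service
        packets_from_service: int   # data packets sent by the service
        response_time_ms: float     # time taken to respond to a request
        bandwidth_mbps: float       # network bandwidth used by the service
        reachable: bool             # result of the leaf's local probe

        def indicates_failure(self) -> bool:
            # A node that stops answering its leaf's probes is treated as failed.
            return not self.reachable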

In some embodiments, the health metric values may indicate whether a service node has failed. The controller 200 may obtain the health metric values for the service nodes in the fabric by having each leaf switch in the data center probe its local services. In the messaging service, a publisher application creates a message and sends it to a topic; here, the publisher applications are each of the plurality of leaf switches 110, and the subscriber is the controller 200 on which the application 210 is running. The subscriber application creates a subscription to the topic in order to receive messages from that topic. The communication may be from one controller to multiple leaf switches. The publisher application creates a topic in the messaging service and sends messages to the topic. Each message contains a payload and optional attributes describing the payload. The messaging service forwards messages from the topics to which the controller has subscribed, delivering them either by pushing them to an endpoint of the subscriber's choice or by allowing the subscriber to pull them from the service. In some embodiments, the controller 200 may instead send requests to the leaf switches 110 and receive the requested data without subscribing to the metric values published by the leaf switches 110.
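
A minimal publish/subscribe sketch of this exchange is shown below, with leaf switches as publishers and the controller as the subscriber. The in-process MessageBus class, topic name, and payload fields are illustrative stand-ins for whatever messaging service is actually used.

    # Sketch: leaves publish health metric values to a topic; the controller
    # subscribes to the topic and reacts to failure reports.
    from collections import defaultdict

    class MessageBus:
        def __init__(self):
            self.subscriptions = defaultdict(list)     # topic -> callbacks

        def subscribe(self, topic, callback):
            self.subscriptions[topic].append(callback)

        def publish(self, topic, payload, attributes=None):
            message = {"payload": payload, "attributes": attributes or {}}
            for callback in self.subscriptions[topic]:
                callback(message)                      # push delivery

    bus = MessageBus()
    TOPIC = "fabric/health"

    # Controller (subscriber) side.
    def on_health_message(message):
        payload = message["payload"]
        if not payload["reachable"]:
            print(f"leaf {payload['leaf']} reports server {payload['server']} down")

    bus.subscribe(TOPIC, on_health_message)

    # Leaf switch (publisher) side.
    bus.publish(TOPIC,
                {"leaf": "L1", "server": "S1", "reachable": False},
                attributes={"content-type": "application/json"})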

A mapping of buckets to service nodes may be maintained by one or more switches in the fabric. For example, spine switches 160 and leaf switches 110 in the fabric may include software that defines a mapping of service or hash buckets to service nodes. When a service node for a data flow fails, software on the switch may direct the data flow to an available active node. The mapping of buckets to service nodes may also be coordinated between the service nodes and the fabric, including one or more switches in the fabric. In some embodiments, leaf switches 110 may be configured to direct or redirect traffic based on the mapping of buckets to service nodes, which creates a packet path for forwarding each bucket's traffic. This may include a one-to-one mapping of traffic buckets to service nodes, or a many-to-one mapping of traffic buckets to service nodes. The mapping may be contained in a Ternary Content Addressable Memory (TCAM) table 500 associated with a Static Random Access Memory (SRAM) table 510, which will be discussed further with respect to fig. 5.
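
The following sketch illustrates a many-to-one mapping of traffic buckets to service nodes and the redirection of a bucket when its node fails. The bucket count and node names are assumptions for illustration only.

    # Sketch: hash a flow into a bucket, map buckets (many-to-one) onto service
    # nodes, and repoint buckets when a node fails.
    NUM_BUCKETS = 8

    bucket_to_node = {b: f"S{b % 4}" for b in range(NUM_BUCKETS)}   # many-to-one

    def bucket_for_flow(src_ip: str) -> int:
        return hash(src_ip) % NUM_BUCKETS      # a real switch hashes header fields

    def redirect_failed_node(failed: str, standby: str) -> None:
        # Point every bucket that used the failed node at an available active node.
        for bucket, node in bucket_to_node.items():
            if node == failed:
                bucket_to_node[bucket] = standby

    redirect_failed_node("S2", "S0")
    print(bucket_to_node[bucket_for_flow("10.1.1.7")])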

When an incoming request is received by a leaf switch 110, the controller 200 executes an appropriate algorithm (e.g., round robin, least connections, least traffic, source IP, etc.) to assign the incoming request to a server 120. The controller 200 communicates the assignments by propagating the information to the plurality of leaf switches 110. After the servers 120 are allocated, the controller 200 modifies the load balancing configuration by modifying the hardware programming on all leaf switches 110 to reflect the available active servers that replace the failed servers.

When the user has not configured a backup server 120, the controller 200 may employ different load balancing algorithms to modify the load balancing configuration of the network fabric, as sketched below. One algorithm (round robin) assigns the service request to the next server in the sequence. Another algorithm (least connections) measures the load on each server to determine which server has the most available resources and sends the new request to the server with the fewest connections to current clients. In yet another algorithm (source IP), the IP address of the client is used to determine which server receives the request.
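
A sketch of the three assignment policies described above follows; the server list and connection counts are invented for illustration.

    # Sketch: round robin, least connections, and source-IP assignment.
    import itertools

    servers = ["S0", "S1", "S2", "S3"]
    active_connections = {"S0": 12, "S1": 3, "S2": 7, "S3": 5}

    _rr = itertools.cycle(servers)
    def round_robin():
        # Next server in the sequence.
        return next(_rr)

    def least_connections():
        # Server with the fewest connections to current clients.
        return min(servers, key=lambda s: active_connections[s])

    def source_ip(client_ip):
        # The client's IP address determines which server receives the request.
        return servers[hash(client_ip) % len(servers)]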

Fig. 3 is a simplified schematic diagram illustrating a controller 200 in a data center network 100 configured with backup nodes in accordance with aspects of the subject technology. In this example, the border leaf switch 150 (L4) is configured with host server 120 (S5) as a backup. When any other host server 120 connected to any other leaf switch 110 fails, the controller 200 configures host server 120 (S5) to receive the incoming traffic. The controller 200 sends a message with the address of the failed server to all leaf switches 110 in the fabric and indicates the previously configured available active server.

Fig. 4 is a simplified schematic diagram illustrating a controller 200 in a data center network 100 without a configured backup node in accordance with aspects of the subject technology. In this example, the border leaf switch 150 (L4) is not configured with any backup host server 120. When any other host server 120 connected to any other leaf switch 110 fails, the controller 200 configures host server 120 (S0) to receive the incoming traffic. The host server may be selected by one of the various load balancing algorithms described above.

Fig. 5 illustrates a Ternary Content Addressable Memory (TCAM) table 500 and its associated Static Random Access Memory (SRAM) table 510 in each leaf switch 110, in accordance with aspects of the subject technology. The controller 200 can be communicatively coupled to a plurality of leaf switches 110 that are communicatively coupled to the TCAM 500 and the SRAM 510. The TCAM 500 and SRAM 510 may be configured to provide the high-speed searching disclosed herein. The TCAM 500 and the SRAM 510 are configured to perform load balancing techniques under the direction of the controller 200.

Most top-of-rack (ToR) switches that perform forwarding and policing functions in data center networks utilize specialized content addressable memory to store rules. The memory is housed within a switch ASIC (application specific integrated circuit), which enables hardware-based packet forwarding.

The CPU or processor receives a configuration request from the controller 200 to program the TCAM 500 and the SRAM 510. Based on the contents of the TCAM 500 and the SRAM 510, the ASIC directs data packets input at one interface to another interface.

The TCAM 500 is made up of a number of entries; when it is given an input string, it compares the string against all entries and reports the first entry matching the input. A TCAM is a fully associative memory in which entries can hold not only the binary values "1" and "0" but also the ternary value "X" (don't care). For example, the stored entry "110X" matches the search keys "1101" and "1100". Given a request, a TCAM 500 lookup is performed, and then all matching results are retrieved using a two-level SRAM 510 lookup.
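
A small software model of the ternary match just described (first matching entry wins) is shown below; the entries are illustrative.

    # Sketch: ternary matching, where an 'X' bit in a stored entry matches
    # either '0' or '1' in the search key, and the first matching entry wins.
    def ternary_match(entry: str, key: str) -> bool:
        return all(e in ("X", k) for e, k in zip(entry, key))

    def tcam_lookup(entries, key):
        for index, entry in enumerate(entries):
            if ternary_match(entry, key):
                return index                 # first entry matching the input
        return None

    entries = ["110X", "10XX", "XXXX"]
    print(tcam_lookup(entries, "1101"))      # 0: "110X" matches "1101" and "1100"
    print(tcam_lookup(entries, "1011"))      # 1: falls through to "10XX"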

By loading the forwarding table prefixes into the TCAM 500 in order of decreasing prefix length, the TCAM 500 index of the longest matching prefix for any destination address can be determined in one TCAM 500 cycle. With this index, the word of the SRAM 510 where the next hop associated with the matching prefix is stored can be accessed, and the forwarding task is completed. The TCAM 500 solution for packet forwarding requires one TCAM 500 search and one SRAM 510 access to forward the packet.
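
For illustration, the same longest-prefix-match behavior can be modeled in software by ordering prefixes by decreasing length before the lookup; the prefixes and next hops below are assumptions.

    # Sketch: longest-prefix match by loading prefixes in order of decreasing
    # prefix length, so the first hit is the longest matching prefix and the
    # associated "SRAM word" (next hop) completes the forwarding decision.
    import ipaddress

    routes = {
        "10.0.0.0/8":  "spine-uplink",
        "10.1.0.0/16": "leaf-2",
        "10.1.3.0/24": "server-S1",
    }

    tcam_order = sorted(routes,
                        key=lambda p: ipaddress.ip_network(p).prefixlen,
                        reverse=True)        # /24, /16, /8

    def lookup(dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        for prefix in tcam_order:            # conceptually one TCAM cycle
            if addr in ipaddress.ip_network(prefix):
                return routes[prefix]        # next hop stored alongside the prefix

    print(lookup("10.1.3.9"))                # server-S1 (longest match, /24)
    print(lookup("10.2.0.1"))                # spine-uplink (only the /8 matches)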

The TCAM 500 can run at speeds approaching that of the programmable hardware itself. The TCAM 500 compares the search input to a stored data table and looks up the IP address present in the data flow. The controller 200 assigns each data flow to a service node, such as a host server. The TCAM 500 may include a table that maps traffic buckets to servers. The controller 200 rewrites the L2 header of incoming packets to direct them to a leaf switch 110, and acts as a switch by switching or routing data packets to the leaf switch indicated by the new L2 header.

In this example, the TCAM 500 stores data describing the attributes of a data packet to be matched, while the SRAM 510 stores data describing the action to be taken when a corresponding match occurs in the TCAM 500. If the IP address of a packet is within the range indicated by a TCAM 500 entry, the action to take is to direct the packet to the server listed in the SRAM 510 entry corresponding to that TCAM 500 entry. For example, if the IP address of the packet is within the range indicated by XX00X, then traffic is directed to S0. The load balancing function is implemented by configuring the entries stored in the TCAM 500 and SRAM 510 so that packets are directed to specific available active servers as needed.
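
A toy model of the paired tables and of the failover rewrite is shown below. The bucket masks and server names follow the XX00X-to-S0 illustration above but are otherwise assumptions.

    # Sketch: the TCAM row says what to match (a ternary mask over address bits),
    # the SRAM row at the same index says where to send it. Failover only
    # rewrites the SRAM side.
    tcam = ["XX00X", "XX01X", "XX10X", "XX11X"]   # match fields
    sram = ["S0",    "S1",    "S2",    "S3"]      # actions (destination server)

    def ternary_match(entry, key):
        return all(e in ("X", k) for e, k in zip(entry, key))

    def forward(key_bits):
        for i, entry in enumerate(tcam):
            if ternary_match(entry, key_bits):
                return sram[i]                    # action from the paired SRAM row

    def fail_over(failed_server, active_server):
        # Buckets keep their TCAM rows; only the SRAM entries are repointed.
        for i, server in enumerate(sram):
            if server == failed_server:
                sram[i] = active_server

    print(forward("11001"))    # matches XX00X -> S0
    fail_over("S0", "S3")
    print(forward("11001"))    # same bucket now forwards to S3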

Fig. 6 is a flow diagram of a method 600 performed by the controller 200 in accordance with various aspects of the subject technology. It should be understood that for any of the methods discussed herein, additional, fewer, or alternative steps may be performed in a similar or alternative order or in parallel within the scope of the various embodiments, unless otherwise indicated. Method 600 may be performed by a controller 200, such as a Data Center Network Manager (DCNM) or similar system.

At step 602, the controller 200 may receive health metric values from a plurality of leaf switches 110 in a network fabric. The health monitoring metric values received from each leaf switch 110 are associated with the local servers 120 managed by that particular leaf switch 110. Any particular leaf switch 110 in the network fabric may probe the service nodes (servers 120, VMs, containers, applications, services, processes, or any other processing/computing units) to which the particular leaf switch 110 is connected. In some embodiments, a leaf switch 110 need not probe a server 120 that is not attached to that leaf switch. The probing mechanism may take the form of Internet Control Message Protocol (ICMP) requests (e.g., ping and traceroute) and ICMP responses.

At step 604, the controller 200 determines that one or more servers 120 in the network fabric have failed. This may be communicated to the controller 200 by the particular leaf switch 110 that manages the failed server. The controller 200 subscribes to messages published by the plurality of leaf switches 110 in the network fabric, and each leaf switch 110 continually or continuously publishes health monitoring metric values for the one or more servers 120 that it manages.

At step 606, the controller 200 may modify the load balancing configuration of the network fabric after determining that a server has failed. For example, the controller 200 may propagate the information about the failed server 120 received from a particular leaf switch 110 to all other leaf switches 110 in the network fabric. When a server 120 fails, the hardware programming on all leaf switches 110 is modified to reflect the available active server that replaces the failed server.

At step 608, the controller 200 sends the modified load balancing configuration to each of the plurality of leaf switches 110 in the network fabric. Each leaf switch 110 then updates the SRAM 510 entry corresponding to the TCAM 500 entry for the failed server. The TCAM 500 and SRAM 510 tables are located in each leaf switch 110 in the network fabric, and the tables of all leaf switches 110 in the network fabric are modified. The TCAM 500 table determines whether there is a match on the address, and because the SRAM 510 entry corresponding to that address has been modified, the packet is sent to the particular available active server rather than to the failed server.
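
Putting steps 602 through 608 together, a compact controller-side sketch might read as follows; probe_local_servers and rewrite_sram are hypothetical leaf-switch methods used only for illustration, and the standby server is an assumption.

    # Sketch: one health cycle of the controller (steps 602-608).
    class Controller:
        def __init__(self, leaves, standby_server="S5"):
            self.leaves = leaves
            self.standby_server = standby_server

        def run_health_cycle(self):
            # Step 602: receive health metric values, one report per leaf switch.
            reports = {leaf.name: leaf.probe_local_servers()
                       for leaf in self.leaves}

            # Step 604: a server its leaf reports unreachable is considered failed.
            failed = [server for status in reports.values()
                      for server, reachable in status.items() if not reachable]

            for failed_server in failed:
                # Step 606: modify the load balancing configuration, here by
                # substituting a previously configured standby server.
                replacement = self.standby_server

                # Step 608: push the change so every leaf rewrites the SRAM
                # entries that pointed at the failed server.
                for leaf in self.leaves:
                    leaf.rewrite_sram(failed_server, replacement)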

Fig. 7A and 7B illustrate a system according to various embodiments. More suitable systems will be apparent to those of ordinary skill in the art when implementing various embodiments. One of ordinary skill in the art will also readily recognize that other systems are possible.

Fig. 7A illustrates an exemplary architecture of a conventional bus computing system 700 in which the components of the system are in electrical communication with each other using a bus 705. Computing system 700 may include a processing unit (CPU or processor) 710 and a system bus 705 that may couple various system components, including a system memory 715 such as read only memory (ROM) 720 and random access memory (RAM) 725, to the processor 710. The computing system 700 may include a cache 712 of high-speed memory, the cache 712 being directly connected to, in close proximity to, or integrated as part of the processor 710. The computing system 700 may copy data from the memory 715 and/or the storage device 730 to the cache 712 for quick access by the processor 710. In this way, the cache 712 may provide a performance boost that avoids processor delays while waiting for data. These and other modules may control or be configured to control the processor 710 to perform various actions. Other system memory 715 may also be available for use. The memory 715 may include multiple different types of memory having different performance characteristics. The processor 710 may include any general-purpose processor and a hardware module or software module configured to control the processor 710, such as module 1 732, module 2 734, and module 3 736 stored in storage device 730, as well as a special-purpose processor in which software instructions are incorporated into the actual processor design. The processor 710 may essentially be a completely self-contained computing system containing multiple cores or processors, a bus, memory controllers, caches, and the like. The multi-core processor may be symmetric or asymmetric.

To enable user interaction with the computing system 700, an input device 745 may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth. The output device 735 may also be one or more of a number of output mechanisms known to those skilled in the art. In some cases, a multimodal system may enable a user to provide multiple types of input to communicate with the computing system 700. A communication interface 740 may generally control and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

The storage device 730 may be a non-volatile memory and may be a hard disk or another type of computer-readable medium capable of storing data accessible by a computer, such as magnetic cassettes, flash memory cards, solid state storage devices, digital versatile disks, RAM 725, read only memory (ROM) 720, and combinations thereof.

The storage 730 may include software modules 732, 734, 736 for controlling the processor 710. Other hardware or software modules are contemplated. A storage device 730 may be connected to the system bus 705. In one aspect, a hardware module performing a particular function may include software components stored in a computer-readable medium coupled with necessary hardware components (such as processor 710, bus 705, output device 735, etc.) to perform that function.

FIG. 7B illustrates an exemplary architecture of a conventional chipset computing system 750 that may be used in accordance with one embodiment. Computing system 750 may include a processor 755 representing any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform the identified calculations. Processor 755 may communicate with a chipset 760, which may control the input and output of processor 755. In this example, the chipset 760 may output information to an output device 765 (such as a display) and may read and write information to a storage device 770, which may include, for example, magnetic media and solid state media. Chipset 760 may also read data from and write data to RAM 775. A bridge 780 may be provided for interfacing with various user interface components 785 for interfacing with chipset 760. The user interface components 785 may include a keyboard, a microphone, touch detection and processing circuitry, a pointing device (such as a mouse), and the like. The input to computing system 750 may come from any of a variety of sources (machine-generated sources and/or human-generated sources).

Chipset 760 may also interface with one or more communication interfaces 790, which may have different physical interfaces. The communication interfaces 790 may include interfaces for wired and wireless Local Area Networks (LANs), broadband wireless networks, and personal area networks. Some applications of the methods disclosed herein for generating, displaying, and using a Graphical User Interface (GUI) may include receiving ordered datasets over a physical interface, or the data may be generated by the machine itself by the processor 755 analyzing data stored in the storage device 770 or the RAM 775. Further, the computing system 750 may receive inputs from a user via the user interface components 785 and execute appropriate functions (such as browsing functions) by interpreting these inputs using the processor 755.

It should be appreciated that computing systems 700 and 750 may have more than one processor 710 and 755, respectively, or may be part of a group or cluster of computing devices networked together to provide greater processing power.

The methods according to the examples described above may be implemented using computer-executable instructions stored in or otherwise available from computer-readable media. These instructions may include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of the computer resources used may be accessible over a network. The computer-executable instructions may be, for example, binary instructions, intermediate format instructions (such as assembly language), firmware, or source code. Examples of computer readable media that may be used to store instructions, information used, and/or information created during a method according to the described examples include magnetic or optical disks, flash memory, USB devices equipped with non-volatile memory, networked storage devices, and so forth. Devices implementing methods according to these disclosures may include hardware, firmware, and/or software, and may take any of a variety of form factors. Typical examples of such form factors include notebook computers, smart phones, small personal computers, personal digital assistants, rack-mounted devices, standalone devices, and the like. The functionality described herein may also be embodied in a peripheral device or expansion card. As another example, such functionality may also be implemented in different chips on a circuit board or in different processes executing in a single device.

The instructions, the media for conveying the instructions, the computing resources for executing the instructions, and other structures for supporting such computing resources are means for providing the functionality described in this disclosure.

While various examples and other information are used to describe various aspects within the scope of the following claims, no limitation to the claims should be implied based on the particular features or configurations in the examples, as one of ordinary skill would be able to derive various implementations with the examples. Furthermore, although some subject matter may have been described in language specific to examples of structural features and/or methodological steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts. For example, such functionality may be distributed in different ways or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.

Drawings

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the disclosure and are therefore not to be considered limiting of its scope. A more particular description and illustration of the principles herein is rendered by reference to the appended drawings, in which:

FIG. 1 is a simplified schematic diagram illustrating an exemplary data center network in which systems and/or methods described herein may be implemented;

FIG. 2 is a simplified schematic diagram illustrating a controller in a data center network in accordance with aspects of the subject technology;

FIG. 3 is a simplified schematic diagram illustrating a controller in a data center network configured with backup nodes in accordance with aspects of the subject technology;

FIG. 4 is a simplified schematic diagram illustrating a controller in a data center network without a configured backup node in accordance with aspects of the subject technology;

FIG. 5 illustrates a TCAM (ternary content addressable memory) table and its SRAM (static random access memory) table in each leaf switch, in accordance with aspects of the subject technology;

FIG. 6 is a flow diagram of a method performed by a controller in accordance with aspects of the subject technology; and

fig. 7A and 7B illustrate examples of systems in accordance with various aspects of the subject technology.

Detailed Description

The detailed description set forth below is intended as a description of various configurations of embodiments and is not intended to represent the only configurations in which the presently disclosed subject matter may be practiced. The accompanying drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for a more thorough understanding of the disclosed subject matter. It may be evident, however, that the subject matter of the present disclosure is not limited to the specific details set forth herein, and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject matter of the present disclosure.
