Computing device and method for generating a fabric-wide IPv6 address

Document No. 1559816, publication date 2020-01-21

Filing note: This technique, "Computing device and method for generating a fabric-wide IPv6 address", was created by P. Anderson, B. Tremblay, S. Krishnan and L. Marchand on 2019-03-04. Abstract: A computing device and method for generating a fabric-wide IPv6 address in a data center that includes multiple fabrics. A configuration file is stored in a memory of the computing device. The configuration file includes an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier. A processing unit of the computing device determines a host identifier. The processing unit generates an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file. The processing unit generates an IPv6 address by combining the IPv6 prefix and the host identifier. The processing unit also advertises the generated IPv6 address.

1. A computing device, comprising:

a memory for storing a configuration file, the configuration file including an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier; and

a processing unit to:

determine a host identifier;

generate an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file; and

generate an IPv6 address by combining the IPv6 prefix and the host identifier.

2. The computing device of claim 1, wherein the processing unit is further to advertise the generated IPv6 address.

3. The computing device of claim 1, wherein the IPv6 address begins with the IPv6 base prefix, followed by the fabric identifier, optionally followed by zeros, and ends with the host identifier.

4. The computing device of claim 1, wherein the IPv6 base prefix is a /48 prefix.

5. The computing device of claim 1, wherein the fabric identifier is a 16-bit integer.

6. The computing device of claim 1, wherein the IPv6 prefix is a Unique Local Address (ULA) prefix or a public prefix.

7. The computing device of claim 1, wherein the host identifier is a 48-bit integer in hexadecimal format.

8. The computing device of claim 7, wherein the host identifier is a Media Access Control (MAC) address.

9. The computing device of claim 1, wherein the processing unit is to determine the host identifier by computing a hash of a 128-bit Universally Unique Identifier (UUID) of the computing device.

10. The computing device of claim 1, wherein the IPv6 base prefix and the fabric identifier are received from a configuration device via a communication interface of the computing device and are further stored in the configuration file.

11. A method for generating a fabric-wide IPv6 address, comprising:

storing a configuration file in a memory of a computing device, the configuration file comprising an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier;

determining, by a processing unit of the computing device, a host identifier;

generating, by the processing unit, an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file; and

generating, by the processing unit, an IPv6 address by combining the IPv6 prefix and the host identifier.

12. The method of claim 11, wherein the processing unit further advertises the generated IPv6 address.

13. The method of claim 11, wherein the IPv6 address begins with the IPv6 base prefix, followed by the fabric identifier, optionally followed by zeros, and ends with the host identifier.

14. The method of claim 11, wherein the IPv6 base prefix is a /48 prefix.

15. The method of claim 11, wherein the fabric identifier is a 16-bit integer.

16. The method of claim 11, wherein the host identifier is a 48-bit integer in hexadecimal format.

17. The method of claim 16, wherein the host identifier is a Media Access Control (MAC) address.

18. The method of claim 11, wherein the processing unit determines the host identifier by computing a hash of a 128-bit Universally Unique Identifier (UUID) of the computing device.

19. The method of claim 11, wherein the IPv6 base prefix and the fabric identifier are received from a configuration device via a communication interface of the computing device and are also stored in the configuration file.

20. A non-transitory computer program product comprising instructions executable by a processing unit of a computing device, execution of the instructions by the processing unit providing for generating a fabric-wide IPv6 address by:

storing a configuration file in a memory of the computing device, the configuration file comprising an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier;

determining, by the processing unit, a host identifier;

generating, by the processing unit, an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file; and

generating, by the processing unit, an IPv6 address by combining the IPv6 prefix and the host identifier.

Technical Field

The present disclosure relates to the field of data centers. More particularly, the present disclosure relates to a computing device and method for generating fabric-wide IPv6 addresses in a data center including a plurality of fabrics.

Background

Recent years have witnessed the rapid development of technologies such as Software as a Service (SaaS) and cloud computing. This development benefits from ever-increasing customer demand for products and services based on such technologies. Continued advances in the underlying technologies have also provided an impetus for this development, such as the increased processing power of microprocessors, the increased storage capacity of storage devices, and the increased transmission capacity of networking equipment. In addition, the average cost of these underlying technologies is decreasing. However, the drop in the average cost of the underlying technologies is balanced by the growing customer demand, which requires the constant updating and upgrading of the infrastructures used for providing SaaS or cloud computing.

The infrastructure for providing SaaS or cloud computing is a data center combining a large number of computing servers. Each server has multiple multi-core processors, and the combination of the computing servers provides the very high processing power used by the customers of the data center. Some or all of these servers may also have significant storage capacity, so the combination of servers also provides very high storage capacity to the customers of the data center. Data centers also rely on a networking infrastructure for interconnecting the servers and providing access to their computing and/or storage capacity to the customers of the data center. In order to provide reliable services, the computing and networking infrastructures of data centers must meet very high standards of scalability, manageability, fault tolerance, and so on.

With respect to the networking infrastructure of data centers, providing efficient and reliable networking services to a large number of hosts is known to be a complex task. Solutions and techniques have been developed in other environments, such as networking technologies for providing mobile data services to a large number of mobile devices. Some of these techniques have already been standardized by dedicated bodies, such as the Internet Engineering Task Force (IETF) or the Third Generation Partnership Project (3GPP™). However, at least some of the technical challenges of deploying an efficient and reliable networking infrastructure in a data center are specific to the data center environment, and need to be addressed with original solutions and techniques.

One significant challenge for networking infrastructure involving large numbers (e.g., thousands) of devices is the configuration of the networking infrastructure. In particular, the configuration needs to be flexible (to facilitate changing the initial configuration) and resilient (to avoid localized configuration errors affecting the operation of the entire data center). One way to ensure that the network configuration is flexible and reliable is to limit human intervention in the configuration process as much as possible.

Accordingly, there is a need for a computing device and method for generating a fabric-wide IPv6 address in a data center that includes multiple fabrics.

Disclosure of Invention

According to a first aspect, the present disclosure is directed to a computing device. The computing device includes a memory for storing a configuration file. The configuration file includes an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier. The computing device includes a processing unit to determine a host identifier. The processing unit also generates an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file. The processing unit also generates an IPv6 address by combining the IPv6 prefix and the host identifier.

According to a second aspect, the present disclosure is directed to a method for generating a fabric-wide IPv6 address in a data center including a plurality of fabrics. The method includes storing a configuration file in a memory of a computing device. The configuration file includes an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier. The method includes determining, by a processing unit of the computing device, a host identifier. The method includes generating, by the processing unit, an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file. The method also includes generating, by the processing unit, the IPv6 address by combining the IPv6 prefix and the host identifier.

According to a third aspect, the present disclosure provides a non-transitory computer program product comprising instructions executable by a processing unit of a computing device. Execution of the instructions by the processing unit provides for generating a fabric-wide IPv6 address. More specifically, execution of the instructions provides for storing a configuration file in a memory of the computing device. The configuration file includes an Internet Protocol version 6 (IPv6) base prefix and a fabric identifier. Execution of the instructions provides for determining, by the processing unit, a host identifier. Execution of the instructions provides for generating, by the processing unit, an IPv6 prefix by combining the IPv6 base prefix stored in the configuration file and the fabric identifier stored in the configuration file. Execution of the instructions further provides for generating, by the processing unit, an IPv6 address by combining the IPv6 prefix and the host identifier.
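The combination described in the three aspects above can be sketched in a few lines of Python. The bit layout below (a /48 base prefix, a 16-bit fabric identifier, 16 bits of zero padding, and a 48-bit host identifier) is one reading of claims 3 to 7; the function and variable names are illustrative and not taken from the disclosure.

```python
import ipaddress

def generate_fabric_wide_ipv6(base_prefix: str, fabric_id: int,
                              host_id: int) -> ipaddress.IPv6Address:
    """Combine a /48 IPv6 base prefix, a 16-bit fabric identifier and a
    48-bit host identifier into a single 128-bit IPv6 address.

    Assumed bit layout (one reading of claims 3-7):
      bits   0-47 : IPv6 base prefix (/48)
      bits  48-63 : fabric identifier (the first 64 bits form the fabric prefix)
      bits  64-79 : zero padding
      bits 80-127 : host identifier
    """
    network = ipaddress.ip_network(base_prefix)
    if network.prefixlen != 48:
        raise ValueError("base prefix must be a /48")
    if not (0 <= fabric_id < 2**16 and 0 <= host_id < 2**48):
        raise ValueError("fabric_id must fit in 16 bits, host_id in 48 bits")
    # The /64 fabric-wide prefix: the base prefix followed by the fabric id.
    fabric_prefix = int(network.network_address) | (fabric_id << 64)
    # Append the host identifier in the low-order 48 bits, leaving the
    # 16 bits in between as zero padding.
    return ipaddress.IPv6Address(fabric_prefix | host_id)
```

For example, base prefix `fd00:aaaa:bbbb::/48` with fabric identifier 1 and host identifier `0x0242ac110002` yields an address whose first 64 bits form the fabric prefix `fd00:aaaa:bbbb:1::/64`.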

Drawings

Embodiments of the present disclosure will now be described, by way of example only, with reference to the accompanying drawings, in which:

FIG. 1 illustrates the network architecture of a data center including a plurality of deployment points (pods) and fabrics;

FIG. 2 shows a more detailed view of the fabrics shown in FIG. 1;

FIG. 3 shows communication ports of equipment deployed in the deployment points and fabrics of FIGS. 1-2;

FIGS. 4A and 4B illustrate an IPv6 network for interconnecting the equipment of the fabrics shown in FIGS. 1-3;

FIG. 5 illustrates a schematic diagram of a computing device corresponding to the equipment deployed in the fabrics shown in FIGS. 1-3; and

FIG. 6 illustrates a method for generating a fabric-wide IPv6 address.

Detailed Description

The foregoing and other features will become more apparent upon reading of the following non-limiting description of illustrative embodiments, given by way of example only, with reference to the accompanying drawings.

Aspects of the present disclosure generally address one or more problems related to the generation of IPv6 addresses having a global scope within a subsection of a data center, where the generation of the IPv6 addresses is automatic and resilient to human error.

Network architecture of data center

Referring now to fig. 1-4 concurrently, network architectures of a data center are illustrated. The network architecture shown in the figures is for illustration purposes, and those skilled in the art of designing data center architectures will readily appreciate that other design choices may be made. The teachings of the present disclosure are not limited to the topology of the network architecture represented in the figures; but may also be applied to network architectures with different design choices in terms of topology.

Reference is now made more specifically to fig. 1. The data center is organized into a plurality of deployment points. Each deployment point consists of atomic units of computation, storage, networking, and power. Each deployment point is designed as a unit, deployed as a unit, automated as a unit, and decommissioned as a unit. Several types of deployment points may be used, which differ in their design. Zero, one, or multiple instances of each deployment point are deployed in the data center. For illustrative purposes, three deployment points (A, B and C) are shown in FIG. 1. However, the number of deployment points in a data center varies from one to tens to even hundreds of deployment points. By adding (or removing) deployment points, the capacity of the data center in terms of computation, storage, networking, and power can be scaled.

Deployment point a includes a plurality of servers 300 that provide processing and storage capabilities. Depending on the number of servers 300 and the capacity of each rack, the servers 300 are physically organized in one or more racks. Deployment point a also includes a two-tier networking capability referred to as fabric a. Fabric a includes a lower layer consisting of leaf networking equipment 200 and a higher layer consisting of backbone networking equipment 100. The networking equipment of fabric a (e.g., backbone 100 and leaves 200) is physically integrated into the one or more racks that comprise the servers 300, or alternatively physically organized in one or more separate racks.

The leaf networking equipment 200 and the backbone networking equipment 100 are typically comprised of switches with a high density of communication ports. Thus, in the remainder of this description, leaf networking equipment 200 and backbone networking equipment 100 will be referred to as leaf switch 200 and backbone switch 100, respectively. However, other types of networking equipment may be used. For example, in an alternative embodiment, at least some of the backbone networking equipment 100 is comprised of routers.

Each leaf switch 200 is connected to at least one backbone switch 100 and a plurality of servers 300. The number of servers 300 connected to a given leaf switch 200 depends on the number of communication ports of the leaf switch 200.

In the embodiment shown in fig. 1, each server 300 is redundantly connected to two different leaf switches 200. The servers 300 of a given deployment point (e.g., deployment point a) are only connected to leaf switches 200 belonging to the fabric (e.g., fabric a) of the given deployment point (e.g., deployment point a). A server 300 of a given deployment point (e.g., deployment point a) is not connected to a leaf switch 200 of a fabric (e.g., fabric B) belonging to another deployment point (e.g., deployment point B). Each leaf switch 200 of a given fabric (e.g., fabric a) is connected to all backbone switches 100 of the given fabric (e.g., fabric a). The leaf switches 200 of a given fabric (e.g., fabric a) are not connected to the backbone switches 100 of another fabric (e.g., fabric B). In an alternative embodiment not shown in the figures, at least some of the servers 300 are connected to a single leaf switch 200.

Each backbone switch 100 is connected to at least one core networking equipment 10, and a plurality of leaf switches 200. The number of leaf switches 200 connected to a given backbone switch 100 depends on design choices and the number of communication ports of the backbone switch 100. The core networking equipment 10 provides interworking between the fabrics deployed in the data center, connection to management functions of the data center, connection to external networks (such as the internet), and the like. Furthermore, although not shown in the figure for simplicity, at least some of the core networking equipment 10 may be connected to a pair of leaf switches 200.

The core networking equipment 10 is typically comprised of routers. Thus, in the remainder of this description, the core networking equipment 10 will be referred to as core routers 10. However, other types of networking equipment may be used. For example, in an alternative embodiment, at least some of the core networking equipment 10 is comprised of switches.

In the embodiment shown in fig. 1, each backbone switch 100 of a given fabric (e.g., fabric a) is connected to all core routers 10 and to all leaf switches 200 of the given fabric (e.g., fabric a).

For simplicity, the fabric a shown in fig. 1 includes only two backbone switches 100 and four leaf switches 200, while deployment point a includes only two sets of three servers 300, each set connected to a leaf switch 200 of fabric a. The number of backbone switches 100 and leaf switches 200 of a fabric may vary based on design choices and on the networking capabilities (e.g., communication port density) of the backbone and leaf switches. Similarly, the total number of servers 300 of a deployment point may vary based on design choices, on the number of leaf switches 200 of the corresponding fabric, and on the networking capabilities (e.g., communication port density) of the leaf switches.

For the sake of simplicity, the details of deployment point B and its corresponding fabric B, and of deployment point C and its corresponding fabric C, are not shown in fig. 1. However, deployment point B/fabric B and deployment point C/fabric C comprise a hierarchy of backbone switches 100, leaf switches 200, and servers 300 similar to the hierarchy shown for deployment point a/fabric a.

Referring now more particularly to fig. 1 and 2, fig. 2 represents an embodiment of the data center of fig. 1 wherein each fabric further includes one or more controllers 400. The servers 300 are not shown in fig. 2 for simplicity purposes only.

The controller 400 of the fabric is responsible for controlling the operation of at least some of the nodes included in the fabric (e.g., the leaf switches 200 and/or the backbone switch 100). Each controller 400 is connected to at least one leaf switch 200. The number of controllers 400 deployed in a given fabric depends on design choices, the cumulative processing power required by the controllers 400 deployed in the fabric, the total number of leaf switches and backbone switches deployed in the fabric, and so on.

In the embodiment shown in fig. 2, each controller 400 is redundantly connected to two different leaf switches 200. For example, each controller 400 has a first operational connection to a first leaf switch 200 and a second alternate connection to a second leaf switch 200. The controller 400 of a given fabric (e.g., fabric a) is connected only to the leaf switches 200 of that fabric (e.g., fabric a). The controller 400 of a given fabric (e.g., fabric a) is not connected to a leaf switch 200 of another fabric (e.g., fabric B or C). Some of the leaf switches 200 are dedicated for connection to the controller 400 (as shown in figure 2) and other leaf switches 200 are dedicated for connection to the server 300 (as shown in figure 1). In an alternative embodiment, the leaf switch 200 is connected to both the server 300 and the controller 400.

In another embodiment, the controller 400 is not directly physically connected to the leaf switch 200; but rather are logically connected via at least one intermediary equipment such as an intermediary switch (not shown in figure 2) between the controller 400 and the leaf switch 200.

Reference is now made more specifically to fig. 1, 2 and 3, where fig. 3 represents the communication ports of equipment deployed in a fabric/deployment point.

The backbone switch 100 has a dedicated number of uplink communication ports (e.g., 4 as shown in fig. 3) dedicated to the interconnection with the core routers 10, and a dedicated number of downlink communication ports (e.g., 6 as shown in fig. 3) dedicated to the interconnection with the leaf switches 200. The uplink ports and the downlink ports have the same or different networking capacities. For example, all ports have a capacity of 10 gigabits per second (Gbps).

The leaf switch 200 has a dedicated number of uplink communication ports (e.g., 3 shown in fig. 3) dedicated to interconnection with the backbone switch 100, and a dedicated number of downlink communication ports (e.g., 6 shown in fig. 3) dedicated to interconnection with the server 300 or the controller 400. The uplink port and the downlink port have the same or different networking capacities. For example, all uplink ports have a capacity of 100Gbps and all downlink ports have a capacity of 25 Gbps. In the future, the capacity of uplink ports will reach 200 or 400Gbps, while the capacity of downlink ports will reach 50Gbps or 100 Gbps.

Leaf switches and backbone switches typically consist of equipment with a high density of communication ports, which can reach tens of ports. Some of the ports may be electrical ports, while others are fiber optic ports. As previously described, the ports of a switch may have varying networking capacities in terms of supported bandwidth. Leaf switches and backbone switches are typically implemented using switches with different networking capacities and functionalities. The ports are not limited to communication ports, but also include cages for connecting various types of pluggable media.

In contrast, the server 300 and the controller 400 are computing devices with a limited number of communication ports similar to a conventional computer. For example, each server 300 and each controller 400 includes two communication ports, each connected to two different leaf switches 200. The two communication ports are typically composed of ethernet ports with a capacity of, for example, 10 Gbps. However, the server 300 and/or the controller 400 may include additional port(s).

All of the above communication ports are bi-directional, allowing for transmission and reception of data.

Referring now more particularly to fig. 4A and 4B, these represent the deployment of the IPv6 network 20 at the fabric level.

At least some of the equipment of the fabric connects to the IPv6 network 20 and exchanges data via the IPv6 network. In the configuration shown in fig. 4A and 4B, all of the backbone switches 100, leaf switches 200, and controllers 400 are connected to the IPv6 network 20. Each fabric (e.g., fabrics A, B and C as shown in fig. 1) has its own IPv6 network, each with a dedicated IPv6 prefix. The generation of the dedicated IPv6 prefix for a given fabric will be illustrated later in this description.

Optionally, additional equipment is connected to the IPv6 network 20. For example, as shown in fig. 4A and 4B, one or more of the core routers 10 are connected to an IPv6 network 20. A configuration and/or management server (not shown in fig. 4A and 4B for simplicity) has access to the IPv6 network 20 through the core router 10.

Optionally, dedicated switches and/or routers (not shown in fig. 4A and 4B for simplicity) are used to interconnect the equipment of fabric a that exchanges data via IPv6 network 20. The aforementioned optional configuration and/or management server has access to the IPv6 network 20 through a private switch and/or router.

Fig. 4A shows a first illustrative configuration in which each of the equipment of fabric a (backbone switch 100, leaf switch 200, and controller 400) has a dedicated port 21 for accessing IPv6 network 20. IPv6 network 20 is a configuration and/or management network that is isolated from other IP networks implemented by fabric a. The dedicated ports 21 of the backbone switch 100, leaf switch 200 and controller 400 are only used for exchanging data over the IPv6 network 20. Thus, IPv6 traffic exchanged via dedicated ports 21 of backbone switch 100, leaf switch 200, and controller 400 is isolated from traffic exchanged via other ports of backbone switch 100, leaf switch 200, and controller 400 (as shown in fig. 3).

Fig. 4B represents a second illustrative configuration in which the equipment of fabric a (backbone switch 100, leaf switch 200, and controller 400) does not use a dedicated port to access IPv6 network 20. Instead, ports already used to exchange other data traffic (as shown in fig. 3) are also used to access IPv6 network 20.

An advantage of this configuration is that no port of the equipment of fabric a (backbone switch 100, leaf switch 200, and controller 400) is monopolized solely for access to IPv6 network 20.

In an alternative configuration not shown in the figures, some equipment of fabric a uses dedicated ports to access IPv6 network 20, while other equipment of fabric a accesses IPv6 network 20 through ports that are also used to exchange other data traffic.

In addition, some equipment of fabric a may use more than one port to access IPv6 network 20.

Referring now to fig. 4A-B and fig. 5 concurrently, a computing device 500 is illustrated in fig. 5. Computing device 500 is a general functional representation of the devices included in the architecture of fig. 4A and 4B. Thus, the computing device 500 represents a backbone switch 100, a leaf switch 200, or a controller 400.

Computing device 500 includes a processing unit 510, memory 520, and at least one communication interface 530. Computing device 500 may include additional components (not shown in fig. 5 for simplicity). For example, where computing device 500 represents controller 400, the computing device may include a user interface and/or a display.

Processing unit 510 includes one or more processors (not shown in fig. 5) capable of executing instructions of a computer program. Each processor may also include one or several cores. Where computing device 500 represents a switch 100 or 200, processing unit 510 also includes one or more special-purpose processing components (e.g., a network processor, an application-specific integrated circuit (ASIC), etc.) for performing specialized networking functions (e.g., packet forwarding).

Memory 520 stores instructions of the computer program(s) executed by processing unit 510, data generated by the execution of the computer program(s) by processing unit 510, data received via communication interface(s) 530, and so on. Only a single memory 520 is represented in fig. 5, but the computing device 500 may include several types of memory, including volatile memory (such as Random Access Memory (RAM)) and non-volatile memory (such as a hard disk drive, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and so forth).

Each communication interface 530 allows computing device 500 to exchange data with other devices. At least some of the communication interfaces 530 (only two shown in fig. 5 for simplicity) correspond to ports of the backbone switch 100, leaf switch 200, and controller 400 represented in fig. 4A and 4B. Examples of communication interface 530 include a standard (electrical) Ethernet port, a fiber optic port, a port adapted to receive a small form-factor pluggable (SFP) unit, and the like. Communication interface 530 is typically of the wired type, but may also include some wireless type (e.g., a Wi-Fi interface). Communication interface 530 includes a combination of hardware and software executed by the hardware to implement the communication functions of communication interface 530. Alternatively, the combination of hardware and software for implementing the communication functions of communication interface 530 is at least partially included in processing unit 510.

Fabric-wide IPv6 addresses

Referring now to fig. 4A-B, 5 and 6 concurrently, a method 600 for generating a fabric-wide IPv6 address is illustrated in fig. 6. At least some of the steps of method 600 are performed by the computing device 500 represented in fig. 5.

A dedicated computer program has instructions for implementing at least some of the steps of method 600. The instructions are comprised in a non-transitory computer program product (e.g., memory 520) of the computing device 500. The instructions, when executed by the processing unit 510 of the computing device 500, provide for generating a fabric-wide IPv6 address. The instructions may be delivered to the computing device 500 via an electronically-readable medium, such as a storage medium (e.g., CD-ROM, USB key, etc.), or via a communication link (e.g., over a communication network through one of the communication interfaces 530).

Method 600 includes a step 605 of transmitting, by configuration device 30, configuration data to computing device 500.

Method 600 includes a step 610 of receiving, by computing device 500, configuration data. The configuration data is received via one of the communication interfaces 530 of the computing device 500.

Method 600 includes a step 615 of storing the configuration data in a configuration file 521. The configuration file 521 is stored in the memory 520 of the computing device 500. The configuration data include an IPv6 base prefix and a fabric identifier, which will be described in further detail in relation to the following steps of method 600.
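The disclosure does not specify a serialization format for configuration file 521. As a minimal sketch, the snippet below assumes a hypothetical JSON layout; the field names `ipv6_base_prefix` and `fabric_id` are invented for illustration and do not appear in the disclosure.

```python
import json

# Hypothetical on-disk layout for configuration file 521.  The field
# names are illustrative assumptions, not taken from the disclosure.
CONFIG_JSON = '{"ipv6_base_prefix": "fd00:aaaa:bbbb::/48", "fabric_id": 3}'

config = json.loads(CONFIG_JSON)

# The two values required by step 615 and the later steps of method 600.
base_prefix = config["ipv6_base_prefix"]
fabric_id = config["fabric_id"]
```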

The details of how the configuration device 30 generates and transmits the configuration data are outside the scope of the present disclosure. Optionally, a single configuration device 30 is used at the data center level to transmit the configuration data to the computing devices 500 of each fabric. Alternatively, a dedicated configuration device 30 is used for each fabric of the data center. Those skilled in the art of designing data center fabrics will readily identify appropriate networking protocols and configuration mechanisms for distributing the configuration data from a centralized configuration device 30 to the plurality of computing devices 500 of a fabric.

Steps 605 and 610 are performed at the initiative of the configuration device 30 (push of the configuration data) or at the initiative of the computing device 500 (pull of the configuration data). In the case of a pull, the additional step consisting of a request for the configuration data sent by the computing device 500 to the configuration device 30 is not represented in fig. 6, for simplicity purposes.

Steps 605 and 610 may occur when the computing device 500 is initially deployed in a fabric. In this case, the computing device 500 is not connected at all (or only partially connected) to the networking infrastructure of the data center. Thus, the configuration data are transferred directly from the configuration device 30 to the computing device 500 using a basic bootstrap protocol. For example, a communication interface of the configuration device 30 is physically connected (e.g., via an Ethernet cable) to a communication interface of the computing device 500, and the bootstrap protocol operates over this temporary physical connection to perform the transfer of the configuration data.

Method 600 includes a step 620 of determining a host identifier and optionally storing the host identifier in configuration file 521. Step 620 is performed by processing unit 510 of computing device 500.

Storing the host identifier in configuration file 521 is optional. Alternatively, the host identifier is only used at other steps of method 600, and the host identifier need not be stored in a configuration file. However, it may be more efficient to determine the host identifier only once (e.g., if the determination implies a calculation) and store it in configuration file 521, so that the host identifier can be used whenever needed without having to calculate it again.

Thus, determining the host identifier includes one of: selecting a host identifier, calculating a host identifier, and reading a host identifier from configuration file 521.

In a first embodiment, the host identifier is a selected 48-bit integer in hexadecimal format. For example, the host identifier is a Media Access Control (MAC) address. If at least one of the communication interfaces 530 of computing device 500 has a MAC address, processing unit 510 selects the MAC address of one of the communication interfaces 530 as the host identifier. Since the IPv6 address generated by method 600 is not associated with a particular communication interface 530 among all the communication interfaces 530 of computing device 500, any one of the MAC addresses assigned to computing device 500 may be selected as the host identifier. For example, the selected MAC address is the MAC address of the communication interface 530 used to receive the configuration data at step 610. Equipment such as switches (backbone switches 100 and/or leaf switches 200) typically has a dedicated management interface for performing step 610; the MAC address of the management interface may then be used as the host identifier.
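
This first embodiment can be sketched in Python as follows. This is a minimal illustration, not the disclosure's implementation: `uuid.getnode()` is one way to read a MAC address of the local machine as a 48-bit integer (it may fall back to a random value when no hardware address can be read, which would still satisfy the uniqueness goal of step 620).

```python
import uuid

def mac_host_identifier() -> int:
    """Select a 48-bit host identifier from a MAC address of this machine.

    uuid.getnode() returns a hardware (MAC) address of one of the host's
    interfaces as a 48-bit integer.
    """
    return uuid.getnode() & 0xFFFFFFFFFFFF  # keep exactly 48 bits

host_id = mac_host_identifier()
print(f"host identifier: {host_id:012x}")  # 48-bit integer in hexadecimal format
```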

In a second embodiment, the host identifier is determined by computing a hash of a 128-bit Universally Unique Identifier (UUID) of computing device 500. For example, the hash of the 128-bit UUID is also a 48-bit integer in hexadecimal format. UUIDs are well known in the art. The UUID of a given computing device is generated by various methods (e.g., randomly, using a combination of a MAC address and a timestamp, etc.). The chance that the UUID of a given computing device is the same as the UUID of another computing device is very low.
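
This second embodiment can be sketched as follows. The choice of SHA-256 is an assumption (the disclosure does not name a hash function); any hash that maps the 128-bit UUID to a 48-bit integer with a very low collision probability would do, and the example UUID is hypothetical.

```python
import hashlib
import uuid

def uuid_host_identifier(device_uuid: uuid.UUID) -> int:
    """Derive a 48-bit host identifier by hashing the device's 128-bit UUID.

    SHA-256 is used here as an illustrative hash; the first 6 bytes
    (48 bits) of the digest become the host identifier.
    """
    digest = hashlib.sha256(device_uuid.bytes).digest()
    return int.from_bytes(digest[:6], "big")  # first 48 bits of the hash

device_uuid = uuid.UUID("12345678-1234-5678-1234-567812345678")  # hypothetical UUID
print(f"host identifier: {uuid_host_identifier(device_uuid):012x}")
```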

The timing at which step 620 is performed may vary. For example, step 620 is performed before step 610 or after step 625.

The host identifier is not necessarily based on a MAC address or UUID. Instead, it is calculated based on other seed data, as long as it is unique (or at least has a very high probability of uniqueness).

Method 600 includes a step 625 of generating an IPv6 prefix by combining the IPv6 base prefix stored in configuration file 521 and the architecture identifier stored in configuration file 521. As previously described, the IPv6 base prefix and the architecture identifier are included in the configuration data received at step 610.

An IPv6 address consists of 128 bits, of which the first n bits constitute the subnet prefix. In IPv6 networking, it is common practice to reserve the first 64 bits of the IPv6 address (a /64 prefix) for the subnet prefix.

The IPv6 prefix generated at step 625 has a length of N (typically 64) bits. The IPv6 base prefix stored in configuration file 521 is an IPv6 prefix having a length of B bits (e.g., 48), where B is less than N. The architecture identifier stored in configuration file 521 has a length of I bits; for example, the architecture identifier is a 16-bit integer. Each architecture in the data center (e.g., architecture A, architecture B, and architecture C in FIG. 1) has a unique architecture identifier that is different from the identifiers of the other architectures in the data center.

The following relationship applies: B + I ≤ N.

In an exemplary embodiment, the IPv6 prefix is generated as follows: the architecture identifier is appended directly to the IPv6 base prefix. For example, the IPv6 base prefix is FD10:0:0::/48, the architecture identifier is a 16-bit integer <fabric_id>, and the generated IPv6 prefix is FD10:0:0:<fabric_id>::/64.

In another exemplary embodiment, the IPv6 prefix is generated as follows. The IPv6 prefix begins with the IPv6 base prefix, followed by zeros, and ends with the architecture identifier. For example, the IPv6 base prefix is FD10::/16, the architecture identifier is a 16-bit integer <fabric_id>, and the generated IPv6 prefix is FD10:0:0:<fabric_id>::/64. In this case, the optional zeros consist of bits 17 through 48 of the IPv6 prefix.

In yet another exemplary embodiment, the IPv6 prefix is generated as follows. The IPv6 prefix begins with the IPv6 base prefix, followed by the architecture identifier, and ends with zeros. For example, the IPv6 base prefix is FD10::/16, the architecture identifier is a 16-bit integer <fabric_id>, and the generated IPv6 prefix is FD10:<fabric_id>:0:0::/64. In this case, the optional zeros consist of bits 33 through 64 of the IPv6 prefix.

Those skilled in the design of data center architectures will readily appreciate that other combinations of IPv6 base prefixes and architecture identifiers may be used to generate IPv6 prefixes.
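
The first exemplary embodiment above (direct attachment of the architecture identifier to the base prefix) can be sketched with Python's standard `ipaddress` module; the function name and parameter values are illustrative only.

```python
import ipaddress

def generate_ipv6_prefix(base_prefix: str, fabric_id: int,
                         i_bits: int = 16) -> ipaddress.IPv6Network:
    """Step 625 sketch: append an I-bit architecture identifier directly
    to the IPv6 base prefix, producing an N = 64 bit prefix (B + I <= N)."""
    base = ipaddress.IPv6Network(base_prefix)
    assert base.prefixlen + i_bits <= 64  # B + I <= N
    # Place the architecture identifier in the bits immediately
    # following the base prefix; any remaining bits up to /64 are zero.
    shift = 128 - base.prefixlen - i_bits
    prefix_int = int(base.network_address) | (fabric_id << shift)
    return ipaddress.IPv6Network((prefix_int, 64))

print(generate_ipv6_prefix("fd10:0:0::/48", 0x0001))  # fd10:0:0:1::/64
```

With a shorter base prefix (e.g., fd10::/16), the same function yields the third embodiment's layout, where the zeros follow the architecture identifier.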

In an exemplary embodiment of the method 600, the generated IPv6 prefix is a Unique Local Address (ULA) IPv6 prefix or a public IPv6 prefix.

Method 600 includes a step 630 of generating an IPv6 address by combining the IPv6 prefix (generated at step 625) and the host identifier (determined at step 620). This operation is well known in the IPv6 networking arts. For example, if the IPv6 prefix is a traditional /64 prefix, the last 64 bits of the IPv6 address are generated from the host identifier. If the host identifier is shorter than 64 bits, zeros are prepended (or appended) to the host identifier to reach 64 bits.

For example, the IPv6 base prefix is a 48-bit prefix <base_prefix> (e.g., FD10:0:0::/48), the architecture identifier is a 16-bit integer <fabric_id>, and the host identifier is a 48-bit integer <host_id>. The generated IPv6 address is: <base_prefix>:<fabric_id>:0:<host_id>, where the 16-bit zero pads the 48-bit host identifier to 64 bits.
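
Under the same assumptions (a /64 prefix and a host identifier of at most 64 bits), step 630 can be sketched as follows; the MAC-style host identifier is hypothetical.

```python
import ipaddress

def generate_ipv6_address(prefix: ipaddress.IPv6Network,
                          host_id: int) -> ipaddress.IPv6Address:
    """Step 630 sketch: fill the last 64 bits of the /64 prefix with the
    host identifier, implicitly zero-padded on the left to 64 bits."""
    assert prefix.prefixlen == 64 and 0 <= host_id < 2**64
    return ipaddress.IPv6Address(int(prefix.network_address) | host_id)

prefix = ipaddress.IPv6Network("fd10:0:0:1::/64")
addr = generate_ipv6_address(prefix, 0x0242AC110002)  # hypothetical 48-bit MAC
print(addr)
```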

Method 600 includes a step 635 of advertising the IPv6 address generated at step 630. This operation is also well known in the IPv6 networking arts and relies on various layer 2 and/or layer 3 communication protocols. The advertisement is made on one or more communication interfaces 530 of computing device 500. For example, in the case of a backbone switch 100, the IPv6 address is advertised on all communication ports of the backbone switch 100, only on dedicated management ports of the backbone switch 100, only on ports of the backbone switch 100 that are connected to leaf switches 200, and so on. Similarly, in the case of leaf switch 200, IPv6 addresses are advertised on all communication ports of leaf switch 200, only on dedicated management ports of leaf switch 200, only on ports of leaf switch 200 that are connected to backbone switch 100 or controller 400, and so on.

Once step 630 is complete, computing device 500 is able to transmit data to other computing devices over IPv6 network 20. The IPv6 address generated at step 630 is used as the source IPv6 address for IPv6 packets transmitted to other computing devices. For example, the controller 400 transmits data to the leaf switch 200 or the backbone switch 100 through the IPv6 network 20.

Once step 635 is complete, computing device 500 can receive data over IPv6 network 20 from other computing devices that have received the advertised IPv6 address. The IPv6 address advertised at step 635 is used as the destination IPv6 address for IPv6 packets received from other computing devices. For example, the controller 400 receives data from the leaf switch 200 or the backbone switch 100 through the IPv6 network 20.

Steps 625, 630, and 635 of method 600 may be repeated several times based on the information stored in configuration file 521. By contrast, steps 605 to 620 need only be performed once to generate and store the data required by steps 625 and 630. For example, steps 625, 630, and 635 are repeated at each boot of computing device 500, while steps 605 to 620 are only performed at the first boot of computing device 500 (as long as computing device 500 remains located in the same architecture).

Further, the configuration file may include several IPv6 base prefixes. Steps 625, 630, and 635 of method 600 are then repeated to generate (at step 630) several IPv6 addresses, each based on a different one of the IPv6 base prefixes. This enables computing device 500 to become part of several IPv6 networks.
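
The repetition over several base prefixes amounts to a simple loop; the base prefixes, architecture identifier, and host identifier below are illustrative values, not taken from the disclosure.

```python
import ipaddress

# Hypothetical configuration-file contents with two IPv6 base prefixes.
base_prefixes = ["fd10:0:0::/48", "fd20:0:0::/48"]
fabric_id = 0x0001
host_id = 0x0242AC110002

addresses = []
for bp in base_prefixes:
    net = ipaddress.IPv6Network(bp)
    # /48 base prefix + 16-bit architecture identifier -> /64 prefix.
    prefix_int = int(net.network_address) | (fabric_id << 64)
    addresses.append(ipaddress.IPv6Address(prefix_int | host_id))

for addr in addresses:
    print(addr)  # one IPv6 address per base prefix / IPv6 network
```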

Although the present disclosure has been described above by way of non-limiting illustrative embodiments, these embodiments can be freely modified within the scope of the appended claims without departing from the spirit and nature of the disclosure.
