Fabric switching graphics module within a storage enclosure

Document No.: 1409655    Publication date: 2020-03-06

Reading note: This technology, "Fabric switching graphics module within a storage enclosure," was designed and created by J. Breakstone, C. R. Long, G. Kazakov, and J. S. Cannata on 2018-04-23. Its main content includes: Disaggregated computing architectures, platforms, and systems are provided herein. In one embodiment, a data system is provided. The data system includes a first component comprising a plurality of modular storage trays housing one or more graphics processing modules, each graphics processing module comprising a Graphics Processing Unit (GPU), wherein the plurality of modular storage trays each comprise a tray connector comprising a tray Peripheral Component Interconnect Express (PCIe) connection. The first component also includes PCIe switch circuitry configured to communicatively couple the tray PCIe connections to a PCIe fabric through one or more external PCIe links.

1. A data system, comprising:

a first component comprising a plurality of modular storage trays housing one or more graphics processing modules, each graphics processing module comprising a Graphics Processing Unit (GPU), wherein the plurality of modular storage trays each comprise a tray connector comprising a tray peripheral component interconnect express (PCIe) connection; and

PCIe switch circuitry included in the first component and configured to communicatively couple the tray PCIe connections to a PCIe fabric through one or more external PCIe links.

2. The data system of claim 1, comprising:

a second component configured to provide the PCIe fabric and comprising one or more compute modules coupled through the PCIe fabric and configured to communicate with one or more GPUs housed in the first component.

3. The data system of claim 2, comprising:

the second component comprising at least one PCIe switch circuit configured to provide isolation between the tray PCIe connections and to dynamically establish a Direct Memory Access (DMA) arrangement between a Central Processing Unit (CPU) included in the one or more compute modules and GPUs included in one or more graphics modules.

4. The data system of claim 3, wherein the one or more graphics modules each include an associated PCIe root complex.

5. The data system of claim 3, wherein the one or more graphics modules each comprise an associated PCIe endpoint.

6. The data system of claim 1, wherein the one or more graphics processing modules each include a carrier insertable into a single one of the modular storage trays and coupled to an associated tray connector.

7. The data system of claim 1, wherein the one or more graphics processing modules each comprise a carrier spanning more than one of the modular storage trays and coupled to more than one tray connector.

8. The data system of claim 7, wherein a carrier spanning more than one of the modular storage trays is provided with the combined power of the more than one tray connector coupled to the carrier, and wherein the carrier spanning more than one of the modular storage trays is provided with more than one tray PCIe connection coupled to the carrier to couple a quantity of PCIe lanes to the carrier that exceeds that of a single tray PCIe connection.

9. The data system of claim 1, comprising:

the first component further comprising a backplane assembly comprising the tray connectors of the plurality of modular storage trays to communicatively couple a backside of the one or more graphics processing modules to the PCIe fabric; and

a peer-to-peer connector on a front side of the one or more graphics processing modules to couple at least two of the one or more graphics processing modules over a point-to-point communication link separate from the PCIe fabric.

10. A computing system, comprising:

a just a bunch of disks (JBOD) chassis comprising a plurality of storage drive bays housing one or more graphics processing modules, each graphics processing module comprising at least one Graphics Processing Unit (GPU), wherein the plurality of storage drive bays each comprise at least one storage drive connector comprising a bay peripheral component interconnect express (PCIe) connection; and

PCIe switch circuitry included in the JBOD chassis and configured to communicatively couple the bay PCIe connections to a PCIe fabric through one or more external PCIe links.

11. The computing system of claim 10, comprising:

a computer chassis comprising one or more compute modules coupled through the PCIe fabric and configured to communicate with one or more GPUs housed in the JBOD chassis over the one or more external PCIe links.

12. The computing system of claim 11, comprising:

the computer chassis comprising at least one PCIe switch circuit configured to provide isolation between the bay PCIe connections and to dynamically establish a Direct Memory Access (DMA) arrangement between a Central Processing Unit (CPU) included in the one or more compute modules and GPUs included in one or more graphics modules.

13. The computing system of claim 12, wherein the one or more graphics modules each comprise an associated PCIe root complex.

14. The computing system of claim 12, wherein the one or more graphics modules each comprise an associated PCIe endpoint.

15. The computing system of claim 10, wherein the one or more graphics processing modules each comprise a carrier insertable into a single one of the storage drive bays and coupled to an associated storage drive connector.

16. The computing system of claim 10, wherein the one or more graphics processing modules each comprise a carrier spanning more than one of the storage drive bays and are coupled to more than one storage drive connector.

17. The computing system of claim 16, wherein a carrier spanning more than one of the storage drive bays is provided with the combined power of the more than one storage drive connector coupled to the carrier, and wherein the carrier spanning more than one of the storage drive bays is provided with more than one bay PCIe connection coupled to the carrier to couple a quantity of PCIe lanes to the carrier that exceeds that of a single bay PCIe connection.

18. The computing system of claim 10, comprising:

the JBOD chassis further comprising a backplane assembly including the storage drive connectors of the plurality of storage drive bays to communicatively couple a backside of the one or more graphics processing modules to the PCIe fabric; and

a peer-to-peer connector on a front side of the one or more graphics processing modules to couple at least two of the one or more graphics processing modules over a point-to-point communication link separate from the PCIe fabric.

19. A data processing system comprising:

a first enclosure comprising one or more computing modules coupled by a peripheral component interconnect express (PCIe) fabric; and

a second enclosure comprising at least one PCIe switch circuit and a plurality of graphics processing modules, each graphics processing module comprising a Graphics Processing Unit (GPU), wherein the PCIe switch circuit is configured to communicatively couple the plurality of graphics processing modules to a PCIe fabric of the first enclosure over at least one external PCIe link communicatively coupling the first enclosure to the second enclosure.

20. The data processing system of claim 19, wherein the second enclosure comprises a just a bunch of disks (JBOD) chassis comprising a plurality of storage drive bays housing one or more of the graphics processing modules, wherein the plurality of storage drive bays each comprise at least one U.2 storage drive connector, the U.2 storage drive connector comprising a bay PCIe connector.

Background

Computer systems typically include a data storage system and various processing systems, which may include a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU). As data processing requirements and data storage requirements increase in these computer systems, networked storage systems have been introduced that process large amounts of data in computing environments that are physically separate from the end-user computer devices. These networked storage systems typically provide end-users or other external systems with access to mass data storage and data processing through one or more network interfaces. These networked storage systems and remote computing systems may be included in a high-density facility, such as a rack-mounted environment.

However, as the density of networked storage systems and remote computing systems increases, various physical limits may be reached. These limits include density limits based on the underlying storage technology, such as in embodiments of large arrays of rotating magnetic media storage systems. These limits may also include computational or data processing density limits based on various physical space requirements of the data processing equipment and network interconnections, as well as the large space requirements of the environmental climate control system. In addition to physical space limitations, these data systems have traditionally been limited in the number of devices that each host can include, which can be problematic in environments where higher capacity, redundancy, and reliability are desired. These drawbacks may be particularly acute with the ever-increasing data storage and processing needs in networked, cloud, and enterprise environments.

Disclosure of Invention

Disaggregated computing architectures, platforms, and systems are provided herein. In one embodiment, a data system is provided. The data system includes a first component including a plurality of modular storage trays housing one or more graphics processing modules, each graphics processing module including a Graphics Processing Unit (GPU), wherein the plurality of modular storage trays each include a tray connector including a tray Peripheral Component Interconnect Express (PCIe) connection. The first component also includes PCIe switch circuitry configured to communicatively couple the tray PCIe connections to a PCIe fabric through one or more external PCIe links.

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. It can be appreciated that this summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Drawings

Many aspects of this disclosure can be better understood with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, the present disclosure is not limited to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications and equivalents.

FIG. 1 illustrates a computing platform in one embodiment.

FIG. 2 illustrates management of a computing platform in one embodiment.

FIG. 3 illustrates a management processor in one embodiment.

FIG. 4 illustrates the operation of the computing platform in one embodiment.

FIG. 5 illustrates components of a computing platform in one embodiment.

FIG. 6A illustrates components of a computing platform in one embodiment.

FIG. 6B illustrates components of a computing platform in one embodiment.

FIG. 7 illustrates components of a computing platform in one embodiment.

FIG. 8 illustrates components of a computing platform in one embodiment.

FIG. 9 illustrates components of a computing platform in one embodiment.

FIG. 10 illustrates components of a computing platform in one embodiment.

FIG. 11 illustrates components of a computing platform in one embodiment.

FIG. 12 illustrates components of a computing platform in one embodiment.

FIG. 13 illustrates the operation of the computing platform in one embodiment.

FIG. 14 illustrates components of a computing platform in one embodiment.

Detailed Description

FIG. 1 is a system diagram illustrating a computing platform 100. Computing platform 100 includes one or more management processors 110 and a plurality of physical computing components. The physical computing components include processors 120, storage elements 130, network elements 140, peripheral component interconnect express (PCIe) switch elements 150, and Graphics Processing Units (GPUs) 170. These physical computing components are communicatively coupled through a PCIe fabric 151 formed by PCIe switch elements 150 and various corresponding PCIe links. PCIe fabric 151 is configured to communicatively couple the physical computing components and to build compute blocks using logical partitions within the PCIe fabric. These compute blocks, referred to in fig. 1 as machines 160, may each be composed of any number of processors 120, storage units 130, network interface modules 140, and GPUs 170, including zero of any given module type.

The components of platform 100 may be included in one or more physical enclosures, such as rack-mountable units that may further be included in shelving units or rack units. A predetermined number of components of platform 100 may be inserted or installed into a physical enclosure, such as a modular framework into which modules may be inserted and removed according to the needs of a particular end user. An enclosed modular system, such as platform 100, may include a physical support structure and an enclosure that contains circuitry, printed circuit boards, semiconductor systems, and structural elements. The modules that comprise the components of platform 100 are insertable into and removable from the rack-mounted enclosure. In some embodiments, the elements of fig. 1 are included in a chassis (e.g., 1U, 2U, or 3U) for installation in a larger rack-mount environment. It should be understood that the elements of fig. 1 may be included in any physical mounting environment and need not include any associated housing or rack-mounted elements.

In addition to the components described above, an external enclosure may be employed that includes multiple graphics modules, graphics cards, or other graphics processing elements that include GPU portions. In fig. 1, a just a bunch of disks (JBOD) enclosure is shown that includes PCIe switch circuitry that couples any number of the included devices (such as GPU 191) over one or more PCIe links to another enclosure that includes the computing, storage, and network elements discussed above. The enclosure need not be a JBOD enclosure, but typically includes modular slots or bays into which individual graphics modules may be inserted and from which they may be removed. In a JBOD deployment, disk drives or storage devices are typically inserted to create a storage system. In the embodiments herein, however, graphics modules are inserted instead of storage drives or storage modules, which advantageously allows a large number of GPUs to be coupled for data processing/graphics processing within a similar physical enclosure space. In one embodiment, a JBOD enclosure may include 24 slots for storage/drive modules that instead house one or more GPUs carried on graphics modules. The external PCIe link coupling the enclosures may include any of the external PCIe link physical and logical embodiments discussed herein.
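To make this arrangement concrete, the following minimal Python sketch models a JBOD-style enclosure whose drive bays carry GPU modules rather than storage drives. The 24-bay count follows the example above; the class names (JbodEnclosure, Bay, GpuModule) and the lane and link figures are illustrative assumptions rather than details taken from this disclosure.

```python
# Minimal sketch of a JBOD-style enclosure whose drive bays carry GPU modules
# instead of drives. Names and figures here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class GpuModule:
    serial: str
    pcie_lanes: int = 4          # lanes presented at the bay (U.2-style) connector

@dataclass
class Bay:
    index: int
    module: Optional[GpuModule] = None   # a GPU carrier may occupy the bay

@dataclass
class JbodEnclosure:
    bays: List[Bay] = field(default_factory=lambda: [Bay(i) for i in range(24)])
    external_links: int = 2              # external PCIe links to the compute enclosure

    def insert(self, bay_index: int, module: GpuModule) -> None:
        self.bays[bay_index].module = module

    def attached_gpus(self) -> List[GpuModule]:
        # The PCIe switch circuitry would expose each occupied bay connection
        # to the PCIe fabric over the external links.
        return [b.module for b in self.bays if b.module is not None]

enclosure = JbodEnclosure()
enclosure.insert(0, GpuModule("GPU-0001"))
enclosure.insert(1, GpuModule("GPU-0002"))
print(f"{len(enclosure.attached_gpus())} GPU(s) visible over "
      f"{enclosure.external_links} external PCIe link(s)")
```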

Once the components of platform 100 have been inserted into one or more enclosures, the components may be coupled through a PCIe fabric and logically isolated into any number of separate "machines" or compute blocks. The PCIe fabric may be configured by the management processor 110 to selectively route traffic between components of a particular compute module and with external systems while maintaining logical isolation between components not included in a particular compute module. In this manner, a flexible "bare metal" configuration may be established between components of platform 100. Each computing block may be associated with an external user or client machine that may utilize the computing resources, storage resources, network resources, or graphics processing resources of the computing block. Furthermore, for greater parallelism and capacity, any number of compute blocks may be grouped into a "cluster" of compute blocks. Although not shown in fig. 1 for clarity, various power supply modules and associated power and control distribution links may also be included.

Turning now to the components of platform 100, management processor 110 may include one or more microprocessors and other processing circuitry that retrieves and executes software from an associated storage system, such as user interface 112 and management operating system 111. Processor 110 may be implemented within a single processing device, but may also be distributed across multiple processing devices or subsystems that cooperate in executing program instructions. Embodiments of processor 110 include a general purpose central processing unit, a special purpose processor and a logic device, as well as any other type of processing device, combinations, or variations thereof. In some embodiments, processor 110 comprises an Intel microprocessor or AMD microprocessor, ARM microprocessor, FPGA, ASIC, application specific processor, or other microprocessor or processing element.

In fig. 1, the processor 110 provides an interface 113. Interface 113 includes a communication link between processor 110 and any components coupled to PCIe fabric 151. This interface uses Ethernet traffic transmitted over a PCIe link. In addition, each processor 120 in fig. 1 is configured with a driver 141 that enables Ethernet communication over a PCIe link. Thus, any of processor 110 and processors 120 may communicate over an Ethernet network that is transmitted over the PCIe fabric. Further discussion of this Ethernet-over-PCIe configuration is provided below.

A plurality of processors 120 are included in the platform 100. Each processor 120 includes one or more microprocessors and other processing circuitry that retrieves and executes software from an associated memory system, such as drivers 141 and any number of end-user applications. Each processor 120 may be implemented within a single processing device, but may also be distributed across multiple processing devices or subsystems that cooperate in executing program instructions. Embodiments of each processor 120 include a general purpose central processing unit, a special purpose processor and a logic device, as well as any other type of processing device, combinations, or variations thereof. In some embodiments, each processor 120 includes an Intel microprocessor or AMD microprocessor, an ARM microprocessor, a graphics processor, a computing core, a graphics core, an Application Specific Integrated Circuit (ASIC), or other microprocessor or processing element. Each processor 120 may also communicate with other computing units (such as computing units within the same storage component/enclosure or another storage component/enclosure) through one or more PCIe interfaces and PCIe fabric 151.

A plurality of storage units 130 are included in the platform 100. Each storage unit 130 includes one or more storage drives, such as solid state drives in some embodiments. Each storage unit 130 also includes a PCIe interface, a control processor, and power system elements. Each storage unit 130 also includes an on-sled processor or control system for traffic statistics and condition monitoring, as well as other operations. Each storage unit 130 includes one or more solid state memory devices having a PCIe interface. In still other embodiments, each storage unit 130 includes one or more individual Solid State Drives (SSDs) or magnetic Hard Disk Drives (HDDs) and associated enclosures and circuitry.

A plurality of Graphics Processing Units (GPUs) 170 are included in the platform 100. Each GPU includes graphics processing resources that may be allocated to one or more compute units. A GPU may include a graphics processor, shader, pixel rendering element, frame buffer, texture mapper, graphics core, graphics pipeline, graphics memory, or other graphics processing and handling elements. In some embodiments, each GPU 170 comprises a graphics "card" that includes circuitry to support a GPU chip. Example GPU cards include nVidia Jetson or Tesla cards that include graphics processing and computing elements, along with various supporting circuitry, connectors, and other elements. Some example GPU modules also include a CPU or other processor to facilitate the functioning of the GPU elements, as well as PCIe interfaces and related circuitry. GPU elements 191 may include the elements discussed above for GPU 170, and further include a physical module or carrier that may be inserted into a slot or bay of an associated JBOD or other enclosure.

Network interface 140 comprises a network interface card for communicating over a TCP/IP (Transmission Control Protocol/Internet Protocol) network or for carrying user traffic, such as iSCSI (Internet Small Computer System Interface) or NVMe (NVM Express) traffic for storage units 130 or other TCP/IP traffic for processors 120. The network interface 140 may include Ethernet interface equipment and may communicate over wired, optical, or wireless links. External access to the components of platform 100 is provided through packet network links provided by network interface 140. Network interface 140 communicates with other components of platform 100, such as processors 120 and storage units 130, over associated PCIe links and PCIe fabric 151. In some embodiments, a network interface is provided for inter-system network communications over Ethernet for exchanging communications between any of processors 120 and processor 110.

Each PCIe switch 150 communicates over an associated PCIe link. In the embodiment in fig. 1, PCIe switch 150 may be used to carry user data between network interface 140, storage unit 130, and processing unit 120. Each PCIe switch 150 comprises a PCIe cross-connect switch for establishing switched connections between any PCIe interfaces handled by each PCIe switch 150. In some embodiments, the PCIe switches 150 include PLX/Broadcom/Avago PEX 8796 24-port, 96-lane PCIe switch chips, PEX 8725 10-port, 24-lane PCIe switch chips, PEX 97xx chips, PEX 9797 chips, or other PEX 87xx/PEX 97xx chips.

The PCIe switches discussed herein may include PCIe crosspoint switches that logically interconnect various ones of the associated PCIe links based at least on traffic carried by each PCIe link. In these embodiments, a domain-based PCIe signaling distribution may be included that allows for the segregation of PCIe ports of PCIe switches according to user-defined groups. The user-defined groups may be managed by a processor 110, the processor 110 logically integrating components into associated computing units 160 of one particular cluster and logically isolating components and computing units between different clusters. In addition to or as an alternative to the domain-based partitioning, each PCIe switch port may be a non-transparent (NT) port or a transparent port. NT ports may allow some logical isolation between endpoints, much like bridges, while transparent ports do not allow logical isolation and have the effect of connecting endpoints in a pure switch configuration. Access through one or more NT ports may include additional handshaking between the PCIe switch and the initiating endpoint to select a particular NT port or to allow visibility through the NT port.
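As a rough illustration of the port grouping just described, the sketch below models PCIe switch ports that can be placed into user-defined domains with non-transparent (NT) or transparent modes. The PortMode/PcieSwitch names and the domain-checking logic are assumptions for illustration, not an actual switch configuration interface.

```python
# Illustrative sketch of domain-based partitioning of PCIe switch ports.
# Real switches are configured through vendor-specific registers/SDKs.
from dataclasses import dataclass, field
from enum import Enum

class PortMode(Enum):
    TRANSPARENT = "transparent"           # pure switching, no isolation
    NON_TRANSPARENT = "non-transparent"   # NT port, bridge-like isolation

@dataclass
class SwitchPort:
    number: int
    mode: PortMode = PortMode.TRANSPARENT
    domain: int | None = None             # user-defined group / compute unit

@dataclass
class PcieSwitch:
    ports: dict[int, SwitchPort] = field(default_factory=dict)

    def assign_to_domain(self, port: int, domain: int,
                         mode: PortMode = PortMode.NON_TRANSPARENT) -> None:
        self.ports[port] = SwitchPort(port, mode, domain)

    def same_domain(self, a: int, b: int) -> bool:
        pa, pb = self.ports.get(a), self.ports.get(b)
        return pa is not None and pb is not None and pa.domain == pb.domain

switch = PcieSwitch()
switch.assign_to_domain(port=1, domain=7)   # e.g. a CPU-facing port
switch.assign_to_domain(port=9, domain=7)   # e.g. a GPU-facing port
switch.assign_to_domain(port=2, domain=8)   # a different compute unit
print(switch.same_domain(1, 9), switch.same_domain(1, 2))  # True False
```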

PCIe may support multiple bus widths, such as x1, x4, x8, x16, and x32, with each bus width comprising additional "lanes" for data transfer. PCIe also supports the transfer of sideband signaling, such as system management bus (SMBus) and Joint Test Action Group (JTAG) interfaces, as well as associated clock, power, bootstrapping, and other signaling. Although PCIe is used in fig. 1, it should be understood that different communication links or buses may be employed instead, such as NVMe, Ethernet, Serial Attached SCSI (SAS), Fibre Channel, Thunderbolt, Serial ATA Express (SATA Express), other high-speed serial near-range interfaces, and various network and link interfaces. Any of the links in fig. 1 may each use various communication media, such as air, space, metal, optical fiber, or some other signal propagation path, including combinations thereof. Any of the links in fig. 1 may include any number of PCIe links or lane configurations. Any of the links in fig. 1 may each be a direct link or may include various equipment, intermediate components, systems, and networks. Any of the links in fig. 1 may each be a common link, a shared link, an aggregated link, or may be comprised of discrete, individual links.

In fig. 1, any compute module 120 has configurable logical visibility to any/all storage units 130 or GPUs 170/191, as logically separated by the PCIe fabric. Any compute module 120 may transfer data for storage on any storage unit 130 and may retrieve data stored on any storage unit 130. Thus, "m" storage drives may be coupled with "n" processors to allow for a large, scalable architecture with a high level of redundancy and density. Further, any compute module 120 may transfer data for processing by any GPU 170/191 or may hand over control of any GPU to another compute module 120.

To provide visibility of each compute module 120 to any storage unit 130 or GPU 170/191, various techniques may be employed. In a first embodiment, management processor 110 establishes a cluster that includes one or more computing units 160. These computing units include one or more processor 120 elements, zero or more storage units 130, zero or more network interface units 140, and zero or more graphics processing units 170/191. The components of these compute units are communicatively coupled to an external enclosure, such as JBOD 190, through portions of PCIe fabric 151 and any associated external PCIe interfaces. Once computing units 160 have been assigned to a particular cluster, other resources (such as storage resources, graphics processing resources, and network interface resources, among others) may be assigned to the cluster. Management processor 110 may instantiate/bind a subset of the total storage resources of platform 100 to a particular cluster and for use by one or more computing units 160 of that cluster. For example, 16 storage drives spanning 4 storage units may be assigned to a set of two compute units 160 in a cluster. The compute units 160 assigned to a cluster then process transactions, such as read transactions and write transactions, for that subset of storage units.

Each computing unit 160, and in particular the processor of the computing unit, may have memory-mapped visibility or routing-table-based visibility to the storage units or graphics units within the cluster, while units not associated with the cluster are generally inaccessible to the computing unit until logical visibility is granted. Furthermore, each computing unit may manage only a subset of the storage units or graphics units of an associated cluster. A storage operation or graphics processing operation managed by a second computing unit may nevertheless be received through a network interface associated with a first computing unit. When an operation targets a resource unit not managed by the first computing unit (i.e., one managed by the second computing unit), the first computing unit uses memory-mapped access or routing-table-based visibility to direct the operation, by way of the second computing unit, to the appropriate resource unit for the transaction. The transaction is transferred to the appropriate computing unit that manages the resource unit associated with the data of the transaction. For storage operations, the PCIe fabric is used to transfer data between the compute units/processors of a cluster so that a particular compute unit/processor may store the data in a storage unit or storage drive managed by that particular compute unit/processor, even though the data may have been received through a network interface associated with a different compute unit/processor. For graphics processing operations, the PCIe fabric is used to transfer graphics data and graphics processing commands between the compute units/processors of a cluster so that a particular compute unit/processor may control one or more GPUs managed by that particular compute unit/processor, even though the data may have been received through a network interface associated with a different compute unit/processor. Thus, while each particular compute unit of a cluster actually manages only a subset of all resource units (such as the storage drives in a storage unit or the graphics processors in a graphics unit), all compute units of the cluster have visibility into any of the cluster's resource units and may initiate transactions to any of them. The managing compute unit that manages a particular resource unit may receive the retransferred transaction and any associated data from the initiating compute unit by using at least a memory-mapped address space or routing table to establish which processing module handles storage operations for a particular set of storage units.
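The forwarding behavior described above can be pictured with a short sketch in which a shared routing table maps each resource unit to its managing compute unit, and any receiving unit hands off operations it does not manage. The names and data shapes here are hypothetical, not taken from the disclosure.

```python
# Sketch of the "forward to the managing compute unit" idea: a routing table
# maps each resource unit to the processor that manages it, and an operation
# received by any processor is handed to the manager over the fabric.
from dataclasses import dataclass

@dataclass
class Operation:
    resource_id: str        # e.g. a storage drive or GPU identifier
    payload: bytes

class ComputeUnit:
    def __init__(self, name: str, routing_table: dict[str, "ComputeUnit"]):
        self.name = name
        self.routing_table = routing_table   # shared, cluster-wide view

    def submit(self, op: Operation) -> str:
        manager = self.routing_table[op.resource_id]
        if manager is self:
            return f"{self.name}: handled {op.resource_id} locally"
        # In the platform this transfer would occur over the PCIe fabric.
        return manager.submit(op)

table: dict[str, "ComputeUnit"] = {}
cu0, cu1 = ComputeUnit("cu0", table), ComputeUnit("cu1", table)
table.update({"ssd-3": cu0, "gpu-5": cu1})
print(cu0.submit(Operation("gpu-5", b"\x00")))  # forwarded to cu1
```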

In a graphics processing embodiment, the NT partitions or domain-based partitions in the switched PCIe fabric may be provided by one or more of the PCIe switches having NT ports or domain-based features. This partitioning may ensure that the GPU can interwork with a desired compute unit, and that more than one GPU, such as more than eight (8) GPUs, can be associated with a particular compute unit. Further, the dynamic GPU-compute unit relationship may be adjusted on-the-fly using partitions on the PCIe fabric. Shared network resources may also be applied to graphics processing elements across compute units. For example, when a first compute processor determines that the first compute processor is not physically managing a graphics unit associated with a received graphics operation, then the first compute processor transmits the graphics operation over the PCIe fabric to another compute processor of the cluster that does manage the graphics unit.

In other embodiments, memory mapped Direct Memory Access (DMA) pipes may be formed between individual CPU/GPU pairs. This memory mapping may occur over the PCIe fabric address space, among other configurations. To provide these DMA pipes over a shared PCIe fabric that includes many CPUs and GPUs, the logical partitioning described herein may be employed. In particular, NT ports or domain-based partitions on PCIe switches may isolate individual DMA pipes between associated CPU/GPUs.
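A simplified sketch of carving per-pair DMA "pipes" out of a shared fabric address space follows; the aperture size, base address, and allocator are illustrative assumptions, not values from the disclosure.

```python
# Sketch of allocating non-overlapping per-CPU/GPU DMA windows within a
# shared PCIe fabric address space. Figures here are placeholders.
from dataclasses import dataclass

WINDOW = 1 << 30   # hypothetical 1 GiB aperture per CPU/GPU pair

@dataclass(frozen=True)
class DmaPipe:
    cpu: str
    gpu: str
    base: int
    size: int = WINDOW

class FabricAddressSpace:
    def __init__(self, base: int = 0x4000_0000_0000):
        self.next_base = base
        self.pipes: list[DmaPipe] = []

    def create_pipe(self, cpu: str, gpu: str) -> DmaPipe:
        pipe = DmaPipe(cpu, gpu, self.next_base)
        self.next_base += WINDOW          # non-overlapping, isolated windows
        self.pipes.append(pipe)
        return pipe

fabric = FabricAddressSpace()
p = fabric.create_pipe("cpu0", "gpu3")
print(f"{p.cpu}<->{p.gpu} mapped at 0x{p.base:012x}")
```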

In a storage operation (such as a write operation), data may be received by a particular processor of a particular cluster through the cluster's network interface 140. Load balancing or other factors may allow any network interface of the cluster to receive storage operations for any of the processors of the cluster and any of the storage units for the cluster. For example, the write may be a write received from an end user employing iSCSI protocol or NVMe protocol through the first network interface 140 of the first cluster. A first processor of the cluster may receive the write operation and determine whether the first processor manages one or more storage drives associated with the write operation, and if the first processor manages the one or more storage drives associated with the write operation, the first processor transmits data over a PCIe fabric for storage on the associated storage drives of the storage unit. Each PCIe switch 150 of the PCIe fabric may be configured to route PCIe traffic associated with the cluster between various storage, processor, and network elements of the cluster, such as using domain-based routing or NT ports. If the first processor determines that the first processor does not physically manage the one or more storage drives associated with the write operation, the first processor transmits the write operation over a PCIe fabric to another processor of the cluster that does manage the one or more storage drives. Data striping may be employed by any processor to stripe data of a particular write transaction across any number of storage drives or storage units, such as across one or more of the storage units of a cluster.
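For the striping mentioned above, a minimal sketch of round-robin striping of a write payload across several drives is shown below; the chunk size and drive names are assumptions.

```python
# Sketch of striping one write across the storage drives a processor manages.
def stripe(data: bytes, drives: list[str], chunk: int = 4096) -> dict[str, bytes]:
    """Round-robin the write payload across the given drives."""
    out: dict[str, bytes] = {d: b"" for d in drives}
    for i in range(0, len(data), chunk):
        drive = drives[(i // chunk) % len(drives)]
        out[drive] += data[i:i + chunk]
    return out

parts = stripe(b"x" * 10000, ["nvme0", "nvme1", "nvme2"])
print({d: len(b) for d, b in parts.items()})   # bytes landed on each drive
```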

In this embodiment, PCIe fabric 151 associated with platform 100 has a 64-bit address space, which allows for 2^64 bytes of addressable space, resulting in at least 16 exbibytes of byte-addressable memory. The 64-bit PCIe address space may be shared by all compute units or partitioned among the various compute units forming a cluster to achieve the appropriate memory mapping to resource units. Each PCIe switch 150 of the PCIe fabric may be configured to separate and route PCIe traffic associated with a particular cluster among the various storage, computing, graphics processing, and network elements of the cluster. This separation and routing may be established using domain-based routing or NT ports to establish cross-point connections between the various PCIe switches of the PCIe fabric. Redundancy and failover paths may also be established so that when one or more of the PCIe switches fail or become unresponsive, traffic of the cluster may still be routed between the elements of the cluster. In some embodiments, a mesh configuration is formed by the PCIe switches of the PCIe fabric to ensure redundant routing of PCIe traffic.

The management processor 110 controls the operation of the PCIe switches 150 and the PCIe fabric 151 through one or more interfaces, which may include an inter-integrated circuit (I2C) interface communicatively coupling each PCIe switch of the PCIe fabric. The management processor 110 may use the PCIe switches 150 to establish NT-based or domain-based separations between PCIe address spaces. Each PCIe switch may be configured to separate portions of the PCIe address space to establish cluster-specific partitions. Various configuration settings for each PCIe switch may be altered by the management processor 110 to establish domain and cluster separation. In some embodiments, management processor 110 may include a PCIe interface and communicate with/configure the PCIe switches through a sideband interface, or through a PCIe interface with configuration traffic transmitted within PCIe protocol signaling.
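The following heavily hedged sketch suggests how partition settings might be pushed to each PCIe switch over a sideband channel such as I2C. The register offset, device address, and transport class are placeholders for illustration, not a real vendor register map or driver API.

```python
# Sketch of a management processor pushing per-port domain assignments to a
# PCIe switch over a sideband channel. All addresses/registers are placeholders.
class SidebandChannel:
    """Stand-in for an I2C/SMBus master; records writes instead of doing I/O."""
    def __init__(self) -> None:
        self.log: list[tuple[int, int, int]] = []

    def write(self, dev_addr: int, reg: int, value: int) -> None:
        self.log.append((dev_addr, reg, value))

PARTITION_REG = 0x10          # hypothetical per-port partition register base

def apply_partition(chan: SidebandChannel, switch_addr: int,
                    port_to_domain: dict[int, int]) -> None:
    for port, domain in port_to_domain.items():
        chan.write(switch_addr, PARTITION_REG + port, domain)

chan = SidebandChannel()
apply_partition(chan, switch_addr=0x48, port_to_domain={1: 7, 9: 7, 2: 8})
print(chan.log)
```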

A management Operating System (OS) 111 is executed by the management processor 110 and provides management of the resources of platform 100. This management includes creating, modifying, and monitoring one or more clusters that comprise one or more computing units. The management OS 111 provides the functions and operations described herein for the management processor 110. The management processor 110 also includes a user interface 112, which may present a Graphical User Interface (GUI) to one or more users. An end user or administrator may employ the user interface 112 and GUI to establish clusters and to assign assets (computing units/machines) to each cluster. The user interface 112 may provide interfaces other than a GUI, such as a command line interface, an Application Programming Interface (API), or other interfaces. In some embodiments, the GUI is provided through a WebSocket-based web interface.

More than one management processor may be included in the system, such as when each management processor manages resources for a predetermined number of clusters or computing units. User commands (such as user commands received through a GUI) may be received by any of the management processors of the system and may be forwarded by the receiving management processor to the handling management processor. Each management processor may have a unique or pre-assigned identifier that can aid in routing user commands to the appropriate management processor. In addition, the management processors may communicate with one another, such as by using a mailbox process or other data exchange technique. This communication may occur over a dedicated sideband interface, such as an I2C interface, or may occur over a PCIe or Ethernet interface coupled to each management processor.

The management OS 111 also includes an emulated network interface 113. Emulated network interface 113 includes a transport mechanism for transporting network traffic over one or more PCIe interfaces. The emulated network interface 113 may emulate a network device (such as an Ethernet device) to the management processor 110 so that the management processor 110 may interact/interface with any of the processors 120 over a PCIe interface as if the processors were communicating over a network interface. The emulated network interface 113 may include a kernel-level element or module that allows the management OS 111 to interface using Ethernet commands and drivers. The emulated network interface 113 allows an application or OS-level process to communicate with the emulated network device without the delays and processing overhead associated with a network stack. The emulated network interface 113 comprises a driver or module, such as a kernel-level module, that appears as a network device to application-level and system-level software executed by the processor device but does not require network stack processing. Instead, the emulated network interface 113 transfers the associated traffic over a PCIe interface or the PCIe fabric to another emulated network device. Advantageously, the emulated network interface 113 does not employ network stack processing yet still appears as a network device, so that software of the associated processor can interact with the emulated network device without modification.

Emulated network interface 113 translates PCIe traffic into network device traffic and vice versa. Processing of communications through a network stack, which would typically be employed for the type of network device/interface presented, is omitted. For example, the network device may appear to the operating system or an application as an Ethernet device. Communications received from the operating system or application are to be transferred by the network device to one or more destinations. However, the emulated network interface 113 does not include a network stack for processing communications downward from the application layer to the link layer. Instead, the emulated network interface 113 extracts the payload data and destination from a communication received from the operating system or application and translates the payload data and destination into PCIe traffic, such as by encapsulating the payload data into PCIe frames using addressing associated with the destination.
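A small sketch of the encapsulation step described above follows: payload data and a destination address are wrapped in a minimal header for transfer over the fabric, with no network stack involved. The frame layout is an assumption for illustration, not the actual PCIe framing.

```python
# Sketch of wrapping payload + destination for transfer over the fabric,
# bypassing the usual network stack. The header layout is illustrative.
import struct

def encapsulate(payload: bytes, dest_fabric_addr: int) -> bytes:
    """Prepend a tiny header carrying the destination fabric address."""
    return struct.pack(">QI", dest_fabric_addr, len(payload)) + payload

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    dest, length = struct.unpack(">QI", frame[:12])
    return dest, frame[12:12 + length]

frame = encapsulate(b"hello peer", dest_fabric_addr=0x2000_0000_1000)
print(decapsulate(frame))
```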

A management driver 141 is included on each processor 120. The management driver 141 may include an emulated network interface, such as discussed with respect to the emulated network interface 113. In addition, the management driver 141 monitors operation of the associated processor 120 and software executed by the processor 120 and provides telemetry for this operation to the management processor 110. Thus, any user-provided software, such as user-provided operating systems (Windows, Linux, MacOS, Android, iOS, etc.) or user applications and drivers, may be executed by each processor 120. The management driver 141 provides functionality to allow each processor 120 to participate in the associated computing unit and/or cluster, as well as to provide telemetry data to the associated management processor. Each processor 120 may also communicate with each other over emulated network devices that transport network traffic over the PCIe fabric. The driver 141 also provides an API for user software and operating systems to interact with driver 141 and to exchange control/telemetry signaling with the management processor 110.
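The kind of control/telemetry exchange that driver 141 could expose might look like the sketch below; the message fields and JSON encoding are illustrative assumptions, not the driver's actual API.

```python
# Sketch of a driver-side telemetry report and control-message handler.
# Field names and message format are hypothetical.
import json
import time

def build_telemetry(processor_id: str, cpu_load: float, gpu_ids: list[str]) -> str:
    """Package periodic telemetry for the management processor."""
    return json.dumps({
        "processor": processor_id,
        "timestamp": time.time(),
        "cpu_load": cpu_load,          # utilization fraction 0..1
        "gpus_attached": gpu_ids,      # GPUs currently assigned to this host
    })

def handle_control(message: str) -> dict:
    """Parse a control instruction (e.g. attach/detach a resource)."""
    return json.loads(message)

print(build_telemetry("proc-120-3", 0.42, ["gpu-5"]))
print(handle_control('{"action": "attach", "resource": "gpu-7"}'))
```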

FIG. 2 is a system diagram including further details regarding elements from FIG. 1. System 200 includes a detailed view of an embodiment of processor 120 and management processor 110.

In fig. 2, processor 120 may be an exemplary processor in any computing unit or machine of the cluster. Detailed view 201 shows several layers of processor 120. The first layer 121 is the hardware layer or "metal" machine infrastructure of the processor 120. The second layer 122 provides the OS and the management driver 141 and API 125. Finally, a third layer 124 provides user-level applications. View 201 illustrates that a user application may access storage resources, computing resources, graphics processing resources, and communication resources of a cluster, such as when the user application comprises a cluster storage system or a cluster processing system.

As discussed above, driver 141 provides an emulated network device for communicating with management processor 110 (or other processor 120 elements) over a PCIe fabric. This is shown in fig. 2 as Ethernet traffic over PCIe transport. However, no network stack is employed in driver 141 to transport this traffic over PCIe. Instead, driver 141 appears as a network device to the operating system or kernel of each processor 120. User-level services/applications/software may interact with the emulated network device without modification, as they would with an ordinary or physical network device. However, the traffic associated with the emulated network device is transported over a PCIe link or PCIe fabric, as shown. API 113 may provide a standardized interface for this management traffic, such as for control instructions, control responses, telemetry data, status information, or other data.

Fig. 3 is a block diagram illustrating a management processor 300. Management processor 300 illustrates one embodiment of any of the management processors discussed herein, such as processor 110 of fig. 1. Management processor 300 includes a communication interface 302, a user interface 303, and a processing system 310. Processing system 310 includes processing circuitry 311, Random Access Memory (RAM)312, and storage 313, but may include other elements.

Processing circuitry 311 may be implemented within a single processing device, but may also be distributed across multiple processing devices or subsystems that cooperate in executing program instructions. Embodiments of processing circuitry 311 include general purpose central processing units, microprocessors, special purpose processors, and logic devices, as well as any other type of processing device. In some embodiments, processing circuitry 311 comprises a physically distributed processing device, such as a cloud computing system.

The communication interface 302 includes one or more communication interfaces and network interfaces for communicating over a communication link, a network (such as a packet network, the internet, etc.). The communication interfaces may include a PCIe interface, an ethernet interface, a Serial Peripheral Interface (SPI) link, an integrated circuit bus (I2C) interface, a Universal Serial Bus (USB) interface, a UART interface, a wireless interface, or one or more local area network communication interfaces or wide area network communication interfaces that may communicate over an ethernet or Internet Protocol (IP) link. The communication interface 302 may include a network interface configured to communicate using one or more network addresses, which may be associated with different network links. Embodiments of communication interface 302 include network interface card equipment, transceivers, modems, and other communication circuitry.

The user interface 303 may include a touch screen, keyboard, mouse, voice input device, audio input device, or other touch input device for receiving input from a user. Output devices such as a display, speakers, web interface, terminal interface, and other types of output devices may also be included in user interface 303. The user interface 303 may provide output and receive input through a network interface, such as the communication interface 302. In network embodiments, the user interface 303 may group display data or graphical data for remote display by a display system or computing system coupled through one or more network interfaces. The physical or logical elements of the user interface 303 may provide an alert or visual output to a user or other operator. The user interface 303 may also include associated user interface software executable by the processing system 310 to support the various user input devices and output devices discussed above. The user interface software and user interface devices may support a graphical user interface, a natural user interface, or any other type of user interface, alone or in combination with each other and other hardware elements and software elements.

RAM 312 and storage 313 together may comprise a non-transitory data storage system, although variations are possible. The RAM 312 and the storage 313 may each include any storage medium readable by the processing circuitry 311 and capable of storing software. RAM 312 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules or other data. Storage 313 may include non-volatile storage media such as solid state storage media, flash memory, phase change memory, or magnetic memory, including combinations thereof. The RAM 312 and the storage 313 may each be implemented as a single storage device, but may also be implemented over multiple storage devices or subsystems. The RAM 312 and the storage 313 may each include additional elements, such as a controller, capable of communicating with the processing circuitry 311.

Software stored on or in RAM 312 or storage 313 may include computer program instructions, firmware, or some other form of machine-readable processing instructions with processes that, when executed by a processing system, direct processor 300 to operate as described herein. For example, software 320 may drive processor 300 to receive user commands to establish a cluster comprising computing blocks among a plurality of physical computing components comprising computing modules, storage modules, and network modules. Software 320 may drive processor 300 to receive and monitor telemetry, statistics, operational, and other data to provide telemetry to a user and to alter operation of the cluster based on the telemetry or other data. Software 320 may drive processor 300 to manage cluster and compute unit resources/graphics unit resources, establish domain partitions or NT partitions between PCIe fabric elements, and interface with various PCIe switches, among other operations. The software may also include a user software application, an Application Programming Interface (API), or a user interface. The software may be implemented as a single application or as multiple applications. In general, software, when loaded into a processing system and executed, may transform the processing system from a general-purpose device into a special-purpose device customized as described herein.

System software 320 illustrates a detailed view of an example configuration of RAM 312. It should be understood that different configurations are possible. System software 320 includes applications 321 and an Operating System (OS) 322. Software applications 323-326 each include executable instructions that are executable by processor 300 to operate a cluster controller or other circuitry in accordance with the operations discussed herein.

In particular, the cluster management application 323 establishes and maintains clusters and computing units among the various hardware elements of the computing platform, as seen in FIG. 1. The cluster management application 323 can also establish isolation functions that allow PCIe devices, such as GPUs, to be dynamically provisioned to (or de-provisioned from) communicative or logical connection over the associated PCIe fabric. User interface application 324 provides one or more graphical or other user interfaces for end users to manage and monitor operation of the associated clusters and computing units. Inter-module communication application 325 provides communication among other processor 300 elements, such as over I2C, Ethernet, emulated network devices, or PCIe interfaces. User CPU interface 327 provides communication, APIs, and emulated network devices for communicating with processors of the computing units and their dedicated driver elements. PCIe fabric interface 328 establishes various logical partitions or domains among PCIe switch elements, controls operation of the PCIe switch elements, and receives telemetry from the PCIe switch elements.

Software 320 may reside in RAM 312 during execution and operation of processor 300 and may reside in storage system 313 during a powered-off state, among other locations and states. Software 320 may be loaded into RAM 312 during a startup or boot procedure, as is done for computer operating systems and applications. Software 320 may receive user input through user interface 303. This user input may include user commands as well as other input, including combinations thereof.

The memory system 313 may include flash memory such as NAND flash or NOR flash, phase change memory, resistive memory, magnetic memory, and other solid state storage technologies. As shown in fig. 3, storage system 313 includes software 320. As described above, during the power-down state of processor 300, software 320, along with other operating software, may be in non-volatile storage space for applications and the OS.

Processor 300 is generally intended to represent a computing system on which at least software 320 is deployed and executed in order to render or otherwise carry out the operations described herein. However, processor 300 may also represent any computing system on which at least software 320 may be staged and from which software 320 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or for yet additional distribution.

Fig. 4 is a flow chart illustrating an embodiment of operations for any of the systems discussed herein, such as platform 100 of fig. 1, system 200 of fig. 2, or processor 300 of fig. 3. In fig. 4, operations will be discussed in the context of the elements of fig. 1 and 2, although the operations may also apply to elements of other figures herein.

The management processor 110 presents (401) a user interface to the cluster management service. This user interface may comprise a GUI or other user interface. The user interface allows a user to create a cluster (402) and allocate resources thereto. The cluster may be represented graphically according to which resources have been allocated, and may have an associated name or identifier specified by the user or predetermined by the system. The user may then build computing blocks (403) and assign these computing blocks to clusters. The computing block may have resource elements/units such as processing elements, graphics processing elements, storage elements, and network interface elements, among others.

Once the user has specified these various clusters and the computing blocks within the clusters, management processor 110 may implement (404) the instructions. This implementation may include allocating resources to particular clusters and compute blocks within an allocation table or data structure maintained by processor 110. The implementation may also include configuring a PCIe switch element of the PCIe fabric to logically divide the resources into routing domains of the PCIe fabric. The implementation may also include initializing the processor, storage driver, GPU, memory device, and network elements to bring these elements into an operational state and associate these elements with a particular cluster or compute unit. In addition, the initialization may include deploying user software to the processor, configuring the network interface with the associated address and network parameters, and establishing partitions or Logical Units (LUNs) among the storage elements. Once these resources have been allocated to the cluster/compute unit and initialized, they may then be made available to the user for execution of the user operating system, user applications, and for user storage processes, among other user purposes.
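One way to picture the implementation step (404) is the sketch below, which records resource allocations in a table and assigns all members of a new compute block to a single PCIe routing domain; the data structures are assumptions, not the management OS's actual tables.

```python
# Sketch of step 404: record the allocation and derive the fabric domain
# assignments needed to isolate the new compute block.
from dataclasses import dataclass, field

@dataclass
class ComputeBlock:
    name: str
    cpus: list[str] = field(default_factory=list)
    gpus: list[str] = field(default_factory=list)
    drives: list[str] = field(default_factory=list)
    nics: list[str] = field(default_factory=list)

def implement(block: ComputeBlock, allocation_table: dict[str, str],
              next_domain: int) -> dict[str, int]:
    """Claim each resource and give the block its own PCIe routing domain."""
    fabric_config: dict[str, int] = {}
    for res in block.cpus + block.gpus + block.drives + block.nics:
        if res in allocation_table:
            raise ValueError(f"{res} already allocated to {allocation_table[res]}")
        allocation_table[res] = block.name
        fabric_config[res] = next_domain     # all members share one domain
    return fabric_config

table: dict[str, str] = {}
cfg = implement(ComputeBlock("machine-160a", ["cpu0"], ["gpu-5"], ["nvme0"]),
                table, next_domain=7)
print(table, cfg)
```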

In addition, as will be discussed below in fig. 6-14, multiple GPUs may be allocated to a single host, and these allocations may be dynamically changed/altered. The management processor 110 may control the assignment of GPUs to various hosts and configure the attributes and operations of the PCIe fabric to achieve this dynamic assignment. Further, a peer-to-peer relationship may be established between the GPUs such that traffic exchanged between the GPUs need not be communicated through the associated host processor, thereby greatly increasing throughput and processing speed.
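A minimal sketch of such a dynamic reassignment follows: moving a GPU to another host's routing domain is modeled as a single update to a domain map standing in for the PCIe switch configuration. The names and structure are illustrative only.

```python
# Sketch of dynamically moving a GPU between hosts by updating the fabric
# partitioning; the domain map stands in for PCIe switch configuration.
def reassign_gpu(domains: dict[str, int], gpu: str, new_host_domain: int) -> None:
    """Detach the GPU from its current domain and join it to the new host's."""
    old = domains.get(gpu)
    domains[gpu] = new_host_domain
    print(f"{gpu}: domain {old} -> {new_host_domain}")

domains = {"cpu0": 7, "cpu1": 8, "gpu-5": 7}
reassign_gpu(domains, "gpu-5", new_host_domain=8)   # now visible to cpu1
```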

FIG. 4 illustrates successive operations, such as operations for a user to monitor or modify an existing cluster or computing unit. An iterative process may be performed where a user may monitor and modify elements and where these elements may be reassigned, aggregated into a cluster or disaggregated from a cluster.

In operation 411, the cluster is operated according to the user-specified configuration, such as discussed in fig. 4. The operations may include executing user operating systems, user applications, user storage processes, graphics operations, and other user operations. During operation, processor 110 receives (412) telemetry from various cluster elements, such as PCIe switch elements, processing elements, storage elements, network interface elements, and other elements, including user software executed by the computing elements. The telemetry data may be provided (413) to a user via the user interface, stored in one or more data structures, and used to prompt further user instructions (402) or to modify operation of the cluster.
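The monitor-and-modify loop of operations 411-413 might be summarized as in the sketch below, where telemetry samples are checked against a threshold and alerts are surfaced back to the user interface. The threshold and element names are assumptions.

```python
# Sketch of the monitor loop: collect telemetry from cluster elements and
# surface anything that crosses an (assumed) utilization threshold.
def collect(elements: dict[str, float], threshold: float = 0.9) -> list[str]:
    """Return alerts for elements reporting high utilization."""
    return [name for name, load in elements.items() if load > threshold]

samples = {"pcie-switch-150a": 0.35, "cpu0": 0.95, "gpu-5": 0.80}
alerts = collect(samples)
print("alerts:", alerts)   # fed back to the user interface / data structures
```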

The systems and operations discussed herein provide for dynamic allocation of computing resources, graphics processing resources, network resources, or storage resources to a compute cluster. The computing resources remain disaggregated from any particular cluster or computing unit until allocated by a user of the system. The management processor may control the operation of the cluster and provide a user interface to a cluster management service implemented by software executed by the management processor. A cluster includes at least one "machine" or computing unit, and a computing unit includes at least one processor element. A computing unit may also include network interface elements, graphics processing elements, and storage elements, although these elements are not required of a computing unit.

Processing resources and other elements (graphics processing, network, storage) may be swapped in and out of computing units and associated clusters on the fly, and these resources may be allocated to other computing units or clusters. In one embodiment, graphics processing resources may be scheduled/orchestrated by a first computing resource/CPU, with the graphics processing state/results then provided to another computing unit/CPU. In another embodiment, when a resource experiences a fault, hang, or overload condition, additional resources may be introduced into the compute units and clusters to supplement the resources.

The processing resource may have a unique identifier assigned thereto for identification by the management processor and for identification on the PCIe fabric. When a processing resource is initialized after it is added to a computing unit, user-provided software (such as an operating system and application programs) may be deployed to the processing resource as needed, and when the processing resource is removed from the computing unit, the user-provided software may be removed from the processing resource. The user software may be deployed from a storage system that the management processor may access for the deployment. Storage resources, such as storage drives, storage devices, and other storage resources, may be allocated and subdivided among the computing units/clusters. These storage resources may span different or similar storage drives or devices, and may have any number of Logical Units (LUNs), logical targets, partitions, or other logical arrangements. These logical arrangements may include one or more LUNs, iSCSI LUNs, NVMe targets, or other logical partitions. An array of storage resources, such as a mirrored, striped, Redundant Array of Independent Disks (RAID) array, may be employed, or other array configurations across storage resources may be employed. Network resources, such as network interface cards, may be shared among the computing units of the cluster using bridging or spanning techniques. Graphics resources, such as GPUs, may be shared among more than one compute unit of a cluster through the use of NT partitions or domain-based partitions by PCIe fabrics and PCIe switches.

FIG. 5 is a block diagram illustrating resource elements of a computing platform 500, such as computing platform 100. The resource elements are coupled through a PCIe fabric provided by fabric module 520. PCIe fabric links 501-507 each provide a PCIe link internal to the enclosure that houses computing platform 500. Cluster PCIe fabric link 508 comprises an external PCIe link for interconnecting the various enclosures that comprise a cluster.

Multiple instances of resource units 510, 530, 540, and 550 are typically provided and may be logically coupled through a PCIe fabric established by fabric module 520. More than one fabric module 520 may be included to implement the PCIe fabric, depending in part on the number of resource units 510, 530, 540, and 550.

The modules of fig. 5 each include one or more PCIe switches (511, 521, 531, 541, 551), one or more power control modules (512, 522, 532, 542, 552) with associated hold circuits (513, 523, 533, 543, 553), power links (518, 528, 538, 548, 558), and internal PCIe links (517, 527, 537, 547, 557). It will be appreciated that variations are possible and one or more of the components of each module may be omitted.

The fabric module 520 provides at least a portion of a peripheral component interconnect express (PCIe) fabric comprising PCIe links 501-508. PCIe links 508 provide external interconnection for devices of the compute/storage cluster, such as to interconnect the various compute/storage rack-mounted modules. PCIe links 501-507 provide internal PCIe communication links and interconnect the one or more PCIe switches 521. The fabric module 520 also provides one or more Ethernet network links 526 via a network switch 525. Various sideband or auxiliary links 527 may also be employed in fabric module 520, such as system management bus (SMBus) links, Joint Test Action Group (JTAG) links, inter-integrated circuit (I2C) links, Serial Peripheral Interface (SPI) links, Controller Area Network (CAN) interfaces, Universal Asynchronous Receiver/Transmitter (UART) interfaces, Universal Serial Bus (USB) interfaces, or any other communication interfaces. Additional communication links, not shown in fig. 5 for clarity, may be included.

Each of links 501-508 may comprise PCIe signaling of various widths or lane counts. PCIe may support multiple bus widths, such as x1, x4, x8, x16, and x32, with each wider bus width comprising additional "lanes" for data transfer. PCIe also supports the transfer of sideband signaling (such as SMBus and JTAG), as well as associated clock, power, and bootstrap signaling, among others. For example, each of links 501-508 may comprise a PCIe link with four lanes, namely an "x4" PCIe link, a PCIe link with eight lanes, namely an "x8" PCIe link, or a PCIe link with 16 lanes, namely an "x16" PCIe link, among other lane widths.
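
As a worked example of the lane widths mentioned above, the short sketch below computes approximate one-direction throughput for a given PCIe generation and lane count, ignoring packet and protocol overhead; the figures reflect only the raw line rate after encoding.

```python
# Approximate per-direction PCIe throughput for common lane widths, ignoring
# packet/protocol overhead. Rates are GB/s per lane after line encoding
# (Gen2 uses 8b/10b; Gen3 uses 128b/130b).
PER_LANE_GBPS = {
    "gen2": 5.0 * 8 / 10 / 8,      # 5 GT/s * 8b/10b   = 0.5   GB/s per lane
    "gen3": 8.0 * 128 / 130 / 8,   # 8 GT/s * 128/130 ~= 0.985 GB/s per lane
}

def link_throughput_gbps(generation: str, lanes: int) -> float:
    """Approximate one-direction throughput of an xN PCIe link."""
    return PER_LANE_GBPS[generation] * lanes

# Example: an x16 Gen3 link provides roughly 15.8 GB/s in each direction.
print(round(link_throughput_gbps("gen3", 16), 1))
```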

A power control module (512, 522, 532, 542, 552) may be included in each module. The power control module receives source input power over an associated input power link (519, 529, 539, 549, 559) and converts/conditions the input power for use by elements of the associated module. The power control module distributes power to each element of the associated module through the associated power link. The power control module includes circuitry to selectively and individually provide power to any of the elements of the associated module. The power control module may receive control instructions from the optional control processor over an associated PCIe link or sideband link (not shown in fig. 5 for clarity). In some embodiments, the operation of the power control module is provided by the processing elements discussed with respect to control processor 524. The power control module may include various power supply electronics, such as power regulators, boost converters, buck-boost converters, power factor correction circuits, and other power electronics. Various magnetic, solid-state and other electronic components are typically sized according to the maximum power consumption of a particular application, and these components are attached to an associated circuit board.

The holding circuit (513, 523, 533, 543, 553) comprises an energy storage device for storing power received over the power link for use during a power interruption event, such as a loss of input power. The holding circuit may include a capacitive storage device, such as a capacitor array, among other energy storage devices. Excess or remaining retained power may be retained for future use, bled into dummy loads, or redistributed to other devices over a PCIe power link or other power link.

Each PCIe switch (511, 521, 531, 541, 551) comprises one or more PCIe crosspoint switches that logically interconnect the associated PCIe links based at least on the traffic carried by those links. Each PCIe switch establishes switched connections between any PCIe interfaces handled by that PCIe switch. In some embodiments, ones of the PCIe switches comprise a PLX/Broadcom/Avago PEX8796 24-port, 96-lane PCIe switch chip, a PEX8725 10-port, 24-lane PCIe switch chip, a PEX97xx chip, a PEX9797 chip, or other PEX87xx/PEX97xx chips. In some embodiments, redundancy is established via one or more of the PCIe switches, such as by having primary and secondary/standby PCIe switches among the PCIe switches. Failover from a primary PCIe switch to a secondary/standby PCIe switch may be handled by at least control processor 524. In some embodiments, redundant PCIe links to different PCIe switches may be employed to provide primary and secondary functions in the different PCIe switches. In other embodiments, redundant links to the same PCIe switch may be employed to provide primary and secondary functions in the same PCIe switch.
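
A minimal sketch of the primary/standby failover described above, as a control processor might manage it, follows. The SwitchPair class and its health check are hypothetical stand-ins for status reads and re-routing performed over a management or sideband link.

```python
# Hypothetical sketch of primary/standby PCIe switch failover handled by a
# control processor; the health check and re-routing step are stubs.
class SwitchPair:
    def __init__(self, primary_id: str, standby_id: str):
        self.primary = primary_id
        self.standby = standby_id
        self.active = primary_id

    def healthy(self, switch_id: str) -> bool:
        # Placeholder: would read status registers or a heartbeat over a
        # management link to determine switch health.
        return True

    def check_and_failover(self) -> str:
        """Fail over to the other switch if the active switch is unhealthy."""
        if not self.healthy(self.active):
            # Re-route affected PCIe links/ports to the other switch.
            self.active = self.standby if self.active == self.primary else self.primary
        return self.active

# Example: periodic health check of a redundant switch pair.
pair = SwitchPair("pex-primary", "pex-standby")
active_switch = pair.check_and_failover()
```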

PCIe switches 521 each include a cluster interconnect interface 508, which is employed to interconnect additional modules of the storage system in additional enclosures. PCIe interconnect is provided to external systems (such as other storage systems) through associated external connectors and external cabling. These connections may be PCIe links provided by any of the included PCIe switches, as well as by other PCIe switches not shown, for interconnecting other modules of the storage system via PCIe links. The PCIe links used for the cluster interconnect may terminate at external connectors, such as mini Serial Attached SCSI (SAS) HD connectors, zSFP+ interconnect, or Quad Small Form Factor Pluggable (QSFFP) or QSFP/QSFP+ jacks, which are employed to carry PCIe signaling over associated cabling, such as mini-SAS or QSFFP cabling. In further embodiments, MiniSAS HD cables are employed that drive 12 Gb/s, as opposed to 6 Gb/s for standard SAS cables; 12 Gb/s can support at least PCIe Generation 3.

PCIe links 501-508 may also carry NVMe (NVM Express) traffic issued by a host processor or host system. NVMe (NVM Express) is an interface standard for mass storage devices, such as hard disk drives and solid-state memory devices. NVMe can supplant the Serial ATA (SATA) interface for interfacing with mass storage devices in personal computer and server environments. However, these NVMe interfaces are limited to a one-to-one host-drive relationship, similar to SATA devices. In the embodiments discussed herein, a PCIe interface may be employed to transport NVMe traffic, and a multi-drive system comprising a number of storage drives is presented as one or more NVMe virtual logical unit numbers (VLUNs) over the PCIe interface.
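
The sketch below illustrates the idea of presenting several physical drives as a single NVMe VLUN by translating VLUN logical block addresses to member-drive addresses in a simple striped layout. The stripe size, drive names, and class structure are illustrative assumptions, not a definitive implementation.

```python
# Sketch of a striped VLUN address translation; layout and names are invented.
from typing import List, Tuple

class StripedVlun:
    def __init__(self, drive_ids: List[str], stripe_blocks: int = 256):
        self.drives = drive_ids
        self.stripe = stripe_blocks  # blocks per stripe segment

    def map_lba(self, vlun_lba: int) -> Tuple[str, int]:
        """Translate a VLUN logical block address to (member drive, drive LBA)."""
        segment, offset = divmod(vlun_lba, self.stripe)
        drive_index = segment % len(self.drives)
        drive_lba = (segment // len(self.drives)) * self.stripe + offset
        return self.drives[drive_index], drive_lba

# Example: block 1000 of the VLUN lands on one of four member drives.
vlun = StripedVlun(["nvme0", "nvme1", "nvme2", "nvme3"])
print(vlun.map_lba(1000))
```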

Each resource unit of fig. 5 also includes an associated resource element. The storage module 510 includes one or more storage drives 514. The computing module 530 includes one or more Central Processing Units (CPUs) 534, a storage system 535, and software 536. Graphics module 540 includes one or more Graphics Processing Units (GPUs) 544. The network module 550 includes one or more Network Interface Cards (NICs) 554. It should be understood that other elements may be included in each resource unit, including memory devices, auxiliary processing devices, support circuitry, circuit boards, connectors, module housings/chassis, and other elements.

Figs. 6A and 6B illustrate example graphics processing configurations. Graphics modules 640 and 650 represent two different styles of graphics module. The first style 640 includes a GPU 641 as well as a CPU 642 and a PCIe root complex 643, sometimes referred to as a PCIe host. The second style 650 includes a GPU 651 that serves as a PCIe endpoint 653, sometimes referred to as a PCIe device. Each of modules 640 and 650 may be included in a carrier (such as a rack-mount assembly). For example, module 640 is included in assembly 610 and module 650 is included in assembly 620. These rack-mount assemblies may comprise JBOD enclosures, which are typically used to carry storage drives, hard disk drives, or solid-state drives. Example rack-mount physical configurations are shown in enclosure 190 of fig. 1 and in figs. 8-9 below.

FIG. 6A illustrates a first example graphics processing configuration. A plurality of graphics modules 640, each including a GPU 641, a CPU 642, and a PCIe root complex 643, may be coupled through a PCIe switch 630 to a controller, such as a CPU 531 in a compute module 530. PCIe switch 630 may include isolation elements 631, such as non-transparent ports, logical PCIe domains, port isolation, or a Tunneled Window Connection (TWC) mechanism, that allow multiple PCIe hosts to communicate over a PCIe interface. Typically, only one "root complex" is allowed on a single PCIe system bus. However, using some form of PCIe interface isolation among the various devices, more than one root complex may be included on an enhanced PCIe fabric as discussed herein.

In FIG. 6A, each GPU 641 is accompanied by a CPU 642 and an associated PCIe root complex 643. Each CPU 531 is accompanied by an associated PCIe root complex 532. To advantageously allow these PCIe root complex entities to communicate with the controlling host CPU 531, isolation elements 631 are included in PCIe switch circuitry 630. Thus, compute module 530 and each graphics module 640 may include their own root complex structures. Further, when employed in a separate enclosure, graphics modules 640 may be included on carriers or modular chassis that are insertable into and removable from the enclosure. Compute module 530 may dynamically add, remove, and control a large number of graphics modules with root complex elements in this manner. DMA transfers may be used to transfer data between compute module 530 and each individual graphics module 640. Thus, a cluster of GPUs can be created and controlled by a single compute module or host CPU. This host CPU can orchestrate tasks and graphics/data processing for each of the graphics modules and GPUs. Additional PCIe switch circuitry may be added to scale up the number of GPUs while maintaining isolation among the root complexes for DMA transfer of data/control between the host CPU and each individual GPU.
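
The following sketch illustrates, at a high level, how a single host CPU might orchestrate work across a dynamically sized set of graphics modules; the dma_transfer() stub stands in for DMA transfers over the isolated PCIe fabric, and all names are hypothetical.

```python
# Hypothetical orchestration sketch: one host CPU dispatching work items
# round-robin to a set of GPU modules reachable over the PCIe fabric.
from itertools import cycle

def dma_transfer(gpu_id: str, payload: bytes) -> None:
    # Placeholder for a DMA write of job data to the selected graphics module.
    pass

def dispatch_jobs(gpu_ids, jobs):
    """Assign each job to the next GPU in round-robin order."""
    assignments = []
    gpu_cycle = cycle(gpu_ids)
    for job in jobs:
        gpu = next(gpu_cycle)
        dma_transfer(gpu, job)
        assignments.append((gpu, job))
    return assignments

# Example: six jobs spread across three graphics modules.
print(dispatch_jobs(["gpu-a", "gpu-b", "gpu-c"], [b"j%d" % i for i in range(6)]))
```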

FIG. 6B illustrates a second example graphics processing configuration. A plurality of graphics modules 650, each including at least a GPU 651 and a PCIe endpoint element 653, may be coupled through a PCIe switch 633 to a controller, such as compute module 530. In fig. 6B, each GPU 651 is optionally accompanied by a CPU 652, and graphics module 650 acts as a PCIe endpoint or device that does not have a root complex. The compute modules 530 may each include a root complex structure 532. When employed in a separate enclosure, graphics modules 650 may be included on carriers or modular chassis that are insertable into and removable from the enclosure. Compute module 530 may dynamically add, remove, and control a large number of graphics modules as endpoint devices in this manner. Thus, clusters of GPUs may be created and controlled by a single compute module or host CPU. This host CPU can orchestrate tasks and graphics/data processing for each of the graphics modules and GPUs. Additional PCIe switch circuitry may be added to scale up the number of GPUs.

Fig. 7 is a block diagram illustrating an example physical configuration of a storage system 700. Fig. 7 includes a graphics module 540 housed in the same enclosure as the compute and other modules. Figs. 8 and 9 show graphics modules that may be included in an enclosure separate from enclosure 701, such as a JBOD enclosure typically configured to hold disk drives. Enclosure 701, as well as the enclosures of figs. 8 and 9, may be communicatively coupled over one or more external PCIe links, such as links provided by fabric module 520.

Fig. 7 is also a block diagram illustrating the various modules of the previous figures as related to a midplane. The elements of fig. 7 are shown physically mated to a midplane assembly. Midplane assembly 740 includes circuit board elements and a plurality of physical connectors for mating with any associated interposer assemblies 715, storage sub-enclosures 710, fabric modules 520, compute modules 530, graphics modules 540, network modules 550, or power supply modules 750. Midplane 740 comprises one or more printed circuit boards, connectors, physical support members, chassis elements, structural elements, and associated links, as metallic traces or optical links, for interconnecting the various elements of fig. 7. Midplane 740 may function as a backplane, but instead of having sleds or modules mate on only one side, as in single-ended backplane embodiments, midplane 740 has sleds or modules mate on at least two sides, namely a front and a rear. Elements of fig. 7 may correspond to similar elements of the figures herein, such as computing platform 100, although variations are possible.

Fig. 7 shows a number of elements included in a 1U enclosure 701. The enclosure may instead be of any multiple of a standardized computer rack height, such as 1U, 2U, 3U, 4U, 5U, 6U, 7U, and the like, and may include associated chassis, physical supports, cooling systems, mounting features, cases, and other enclosure elements. Typically, each sled or module will fit into associated slot or groove features included in a chassis portion of enclosure 701 to slide into a predetermined slot and guide one or more connectors associated with each module to mate with one or more associated connectors on midplane 740. System 700 enables hot-swapping of any of the modules or sleds and may include other features, such as power lights, activity indicators, external administration interfaces, and the like.

The storage modules 510 each have an associated connector 716 that mates into a mating connector of an associated interposer assembly 715. Each interposer assembly 715 has associated connectors 781 that mate with one or more connectors on midplane 740. In this embodiment, up to eight storage modules 510 may be inserted into a single interposer assembly 715, which then mates to a plurality of connectors on midplane 740. These connectors may be of a common or shared style/type with connector 783 used by the compute modules 530. Additionally, each set of storage modules 510 and interposer assembly 715 can be included in a sub-assembly or sub-enclosure 710, which is insertable into midplane 740 in a modular fashion. The compute modules 530 each have an associated connector 783, which may be a similar type of connector as that of interposer assembly 715. In some embodiments, such as in the examples above, the compute modules 530 each plug into more than one mating connector on midplane 740.

Fabric module 520 is coupled to midplane 740 via connector 782 and provides cluster-wide access to the storage and processing components of system 700 through cluster interconnect link 793. The fabric module 520 provides control plane access between controller modules of other 1U systems through control plane link 792. In operation, the fabric modules 520 are each communicatively coupled with the compute module 530, the graphics module 540, and the storage module 510 over a PCIe mesh via links 782 and midplane 740, such as shown in fig. 7.

Graphics module 540 includes one or more Graphics Processing Units (GPUs) and any associated support circuitry, memory elements, and general purpose processing elements. Graphics module 540 is coupled to midplane 740 via connector 784.

The network module 550 includes one or more Network Interface Card (NIC) elements that may further include transceivers, transformers, isolation circuitry, buffers, and the like. Network module 550 may include gigabit ethernet interface circuitry that may carry ethernet traffic and any associated Internet Protocol (IP) and Transmission Control Protocol (TCP) traffic, as well as other network communication formats and protocols. The network module 550 is coupled to the midplane 740 via a connector 785.

The cluster interconnect link 793 may comprise PCIe links or other links and connectors. The PCIe links used for external interconnect may terminate at external connectors, such as mini-SAS or mini-SAS HD jacks or connectors, which are employed to carry PCIe signaling over mini-SAS cabling. In further embodiments, mini-SAS HD cables are employed that drive 12 Gb/s, as opposed to 6 Gb/s for standard SAS cables; 12 Gb/s can support PCIe Generation 3. Quad (4-lane) small form factor pluggable (QSFP or QSFP+) connectors or jacks may also be employed for carrying PCIe signaling.

Control plane link 792 may comprise an Ethernet link for carrying control plane communications. The associated Ethernet jacks may support 10 Gigabit Ethernet (10GbE), among other throughputs. Additional external interfaces may include PCIe connectors, Fibre Channel connectors, management console connectors, sideband interfaces (such as USB or RS-232), video interfaces (such as Video Graphics Array (VGA), High-Definition Multimedia Interface (HDMI), or Digital Visual Interface (DVI)), and others, such as keyboard/mouse connectors.

External link 795 may comprise a network link, which may comprise Ethernet, TCP/IP, InfiniBand, iSCSI, or other external interfaces. External link 795 may comprise links for communicating with external systems, such as host systems, management systems, end-user devices, Internet systems, packet networks, servers, or other computing systems, including other enclosures similar to system 700. External link 795 may comprise Quad Small Form Factor Pluggable (QSFFP) or quad (4-lane) small form factor pluggable (QSFP or QSFP+) jacks, or zSFP+ interconnect, carrying at least 40GbE signaling.

In some embodiments, the system 700 includes case or enclosure elements, a chassis, and midplane assemblies that can accommodate flexible configurations and arrangements of modules and associated circuit cards. Although fig. 7 illustrates storage modules and controller modules mating on a first side of midplane assembly 740 and various other modules mating on a second side of midplane assembly 740, it should be understood that other configurations are possible. The system 700 may include a chassis to accommodate any of the following configurations, either in a front-loading or rear-loading configuration: storage modules that each contain multiple SSDs; modules containing HHHL cards (half-height, half-length PCIe cards) or FHHL cards (full-height, half-length PCIe cards), which may include graphics cards or Graphics Processing Units (GPUs), PCIe storage cards, PCIe network adapters, or host bus adapters; modules with full-length PCIe cards that include controller modules, which may comprise nVIDIA Tesla, nVIDIA Jetson, or Intel Phi processor cards, among other processing or graphics processors; modules comprising 2.5-inch PCIe SSDs; or cross-connect modules, interposer modules, and control elements.

Additionally, power and associated power control signaling for the various modules of system 700 is provided by one or more power supply modules 750 over associated links 781, which may comprise links of one or more different voltage levels (such as +12 VDC or +5 VDC). Although the power supply modules 750 are shown in fig. 7 as included in system 700, it should be understood that the power supply modules 750 may instead be included in a separate enclosure (such as a separate 1U enclosure). Each power supply module 750 also includes a power link 790 for receiving power from a power source, such as AC or DC input power.

Additionally, power retention circuitry may be included in retention modules 751, which can deliver retention power over link 780 in response to a loss of power on link 790 or a failure of the power supply modules 750. Power retention circuitry may also be included on each sled or module. This retention circuitry may be used to provide temporary power to the associated sled or module during power interruptions, such as when main input or system power is lost from a power source. Additionally, during use of retention power, processing portions of each sled or module may be employed to selectively power down portions of that sled or module according to usage statistics, among other considerations. This retention circuitry may provide sufficient power to commit in-flight write data during power interruption or power loss events. These power interruption and power loss events may include loss of power from a power source, or removal of a sled or module from an associated socket or connector on midplane 740. The retention circuitry may include capacitor arrays, super-capacitors, ultra-capacitors, batteries, fuel cells, or other energy storage components, along with any associated power control, conversion, regulation, and monitoring circuitry.

Fig. 8 is a block diagram illustrating an example physical configuration of a graphics module carrier enclosure. In this embodiment, a JBOD assembly 800 is employed that has a plurality of slots or bays provided by an enclosure 801, which includes a chassis and other structural/enclosure components. The bays in JBOD assembly 800 are normally configured to hold storage drives or disk drives, such as HDDs, SSDs, or other drives, and such drives may still be inserted into the bays or slots of enclosure 801. A mixture of disk drive modules, graphics modules, and network modules (550) may be included. JBOD assembly 800 may receive input power over power link 790. An optional power supply 750, fabric module 520, and retention circuitry 751 are shown in fig. 8.

JBOD carriers 802 may be employed to hold graphics modules 650 or storage drives in the various bays of JBOD assembly 800. In fig. 8, each graphics module occupies only one slot or bay. Fig. 8 shows 24 graphics modules 650, one included in each slot/bay. The graphics modules 650 may each comprise a carrier or sled that carries the GPU, CPU, and PCIe circuitry assembled into a removable module. Graphics modules 650 may also comprise carrier circuit boards and connectors to ensure that each GPU, CPU, and the PCIe interface circuitry can physically, electrically, and logically mate into the associated bay. In some embodiments, the graphics modules 650 of fig. 8 each comprise an nVIDIA Jetson module that fits into a carrier configured for insertion into a single bay of JBOD enclosure 800. A backplane assembly 810 is included that comprises connectors, interconnect, and PCIe switch circuitry to couple the slots/bays, over external control plane link 792 and external PCIe link 793, to a PCIe fabric provided by another enclosure, such as enclosure 701.

JBOD carriers 802 connect to backplane assembly 810 via one or more associated connectors for each carrier. The backplane assembly 810 may include associated mating connectors. These connectors on each of the JBOD carriers 802 may include U.2 drive connectors, also known as SFF-8639 connectors, which can carry PCIe or NVMe signaling. The backplane assembly 810 can then route this signaling to fabric module 520 and the associated PCIe switch circuitry of JBOD assembly 800 for communicatively coupling the modules to the PCIe fabric. Thus, when one or more graphics processing modules (such as graphics modules 650) are employed, the graphics processing modules are inserted into bays normally reserved for storage drives that couple through the U.2 drive connectors. These U.2 drive connectors may carry per-bay x4 PCIe interfaces.

In another example carrier configuration, fig. 9 is presented. Fig. 9 is a block diagram illustrating another example physical configuration of a graphics module carrier enclosure. In this embodiment, a JBOD assembly 900 is employed that has a plurality of slots or bays provided by an enclosure 901, which includes a chassis and other structural/enclosure components. The bays in JBOD assembly 900 are normally configured to hold storage drives or disk drives, such as HDDs, SSDs, or other drives, and such drives may still be inserted into the bays or slots of enclosure 901. A mixture of disk drive modules, graphics modules, and network modules (550) may be included. JBOD assembly 900 may receive input power over power link 790. An optional power supply 750, fabric module 520, and retention circuitry 751 are shown in fig. 9.

JBOD carriers 902 may be employed to hold graphics modules 640 or storage drives in the various bays of JBOD assembly 900. In fig. 9, each graphics module occupies four (4) slots or bays. Fig. 9 shows six graphics modules 640, each included in an associated spanned set of slots/bays. The graphics modules 640 may each comprise a carrier or sled that carries the GPU, CPU, and PCIe circuitry assembled into a removable module. Graphics modules 640 may also comprise carrier circuit boards and connectors to ensure that each GPU, CPU, and the PCIe interface circuitry can physically, electrically, and logically mate into the associated bays. In some embodiments, graphics module 640 comprises an nVIDIA Tesla module that fits into a carrier configured for insertion into a four-bay span of JBOD enclosure 900. A backplane assembly 910 is included that comprises connectors, interconnect, and PCIe switch circuitry to couple the slots/bays, over external control plane link 792 and external PCIe link 793, to a PCIe fabric provided by another enclosure (such as enclosure 701).

JBOD carriers 902 connect to backplane assembly 910 via more than one associated connector for each carrier. The backplane assembly 910 may include associated mating connectors. These individual connectors on each of the JBOD carriers 902 may include individual U.2 drive connectors, also known as SFF-8639 connectors, which can carry PCIe or NVMe signaling. The backplane assembly 910 can then route this signaling to fabric module 520 or the associated PCIe switch circuitry of JBOD assembly 900 for communicatively coupling the modules to the PCIe fabric. When one or more graphics processing modules (such as graphics modules 640) are installed, the graphics processing modules are each inserted to span more than one bay, including connecting to more than one bay connector and more than one per-bay PCIe interface. These individual bays are normally reserved for storage drives that couple through individual U.2 drive connectors and per-bay x4 PCIe interfaces. In some embodiments, a combination of graphics modules 640 that span more than one bay and graphics modules 650 that use only one bay may be employed.

Fig. 9 is similar to fig. 8, except that graphics modules 640 use a larger chassis footprint to advantageously accommodate larger graphics module power or PCIe interface requirements. In fig. 8, the power supplied to a single bay/slot is sufficient to power the associated graphics module 650. In fig. 9, however, the greater power requirements of graphics modules 640 preclude the use of a single slot/bay; instead, a single module/carrier spans four (4) bays to provide the approximately 300 watts required by each graphics processing module 640. Power may be drawn from the 12-volt and 5-volt supplies of each spanned bay to establish the 300 watts. A single modular sled or carrier may physically span the multiple slot/bay connectors so that the power and signaling of those bays can be employed. Further, the PCIe signaling may span multiple bays, and a wider PCIe interface may be employed for each graphics module 640. In one embodiment, each graphics module 650 has an x4 PCIe interface, while each graphics module 640 has an x16 PCIe interface. Other PCIe lane widths are possible. In other embodiments, a number of bays other than four may be spanned.
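
The arithmetic behind the spanned-bay power budget can be illustrated as follows; the per-rail current figures are assumed values chosen only to show how four bays can combine to roughly 300 watts.

```python
# Worked example of the spanned-bay power budget. The per-rail current values
# below are hypothetical and chosen only for illustration.
def bay_power_w(v12_amps: float, v5_amps: float) -> float:
    return 12.0 * v12_amps + 5.0 * v5_amps

def spanned_power_w(bays: int, v12_amps: float, v5_amps: float) -> float:
    """Combined budget when one carrier spans several bays."""
    return bays * bay_power_w(v12_amps, v5_amps)

# Example: if each bay can supply ~4.58 A at 12 V and ~4.0 A at 5 V (assumed
# values), four spanned bays provide about 300 W for one graphics module.
print(round(spanned_power_w(4, 4.58, 4.0)))
```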

In figs. 8 and 9, PCIe signaling, other signaling, and power are connected on the "back" side via backplane assemblies, such as assemblies 810 and 910. This "back" side comprises the inner portion of each carrier that inserts into the corresponding bay or bays. However, additional communicative coupling may be provided for each graphics processing module on the "front" side of the modules. The graphics modules may be coupled via front-side point-to-point or mesh communication links 920 that span more than one graphics module. In some embodiments, an NVLink interface, InfiniBand, point-to-point PCIe links, or other high-speed serial short-range interfaces are employed to couple two or more graphics modules together for further communication between those graphics modules.

FIG. 10 illustrates components of computing platform 1000 in one embodiment. Computing platform 1000 includes several elements communicatively coupled over a PCIe fabric formed by various PCIe links 1051-1053 and one or more PCIe switch circuits 1050. A host processor or Central Processing Unit (CPU) may be coupled to this PCIe fabric for communicating with various elements, such as those discussed in the preceding figures. In fig. 10, host CPU 1010 and GPUs 1060-1063 are coupled through this PCIe fabric. GPUs 1060-1063 each include graphics processing circuitry and PCIe interface circuitry, and are coupled to associated memory devices 1065 by corresponding links 1058a-1058n and 1059a-1059n.

In fig. 10, a management processor (CPU) 1020 may establish a peer-to-peer arrangement between GPUs over the PCIe fabric by providing at least an isolation function 1080 in the PCIe fabric, where isolation function 1080 is configured to isolate a device PCIe address domain associated with the GPUs from a local PCIe address domain associated with a host CPU 1010 that initiates the peer-to-peer arrangement among the GPUs. Specifically, host CPU 1010 may wish to initiate a peer-to-peer arrangement (such as a peer-to-peer communication link) between two or more GPUs in platform 1000. This peer-to-peer arrangement enables the GPUs to communicate with each other more directly, bypassing communication through host CPU 1010.

For example, without a peer-to-peer arrangement, traffic between GPUs is typically routed through a host processor. This is shown in fig. 10 as communication link 1001, which indicates communications between GPU 1060 and GPU 1061 routed through PCIe links 1051 and 1056, PCIe switch 1050, and host CPU 1010. Because the traffic is handled by many links, switch circuits, and processing elements, this arrangement can incur higher latency and reduced bandwidth. Advantageously, isolation function 1080 may instead be established within the PCIe fabric, allowing GPU 1060 to communicate more directly with GPU 1061 while bypassing link 1051 and host CPU 1010. Lower-latency and higher-bandwidth communications result. This peer-to-peer arrangement is shown in fig. 10 as peer-to-peer communication link 1002.

The management CPU 1020 may include control circuitry, processing circuitry, and other processing elements. Management CPU 1020 may include elements of management processor 110 of figs. 1-2 or management processor 300 of fig. 3. In some embodiments, management CPU 1020 may be coupled to the PCIe fabric or to management/control ports on the various PCIe switch circuits, or incorporated into the PCIe switch circuitry or a control portion thereof. In fig. 10, management CPU 1020 establishes the isolation function and facilitates establishment of peer-to-peer link 1002. Further discussion of the elements of the peer-to-peer arrangement and of the operation of management CPU 1020 and associated circuitry is found in figs. 11-14. Management CPU 1020 may communicate with PCIe switch 1050 over management links 1054 and 1055. These management links comprise PCIe links, such as x1 or x4 PCIe links, and may instead comprise I2C links, network links, or other communication links.

FIG. 11 illustrates components of computing platform 1100 in one implementation. Platform 1100 provides a more detailed implementation example for the elements of fig. 10, although variations are possible. Platform 1100 includes host processor 1110, memory 1111, control processor 1120, PCIe switch 1150, and GPUs 1161-1162. Host processor 1110 and GPUs 1161-1162 are communicatively coupled through switch circuitry 1159 in PCIe switch 1150, which, along with PCIe links 1151-1155, forms part of a PCIe fabric. Control processor 1120 also communicates with PCIe switch 1150 over a PCIe link, namely link 1156, although this link typically comprises a control port, management link, or other link functionally dedicated to controlling the operation of PCIe switch 1150. Other embodiments couple control processor 1120 over the PCIe fabric itself.

In FIG. 11, two or more PCIe addressing domains are established. These address domains (1181, 1182) are established as part of the isolation function to logically isolate the PCIe traffic of host processor 1110 from GPUs 1161-1162. In addition, synthetic PCIe devices are created by control processor 1120 to further provide the isolation function between the PCIe address domains. This isolation function ensures isolation of host processor 1110 from GPUs 1161-1162 while also enabling an enhanced peer-to-peer arrangement between the GPUs.

To achieve this isolation function, various elements of fig. 11, such as those indicated above, are employed. Isolation function 1121 comprises address traps 1171-1173. These address traps include an address monitoring portion and an address translation portion. The address monitoring portion monitors PCIe destination addresses in PCIe frames or other PCIe traffic to determine whether one or more affected addresses are encountered. When such addresses are encountered, the address trap translates the original PCIe destination address into a modified PCIe destination address and transfers the PCIe traffic for delivery over the PCIe fabric to the host or device corresponding to the modified PCIe destination address. Address traps 1171-1173 may include one or more address translation tables or other data structures, such as example table 1175, that map translations between incoming destination addresses and outgoing destination addresses so that PCIe addresses are modified accordingly. Table 1175 contains entries that translate addressing between synthetic devices in the local address space and physical/real devices in the global/device address space.
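
A minimal model of the address trap behavior described above is sketched below, in the spirit of table 1175. The window sizes and addresses are invented for illustration; the point is the range check and base-plus-offset rewrite from a synthetic device's local window to the real device's global window.

```python
# Minimal model of an address trap translation table; addresses and window
# sizes are illustrative only.
TRAP_TABLE = [
    # (local_base, size, global_base)
    (0x9000_0000, 0x100_0000, 0x4_2000_0000),  # synthetic device 1141 -> GPU 1161
    (0x9100_0000, 0x100_0000, 0x4_3000_0000),  # synthetic device 1142 -> GPU 1162
]

def translate(dest_addr: int) -> int:
    """Return the modified destination address, or the original if untrapped."""
    for local_base, size, global_base in TRAP_TABLE:
        if local_base <= dest_addr < local_base + size:
            return global_base + (dest_addr - local_base)
    return dest_addr

# Example: an access into the first synthetic device's window is redirected.
print(hex(translate(0x9000_1040)))   # -> 0x420001040
```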

The synthetic devices 1141-1142 comprise logical PCIe devices that represent corresponding ones of GPUs 1161-1162. Synthetic device 1141 represents GPU 1161, and synthetic device 1142 represents GPU 1162. As will be discussed in further detail below, when host processor 1110 issues PCIe traffic for delivery to GPUs 1161-1162, this traffic is actually addressed for delivery to synthetic devices 1141-1142. Specifically, for any PCIe traffic issued by host processor 1110 for GPUs 1161-1162, the device driver of host processor 1110 uses destination addressing corresponding to the associated synthetic device 1141-1142. This traffic is carried over the PCIe fabric and switch circuitry 1159. Address traps 1171-1172 intercept this traffic, which carries the addressing of synthetic devices 1141-1142, and re-route it for delivery to the addressing associated with GPUs 1161-1162. Similarly, PCIe traffic issued by GPUs 1161-1162 is addressed by the GPUs for delivery to host processor 1110. In this manner, each of GPUs 1161-1162 may operate with respect to host processor 1110 using PCIe addressing that corresponds to synthetic devices 1141-1142.

The host processor 1110 and the synthetic devices 1141-1142 are included in a first PCIe address domain, namely the 'local' address space 1181 of host processor 1110. Control processor 1120 and GPUs 1161-1162 are included in a second PCIe address domain, namely global address space 1182. The naming of the address spaces is merely exemplary, and other naming schemes may be employed. The global address space 1182 may be used by control processor 1120 to provision or de-provision devices (such as GPUs) for use by various host processors. Thus, any number of GPUs may be communicatively coupled to a single host processor, and these GPUs may be dynamically added and removed for use by any given host processor.

It should be noted that the synthetic devices 1141-1142 each have a corresponding base address register (BARs 1143 and 1144) and corresponding device addresses 1145 and 1146 in the local addressing (LA) domain. In addition, GPUs 1161-1162 each have a corresponding base address register (BARs 1163 and 1164) and corresponding device addresses 1165 and 1166 in the global addressing (GA) domain. The LA and GA addresses correspond to the addressing used to reach the associated synthetic or actual device.

To further illustrate the operation of the various addressing domains, FIG. 12 is presented. FIG. 12 illustrates components of computing platform 1200 in one embodiment. Platform 1200 includes host processor 1210, control processor 1220, and host processor 1230. Each host processor is communicatively coupled to a PCIe fabric, such as any of those PCIe fabrics discussed herein. Further, control processor 1220 may be coupled to the PCIe fabric or to management ports on various PCIe switch circuitry, or incorporated into PCIe switch circuitry or a control portion thereof.

FIG. 12 is a schematic representation of PCIe address spaces and the associated domains formed among those PCIe address spaces. Each host processor has a corresponding "local" PCIe address space, such as the PCIe address space corresponding to the associated root complex. Each separate PCIe address space may comprise the entire 64-bit address space of the PCIe specification, or a portion thereof. In addition, a further PCIe address space/domain is associated with control processor 1220, referred to herein as the 'global' or 'device' PCIe address space.

The isolation functions with the associated address traps form links between the synthetic devices and the actual devices. A synthetic device represents an actual device in a PCIe address space other than the one in which the device itself resides. In fig. 12, the various devices (such as GPUs or any other PCIe devices) are configured to reside within the global address space controlled by control processor 1220. In fig. 12, the actual devices are represented by the symbol 'D'. The various synthetic devices, represented in fig. 12 by the symbol 'S', are configured to reside in the associated local address spaces of the corresponding host processors.

In FIG. 12, four address traps are shown, namely address traps 1271-1274. The address traps are formed to couple the various synthetic devices to the various physical/actual devices. These address traps (such as those discussed in fig. 11) are configured to intercept PCIe traffic directed to a synthetic device and forward that traffic to the corresponding physical device. Likewise, the address traps are configured to intercept PCIe traffic directed to a physical device and forward that traffic to the corresponding synthetic device. Address translation is performed to alter the PCIe addresses of the PCIe traffic corresponding to the various address traps.

Advantageously, any host processor with a corresponding local PCIe address space may be dynamically configured to communicate with any PCIe device residing in the global PCIe address space, and vice versa. Devices may be added and removed while the host processors are operating, which can support scaling up or scaling down the available resources as devices are added or removed. When GPUs are employed as the devices, GPU resources may be added to or removed from any host processor on the fly. Hot-plugging of PCIe devices is thus enhanced, and devices installed into rack-mount assemblies that include tens of GPUs can be intelligently assigned and re-assigned to host processors as needed. Synthetic devices may be created and destroyed as needed, or a pool of synthetic devices may be provisioned for a particular host, with appropriate addressing configured for the synthetic devices so that the corresponding address trap functions route traffic to the desired GPU/device. Control processor 1220 handles the setup of the synthetic devices, the establishment of the address traps, and the provisioning/de-provisioning of the devices/GPUs.

Turning now to exemplary operation of the elements of fig. 10-12, fig. 13 is presented. Fig. 13 is a flowchart illustrating example operations of a computing platform, such as computing platform 1000, 1100, or 1200. The operations of fig. 13 are discussed in the context of the elements of fig. 11. However, it should be understood that elements of any of the figures herein may be employed. Fig. 13 also discusses the operation of a peer-to-peer arrangement between GPUs or other PCIe devices, such as seen through peer-to-peer link 1002 in fig. 10 or peer-to-peer link 1104 in fig. 11. Peer-to-peer links allow data or other information to be transferred more directly between PCIe devices (such as GPUs) for enhanced processing, increased data bandwidth, and lower latency.

In fig. 13, a PCIe fabric is provided (1301) to couple GPUs and one or more host processors. In fig. 11, this PCIe fabric may be formed by PCIe switch 1150 and PCIe links 1151-1155, as well as any additional PCIe switches coupled over further PCIe links. At this point, however, the GPUs and host processors are merely electrically coupled to the PCIe fabric and have not yet been configured to communicate. A host processor (such as host processor 1110) may wish to communicate with one or more GPU devices and, in addition, to allow those GPU devices to communicate among themselves through a peer-to-peer arrangement that enhances GPU processing performance. Control processor 1120 may establish (1302) the peer-to-peer arrangement between the GPUs over the PCIe fabric. Once established, control processor 1120 may dynamically add (1303) GPUs to the peer-to-peer arrangement and dynamically remove (1304) GPUs from the peer-to-peer arrangement.

To establish this peer-to-peer arrangement, the control processor 1120 provides (1305) an isolation function to isolate the device PCIe address domain associated with the GPU from the local PCIe address domain associated with the host processor. In fig. 11, host processor 1110 includes or is coupled with a PCIe root complex that is associated with a local PCIe address space 1181. Control processor 1120 may provide a root complex for a 'global' or device PCIe address space 1182, or another element not shown in fig. 11 may provide such a root complex. Multiple GPUs are included in address space 1182, and global addresses 1165 and 1166 are employed as device/endpoint addresses for the associated GPUs. The two different PCIe address spaces are logically isolated from each other, and PCIe traffic or communications are not transmitted across the PCIe address spaces.

To interwork PCIe traffic or communications between the PCIe address spaces, control processor 1120 builds (1306) synthetic PCIe devices that represent the GPUs in the local PCIe address domain. The synthetic PCIe devices are formed according to logic provided by PCIe switch 1150 or control processor 1120, and each synthetic PCIe device provides a PCIe endpoint that represents the associated GPU in the local address space of a particular host processor. In addition, each synthetic device is provided with an address trap that intercepts PCIe traffic destined for the corresponding synthetic device and re-routes that PCIe traffic for delivery to the appropriate physical/actual GPU. Thus, control processor 1120 establishes address traps 1171-1172 for this purpose. In a first embodiment, PCIe traffic issued by host processor 1110 may be addressed for delivery to synthetic device 1141, namely to Local Address (LA) 1145. Synthetic device 1141 has been established as the endpoint for this traffic, and address trap 1171 is established to redirect this traffic for delivery to GPU 1161 at Global Address (GA) 1165. In a second embodiment, PCIe traffic issued by host processor 1110 may be addressed for delivery to synthetic device 1142 at LA 1146. Synthetic device 1142 has been established as the endpoint for this traffic, and address trap 1172 is established to redirect this traffic for delivery to GPU 1162 at GA 1166.
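
The sketch below illustrates one possible control-processor flow for attaching GPUs to a host: allocate a local-address window for a synthetic device and record a trap entry mapping it to the GPU's global address. The window size, addresses, and class names are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical attach/detach flow as a control processor might track it;
# all sizes and addresses are made up for illustration.
from dataclasses import dataclass

@dataclass
class TrapEntry:
    local_base: int    # synthetic device window in the host's local space
    global_base: int   # real device window in the global/device space
    size: int

class HostAttachment:
    def __init__(self, local_window_start: int, window_size: int = 0x100_0000):
        self.next_local = local_window_start
        self.size = window_size
        self.traps = []

    def attach_gpu(self, gpu_global_base: int) -> TrapEntry:
        """Create a synthetic device window for this host plus its trap entry."""
        entry = TrapEntry(self.next_local, gpu_global_base, self.size)
        self.next_local += self.size          # next synthetic device gets a fresh window
        self.traps.append(entry)
        return entry

    def detach_gpu(self, entry: TrapEntry):
        # Removing the trap effectively removes the GPU from the host's view.
        self.traps.remove(entry)

# Example: attach two GPUs to one host processor.
host = HostAttachment(0x9000_0000)
t1 = host.attach_gpu(0x4_2000_0000)
t2 = host.attach_gpu(0x4_3000_0000)
```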

The processing of PCIe traffic issued by the GPU may work in a similar manner. In a first embodiment, GPU1161 issues traffic for delivery to host processor 1110, and this traffic may identify addresses in the local address space of host processor 1110 instead of global address space addresses. Trap 1171 identifies this traffic as traffic destined for host processor 1110 and redirects the traffic for delivery to host processor 1110 in the address domain/space associated with host processor 1110. In a second embodiment, GPU 1162 issues traffic for delivery to host processor 1110, and this traffic may identify addresses in the local address space of host processor 1110 instead of global address space addresses. Trap 1172 identifies this traffic as destined for host processor 1110 and redirects the traffic for delivery to host processor 1110 in the address domain/space associated with host processor 1110.

In addition to the host-to-device traffic discussed above, isolation function 1121 can provide a peer-to-peer arrangement between GPUs. Control processor 1120 establishes address trap 1173, which redirects (1308) peer traffic, transferred by a first GPU, to a second GPU in the global/device PCIe address domain, where the peer traffic indicates the second GPU as a destination in the local PCIe address domain. The GPUs need not be aware of the different PCIe address spaces; as in the host-device embodiments above, each GPU uses the associated addresses in the host processor's local address space for traffic issued to the host processor. Likewise, when engaging in peer-to-peer communications, each GPU may issue PCIe traffic for delivery to another GPU using addressing native to the local address space of host processor 1110 rather than addressing native to the global/device address space. However, since each GPU is configured to respond to addressing in the global address space, address trap 1173 is configured to redirect that traffic accordingly. Because host processor 1110 typically communicates with the GPUs to initialize the peer-to-peer arrangement between the GPUs, the GPUs use addressing from the local address space of host processor 1110. Although this peer-to-peer arrangement is facilitated by control processor 1120 managing the PCIe fabric and isolation function 1121, the isolation function and the different PCIe address spaces are typically unknown to the host processor and the GPUs. Instead, the host processor communicates with synthetic devices 1141-1142 as if they were the actual GPUs. Likewise, GPUs 1161-1162 communicate with the host processor and with each other without awareness of the synthetic devices or the address trap functions. Thus, traffic issued by GPU 1161 for GPU 1162 uses addressing in the local address space of the host processor to which those GPUs are assigned. Address trap 1173 detects this traffic by its addressing in the local address space and redirects the traffic using addressing in the global address space.

In one particular embodiment of peer-to-peer communication, the host processor will first establish the arrangement between the GPUs and issue peer-to-peer control instructions to the GPUs that identify addressing within the host processor's local PCIe address space. Thus, even though the host processor communicates with synthetic devices established within the PCIe fabric or PCIe switch circuitry, the GPUs remain under the control of the host processor. When GPU 1161 has traffic for delivery to GPU 1162, GPU 1161 will address the traffic destined for GPU 1162 in the local address space (i.e., LA 1146 associated with synthetic device 1142), and address trap 1173 will redirect this traffic to GA 1166. This redirection may include translating the addressing between the PCIe address spaces, such as by replacing or modifying the addressing of the PCIe traffic to include the redirected destination address instead of the original destination address. When GPU 1162 has traffic for delivery to GPU 1161, GPU 1162 will address the traffic destined for GPU 1161 in the local address space (i.e., LA 1145 associated with synthetic device 1141), and address trap 1173 will redirect this traffic to GA 1165. This redirection may include replacing or modifying the addressing of the PCIe traffic to include the redirected destination address instead of the original destination address. Thus, a peer-to-peer link 1104 is logically created, which allows a more direct flow of communications between the GPUs.

FIG. 14 is presented to illustrate additional details regarding address space isolation and the selection of appropriate addresses when communicatively coupling host processors to PCIe devices (such as GPUs). In fig. 14, a computing platform 1400 is presented. Computing platform 1400 includes a number of host CPUs 1410, a management CPU 1420, PCIe fabric 1450, and one or more assemblies 1401-1402 that house a plurality of associated GPUs 1462-1466 and corresponding PCIe switches 1451-1452. Assemblies 1401-1402 may comprise any of the chassis, rack-mount, or JBOD assemblies herein, such as those found in fig. 1 and figs. 7-9. A plurality of PCIe links interconnect the elements of fig. 14, namely PCIe links 1453-1456. In general, PCIe link 1456 comprises a special control/management link that enables administrative or management-level access to the control of PCIe fabric 1450. However, it should be understood that a link similar to the other PCIe links may be employed instead.

According to the embodiments in figs. 10-13, isolation functionality may be established to allow PCIe devices (such as GPUs) to be dynamically provisioned to, and de-provisioned from, one or more host processors/CPUs. These isolation functions may provide separate PCIe address spaces or domains, such as a separate local PCIe address space for each host processor deployed, and a global or device PCIe address space shared by all of the actual GPUs. However, when certain additional downstream PCIe switch circuitry is employed, overlap between the addressing used within the local address spaces of the host processors and the global address space of the GPUs may cause conflicts or errors when that PCIe switch circuitry processes PCIe traffic.

Thus, FIG. 14 illustrates enhanced operations for selecting PCIe address assignments and address space configurations. Operations 1480 illustrate example operations of management CPU 1420 for use in configuring the isolation functions and address domains/spaces. Management CPU 1420 identifies (1481) when downstream PCIe switches are employed, such as when an external assembly is coupled over a PCIe link to the PCIe fabric that couples host processors to additional compute or storage elements. In fig. 14, these downstream PCIe switches are indicated by PCIe switches 1451 and 1452. Management CPU 1420 may use various discovery protocols over the PCIe fabric, sideband signaling (such as I2C or Ethernet signaling), or other processes to identify when these downstream switches are employed. In some embodiments, the downstream PCIe switches comprise a more primitive or less capable model/type of PCIe switch than the PCIe switches employed upstream, and management CPU 1420 may detect these configurations by model, or be programmed by an operator, to compensate for the reduced functionality. The reduced functionality may include an inability to handle multiple PCIe addressing domains/spaces as efficiently as other types of PCIe switches, which can lead to PCIe traffic conflicts. Thus, enhanced operation is provided in operations 1482-1483.

In operation 1482, management CPU 1420 establishes non-conflicting addressing within the device/global address space for each of the physical/actual PCIe devices with respect to the local PCIe address spaces of the host processors. The non-conflicting addressing typically comprises unique, non-overlapping addresses employed for the downstream/endpoint PCIe devices. This is done to prevent conflicts between PCIe addressing when the synthetic PCIe devices discussed herein are employed. When address translation is performed by the various address trap elements to redirect PCIe traffic from the local address space of a host processor to the global address space of a PCIe device, conflicts are prevented by intelligently selecting the addressing for the PCIe devices in the global address space. The global address space addresses for the devices are selected to be non-overlapping, non-common, or unique, such that no more than one host processor employs similar device addresses in its associated local address space. These addresses are indicated to the host processors during boot, initialization, enumeration, or instantiation of the associated PCIe devices and their synthetic counterparts, so that any associated host drivers employ unique addressing across the entire PCIe fabric, even though each host processor may have a logically separate/independent local address space. Once the addressing has been selected and indicated to the appropriate host processors, computing platform 1400 may operate (1483) the upstream PCIe switch circuitry and host processors according to the non-conflicting address spaces. Advantageously, PCIe traffic of the many host processors is far less likely to conflict.
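
One simple way to realize the non-conflicting allocation described above is sketched below: every endpoint device receives its own non-overlapping window in the global address space, so translated traffic from different local address spaces cannot collide. The base address and window size are arbitrary illustrative values, not prescribed by the figures.

```python
# Sketch of assigning unique, non-overlapping global address windows to every
# endpoint device across the fabric; base and window size are illustrative.
def assign_global_windows(device_ids, base=0x4_0000_0000, window=0x1000_0000):
    """Give each device its own window; no two windows overlap."""
    assignments = {}
    next_base = base
    for dev in device_ids:
        assignments[dev] = (next_base, next_base + window - 1)
        next_base += window
    return assignments

# Example: four GPUs behind downstream switches each get a distinct window.
for dev, (lo, hi) in assign_global_windows(["gpu0", "gpu1", "gpu2", "gpu3"]).items():
    print(dev, hex(lo), hex(hi))
```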

The description and drawings are included to depict specific embodiments to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these embodiments that fall within the scope of the disclosure. Those skilled in the art will also appreciate that the features described above can be combined in various ways to form multiple embodiments. Accordingly, the present invention is not limited to the specific embodiments described above, but only by the claims and their equivalents.
