Host managed coherent device memory
Note: This technology, Host Managed Coherent Device Memory, was created by M·S·那图 and V·桑吉潘 on 2019-05-28. Abstract: A system or apparatus may include: a processor core including one or more hardware processors; a processor memory for caching data; a memory link interface for coupling the processor core with one or more attached memory units; and platform firmware to: determine that a device is connected to the processor core across the memory link interface; determine that the device includes attached memory; determine a range of at least a portion of the attached memory available to the processor core; and map the range of that portion of the attached memory to the processor memory; wherein the processor core is to cache data using the range of that portion of the attached memory and the processor memory.
1. An apparatus, comprising:
a processor core;
a processor memory for caching data; and
platform firmware to:
determine that a device is connected to the processor core across a memory link interface;
determine that the device includes an attached memory unit;
determine a range of at least a portion of the attached memory unit available to the processor core; and
map the range of the portion of the attached memory unit to the processor memory; and wherein:
the processor core is to cache data using the range of the portion of the attached memory unit and the processor memory.
2. The apparatus of claim 1, wherein the memory link interface comprises a link conforming to one of an Intel Accelerator Link (IAL) protocol, a GenZ-based protocol, or a CAPI-based protocol.
3. The apparatus of claim 1, the platform firmware to receive a capability register block from the device, and wherein the platform firmware is to determine the range of the at least a portion of the attached memory unit from the capability register block.
4. The apparatus of claim 3, wherein the capability register block is a Designated Vendor-Specific Extended Capability (DVSEC) register block.
5. The apparatus of claim 1, the platform firmware to construct one or more Advanced Configuration and Power Interface (ACPI) tables using information received from the device or information received from an Extensible Firmware Interface (EFI) driver associated with the device.
6. The apparatus of claim 5, wherein the one or more ACPI tables comprise one or both of a System Resource Affinity Table (SRAT) or a Heterogeneous Memory Attribute Table (HMAT).
7. The apparatus of claim 6, wherein the HMAT comprises a system locality latency and bandwidth information structure and a memory subsystem address range structure.
8. The apparatus of claim 7, wherein the system locality latency and bandwidth information structure comprises bandwidth and latency information of the attached memory unit.
9. The apparatus of claim 7, wherein the memory subsystem address range structure comprises fields to indicate a System Physical Address (SPA) base of the attached memory unit and a length of the range of the attached memory unit available to the processor core.
10. The apparatus of claim 1, the platform firmware to:
determine that the attached memory unit is not initialized; and
cause the attached memory unit to be initialized.
11. A method for using a coherent memory, the method comprising:
determining, by a driver associated with a device, that the device is connected to a host processor across a link;
determining that the device comprises a coherent memory;
providing one or more attributes about the coherent memory to the host processor to map the coherent memory to system memory;
determining that the coherent memory is not initialized; and
initializing the coherent memory for use by the host processor, the host processor to use the coherent memory of the device and the system memory to store data.
12. The method of claim 11, wherein the driver comprises an Extensible Firmware Interface (EFI) driver associated with the device.
13. The method of claim 12, comprising providing one or more bandwidth or latency attributes of the coherent memory to platform firmware to construct one or more Advanced Configuration and Power Interface (ACPI) tables.
14. The method of claim 13, wherein the one or more ACPI tables comprise one or more of a Heterogeneous Memory Attribute Table (HMAT) or an NVDIMM Firmware Interface Table (NFIT).
15. The method of claim 12, comprising notifying platform firmware that the coherent memory is available using a software call defined in an EFI-based initialization protocol.
16. The method of claim 11, wherein the driver comprises an operating system driver associated with the device.
17. The method of claim 16, comprising notifying platform firmware that the coherent memory is available using one or more device-specific methods determined from the operating system driver.
18. A system, comprising:
a host processor core;
a system memory for caching data;
a device connected to the host processor by a link;
a coherent memory associated with the device; and
platform firmware to:
discover the device at system startup;
determine one or more attributes of the coherent memory; and
map at least a portion of the coherent memory, together with the system memory, to an address space; and wherein:
the host processor is to cache data using the system memory and the coherent memory.
19. The system of claim 18, wherein the device comprises an accelerator circuit implemented at least partially in hardware to provide processing acceleration for the host processor.
20. The system of claim 19, further comprising an accelerator link coupling the accelerator to the host processor.
21. The system of claim 20, wherein the accelerator link conforms to one of an Intel Accelerator Link (IAL) based protocol, a GenZ based protocol, or a CAPI based protocol.
22. The system of claim 18, wherein the coherent memory comprises host managed device memory (HDM).
23. The system of claim 18, the platform firmware to:
receive a capability register block from the device, the capability register block indicating one or more attributes of the coherent memory;
determine a memory size and address range available to the host processor from the capability register block; and
map the memory size and the address range available to the host processor to an address space with the system memory.
24. The system of claim 23, the platform firmware to:
construct one or more Advanced Configuration and Power Interface (ACPI) tables based on the attributes received in the capability register block; and
map the memory size and the address range to the address space using the one or more ACPI tables.
25. The system of claim 18, further comprising a device driver associated with the device, the device driver to:
initialize the coherent memory; and
provide one or more Advanced Configuration and Power Interface (ACPI) table data fragments for the coherent memory to the platform firmware to facilitate constructing one or more ACPI tables.
Background
In computing, a cache is a component that stores data so that future requests for the data can be provided more quickly. For example, data stored in a cache may be the result of an earlier calculation, or a copy of data stored elsewhere. In general, a cache hit may occur when the requested data is found in the cache, and a cache miss may occur when the requested data is not found in the cache. A cache hit is provided by reading data from the cache, which is typically faster than recalculating results or reading from a slower data storage area. Thus, improvements in efficiency can often be achieved by providing more requests from the cache.
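The hit/miss behavior described above can be sketched with a minimal memoization cache. This is an illustrative example only, not taken from the patent; the class and names are hypothetical:

```python
# Minimal illustration of cache hits and misses: the result of a slow
# computation is stored so that a repeated request is served quickly.
class SimpleCache:
    def __init__(self):
        self.store = {}     # cached data (e.g., copies of earlier results)
        self.hits = 0
        self.misses = 0

    def get(self, key, compute):
        if key in self.store:          # cache hit: read the stored copy
            self.hits += 1
            return self.store[key]
        self.misses += 1               # cache miss: recompute, keep a copy
        value = compute(key)
        self.store[key] = value
        return value

cache = SimpleCache()
square = lambda n: n * n
cache.get(4, square)   # miss: computed and stored
cache.get(4, square)   # hit: served from the cache
```

Serving the second request from `cache.store` rather than calling `square` again is the efficiency gain the paragraph describes.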
Drawings
FIG. 1 is a schematic diagram of a simplified block diagram of a system including a serial point-to-point interconnect for connecting I/O devices in a computer system, according to one embodiment.
Figure 2 is a schematic diagram of a simplified block diagram of a layered protocol stack, according to one embodiment.
FIG. 3 is a schematic diagram of an embodiment of a transaction descriptor.
Fig. 4 is a schematic diagram of an embodiment of a serial point-to-point link.
FIG. 5 is a schematic diagram of a processing system including a connected accelerator according to an embodiment of the disclosure.
Fig. 6 is a process flow diagram for discovering attached coherent memory in accordance with an embodiment of the present disclosure.
FIG. 7 is a process flow diagram for initializing attached coherent memory in accordance with an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of an example embodiment of a field programmable gate array (FPGA) according to some embodiments.
Fig. 9 is a block diagram of a processor 900 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with various embodiments.
Fig. 10 depicts a block diagram of a system 1000 according to one embodiment of the present disclosure.
Fig. 11 depicts a block diagram of a more specific first example system 1100, according to an embodiment of the disclosure.
Fig. 12 depicts a block diagram of a more specific second example system 1300, according to an embodiment of the disclosure.
Fig. 13 depicts a block diagram of a SoC, in accordance with an embodiment of the present disclosure.
FIG. 14 is a block diagram comparing the conversion of binary instructions in a source instruction set to binary instructions in a target instruction set using a software instruction converter, according to an embodiment of the disclosure.
Detailed Description
In the following description, numerous specific details are set forth, such as examples of specific types of processor and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific processor pipeline stages, specific interconnect layers, specific grouping/transaction configurations, specific transaction names, specific protocol exchanges, specific link widths, specific implementations and operations, etc., in order to provide a thorough understanding of the present invention. It may be evident, however, to one skilled in the art that these specific details need not be employed to practice the presently disclosed subject matter. In other instances, detailed descriptions of the following known components or methods are avoided so as not to unnecessarily obscure the present disclosure: for example, specific and alternative processor architectures, specific logic circuits/code for the algorithms described, specific firmware code, low-level interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific algorithmic representations in the form of code, specific power down and gating techniques/logic, and other specific operational details of computer systems.
Although the following embodiments may be described with reference to energy conservation, energy efficiency, processing efficiency, etc. in a particular integrated circuit (e.g., in a computing platform or microprocessor), other embodiments are also applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that may equally benefit from these features. For example, the disclosed embodiments are not limited to server computer systems, desktop computer systems, laptop computers, or Ultrabooks™, but may also be used in other devices such as handheld devices, smart phones, tablet computers, other thin notebook computers, system on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular telephones, internet protocol devices, digital cameras, Personal Digital Assistants (PDAs), and handheld PCs. Here, techniques similar to those for high performance interconnects may be applied to enhance performance (or even save power) in low power interconnects. Embedded applications typically include microcontrollers, Digital Signal Processors (DSPs), systems on a chip, network computers (NetPCs), set-top boxes, hubs, Wide Area Network (WAN) switches, or any other system that can perform the functions and operations taught below. Furthermore, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimization for energy conservation and efficiency. As may become apparent in the following description, the embodiments of the methods, apparatus and systems described herein (whether with reference to hardware, firmware, software or a combination thereof) may be considered critical for a "green technology" future balanced with performance considerations.
As computing systems evolve, the components therein become more complex. The complexity of interconnect architectures used to couple and communicate between components has also increased to ensure that the bandwidth requirements for optimal component operation are met. Furthermore, different market segments require different aspects of the interconnect architecture to meet the respective market. For example, servers require higher performance, and mobile ecosystems can sometimes sacrifice overall performance for energy savings. However, the sole purpose of most structures is to provide the highest possible performance and maximum energy savings. Moreover, a variety of different interconnects may potentially benefit from the subject matter described herein.
Peripheral Component Interconnect (PCI) express (PCIe) interconnect fabric architecture and Quick Path Interconnect (QPI) fabric architecture, among other examples, can potentially be improved according to one or more of the principles described herein, among other examples. For example, the primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture, spanning multiple market segments; clients (desktop and mobile), servers (standard and enterprise), and embedded and communication devices. PCI express is a high performance general purpose I/O interconnect that is defined for use in a variety of future computing and communication platforms. Some PCI attributes (e.g., its usage model, load-store architecture and software interfaces) have been maintained by its revisions, while previous parallel bus implementations have been replaced by highly scalable full serial interfaces. The most recent versions of PCI express utilize improvements in point-to-point interconnects, switch-based technologies, and packetized protocols to achieve new levels of performance and features. Some of the advanced features supported by PCI express are power management, quality of service (QoS), hot plug/hot swap support, data integrity, and error handling. Although the primary discussion herein refers to a new High Performance Interconnect (HPI) architecture, aspects of the invention described herein may be applicable to other interconnect architectures, for example, PCIe-compliant architectures, QPI-compliant architectures, MIPI-compliant architectures, high performance architectures, or other known interconnect architectures.
Referring to fig. 1, an embodiment of a fabric comprised of point-to-point links interconnecting a set of components is shown.
The
In one embodiment,
Here,
The switch/
Graphics accelerator 130 may also be coupled to
Turning to fig. 2, an embodiment of a layered protocol stack is shown.
Packets may be used to communicate information between components. Packets may be formed in the transaction layer 205 and the data link layer 210 to carry information from the sending component to the receiving component. As the transmitted packets flow through the other layers, the packets are extended with additional information for processing the packets at these layers. On the receiving side, the reverse process occurs and the packet is transformed from its physical layer 220 representation to a data link layer 210 representation and finally (for transaction layer packets) to a form that can be processed by the transaction layer 205 of the receiving device.
In one embodiment, the transaction layer 205 may provide an interface between the processing cores of the device and the interconnect architecture (e.g., the data link layer 210 and the physical layer 220). In this regard, the primary responsibilities of the transaction layer 205 may include the packaging and unpacking of packets (i.e., transaction layer packets or TLPs). The transaction layer 205 may also manage credit-based flow control for TLPs. In some implementations, separate transactions, i.e., transactions where requests and responses are separated by time, may be utilized, allowing the link to carry other traffic as the target device collects data for the responses, among other examples.
Credit-based flow control may be used to implement virtual channels and networks using interconnect fabrics. In one example, the device may advertise an initial credit for each of the receive buffers in the transaction layer 205. An external device (e.g.,
In one embodiment, the four transaction address spaces may include a configuration address space, a memory address space, an input/output address space, and a message address space. The memory space transaction includes one or more of a read request and a write request to transfer data to or from a memory-mapped location. In one embodiment, memory space transactions can use two different address formats, e.g., a short address format (e.g., a 32-bit address) or a long address format (e.g., a 64-bit address). Configuration space transactions may be used to access configuration spaces of various devices connected to the interconnect. Transactions to the configuration space may include read requests and write requests. Message space transactions (or simply messages) may also be defined to support in-band communication between interconnect agents. Thus, in an example embodiment, the transaction layer 205 may package the packet header/
Referring quickly to fig. 3, an example embodiment of a transaction-level packet descriptor is shown. In one embodiment, the
According to one implementation, the local transaction identifier field 308 is a field generated by the requesting agent and may be unique to all outstanding requests that require completion for that requesting agent. Further, in this example,
The attributes field 304 specifies the nature and relationship of the transaction. In this regard, the attribute field 304 is potentially used to provide additional information that allows modification of default processing of transactions. In one embodiment, attribute field 304 includes a
In this example, the sort attributes
Returning to the discussion of fig. 2, link layer 210 (also referred to as data link layer 210) may serve as an intermediate stage between transaction layer 205 and physical layer 220. In one embodiment, it is the responsibility of the data link layer 210 to provide a reliable mechanism for exchanging Transaction Layer Packets (TLPs) between two components on a link. One side of the data link layer 210 accepts a TLP packaged by the transaction layer 205, applies a packet sequence identifier 211 (i.e., an identification number or a packet number), calculates and applies an error detection code (i.e., a CRC 212), and submits the modified TLP to the physical layer 220 for transmission across the physical link to an external device.
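The data link layer framing just described can be sketched as follows. This is an illustrative simplification: `zlib.crc32` stands in for the actual link-layer CRC, and the two-byte sequence header is an assumed layout, not the PCIe wire format:

```python
import zlib

# Sketch of link-layer framing: prepend a packet sequence identifier to
# the TLP accepted from the transaction layer, then append an error
# detection code before handing the frame to the physical layer.
def frame_tlp(tlp_bytes, seq):
    header = seq.to_bytes(2, "big")          # packet sequence identifier
    crc = zlib.crc32(header + tlp_bytes)     # error detection code
    return header + tlp_bytes + crc.to_bytes(4, "big")

frame = frame_tlp(b"example TLP", seq=1)
```

On the receiving side the reverse happens: the CRC is checked and stripped, and the TLP is passed back up to the transaction layer.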
In one example, physical layer 220 includes a logical sub-block 221 and an electrical sub-block 222 to physically transmit packets to an external device. Here, the logical sub-block 221 is responsible for the "digital" functions of the physical layer 220. In this regard, the logical sub-block may include a transmit section for preparing outgoing information for transmission by the electrical sub-block 222, and a receiver section for identifying and preparing received information before passing it to the link layer 210.
The electrical sub-block 222 includes a transmitter and a receiver. The transmitter is provided with symbols by the logical sub-block 221, which the transmitter serializes and transmits to the external device. The receiver is provided with serialized symbols from the external device and transforms the received signal into a bit stream. The bit stream is deserialized and provided to the logical sub-block 221. In one example embodiment, an 8b/10b transmission code is employed, in which ten-bit symbols are transmitted/received. Here, special symbols are used to frame the packet with the
As stated above, although the transaction layer 205, the link layer 210, and the physical layer 220 are discussed with reference to a particular embodiment of a protocol stack (e.g., a PCIe protocol stack), the layered protocol stack is not so limited. Indeed, any layered protocol may be included/implemented and adapted for the features discussed herein. As an example, a port/interface represented as a layered protocol may include: (1) a first layer, i.e., a transaction layer, for packaging packets; a second layer for ordering packets, i.e., a link layer; and a third layer, i.e., a physical layer, for transmitting packets. As a specific example, a high performance interconnect layering protocol as described herein is used.
Referring next to FIG. 4, an example embodiment of a serial point-to-point architecture is shown. A serial point-to-point link may include any transmission path for transmitting serial data. In the illustrated embodiment, the link may include two low voltage differential drive signal pairs: a transmit
A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. The connection between two devices (e.g.,
A differential pair may refer to two transmission paths, e.g.,
Intel Accelerator Link (IAL) or other technologies (e.g., GenZ, CAPI) define a universal memory interface that allows memory associated with a discrete device, such as an accelerator, to be used as coherent memory. In many cases, the discrete devices and associated memory may be connected cards, or in a separate chassis from the core processor(s). The result of introducing coherent memory associated with the device is that the device memory is not tightly coupled to the CPU or platform. Platform specific firmware is not expected to know the device details. For modularity and interoperability reasons, memory initialization responsibility must be divided fairly between platform-specific firmware and device-specific firmware/software.
The present disclosure defines a memory initialization procedure and architecture interface that allows device specific firmware/software to abstract device specific initialization steps from platform firmware. This provides the device vendor with significant flexibility as to which device-specific entity performs the memory initialization. Initialization may be performed by a dedicated microcontroller on the device, device pre-boot Unified Extensible Firmware Interface (UEFI) firmware, or a post-Operating System (OS) device driver.
The present disclosure uses an IAL-attached memory (ial.mem protocol) as an example implementation, but may also be extended to other technologies, such as those promulgated by the GenZ alliance or the CAPI or OpenCAPI specifications. The IAL builds on top of PCIe and adds support for coherent memory attachment. In general, however, the systems, devices, and programs described herein may use other types of input/output buses that facilitate attachment of coherent memory.
FIG. 5 is a schematic diagram of a processing system 500 including a connected accelerator according to an embodiment of the disclosure. Processing system 500 may include a host processor 501 and a connected device 530. Connected device 530 may be a separate device connected across the IAL-based interconnect or connected through another similar interconnect. Connected device 530 may be integrated within the same chassis as host processor 501 or may be housed in a separate chassis.
Host processor 501 may include a processor core 502 (labeled as CPU 502). Processor core 502 may include one or more hardware processors. The processor core 502 may be coupled to a memory module 505. Memory module 505 may include Double Data Rate (DDR) interleaved memory, such as dual in-line memory modules DIMM1 506 and DIMM2 508, but may include more memory and/or other memory types. Host processor 501 may include a memory controller 504 implemented in one or a combination of hardware, software, or firmware. The memory controller 504 may include logic to manage the flow of data to and from the host processor 501 and the memory module 505.
Connected device 530 may be coupled to host processor 501 across an interconnect. As an example, the connected device 530 may include accelerators ACC1 532 and ACC2 542. ACC1 532 may include a memory controller MC1 534 capable of controlling coherent memory ACC1_MEM 536. ACC2 542 may comprise a memory controller MC2 544 capable of controlling coherent memory ACC2_MEM 546. The connected device 530 may include additional accelerators, memory, and the like. ACC1_MEM 536 and ACC2_MEM 546 may be coherent memories used by the host processor; likewise, the memory module 505 may also be a coherent memory. ACC1_MEM 536 and ACC2_MEM 546 may be or include host managed device memory (HDM).
Host processor 501 may include software modules 520 for performing one or more memory initialization processes. Software modules 520 may include an Operating System (OS) 522, platform Firmware (FW) 524, one or more OS drivers 526, and one or more EFI drivers 528. Software modules 520 may include logic embodied on a non-transitory machine-readable medium and may include instructions that when executed cause one or more software modules to initialize the coherent memories ACC1_MEM 536 and ACC2_MEM 546.
For example, the size and overall characteristics of the coherent memories ACC1_MEM 536 and ACC2_MEM 546 may be determined in advance by the platform firmware 524 during startup through standard hardware registers or a Designated Vendor-Specific Extended Capability (DVSEC) register block. Platform firmware 524 maps device memories ACC1_MEM 536 and ACC2_MEM 546 into a coherent address space. Device firmware or software 550 performs device memory initialization and signals platform firmware 524 and/or system software 520 (e.g., OS 522). Device firmware 550 then communicates the detailed memory characteristics to platform firmware 524 and/or system software 520 (e.g., OS 522) via software protocols.
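The division of labor described above can be sketched as a small simulation: platform firmware reads size information from DVSEC-like registers at boot and maps the range, and device firmware later initializes the memory and raises Mem_Active. The field names follow the patent's description, but the classes and flow are an illustrative sketch, not the actual firmware interfaces:

```python
# Sketch of the boot-time handshake between platform firmware and a
# device carrying host managed device memory (HDM).
class Device:
    def __init__(self, hdm_size_gb):
        # DVSEC-style fields exposed to platform firmware
        self.dvsec = {"Memory_Size_GB": hdm_size_gb,
                      "Mem_HwInit_Mode": 0,   # 0: device memory needs init
                      "Mem_Active": 0}

    def run_device_init(self):
        # Device-specific initialization (microcontroller, device
        # firmware, or driver); signals completion via Mem_Active.
        self.dvsec["Mem_Active"] = 1

class PlatformFirmware:
    def __init__(self):
        self.system_map = []    # (base address, size) of mapped ranges

    def enumerate(self, device, base_addr):
        # Read the HDM size from the device's registers and map the
        # range into the coherent system address space.
        size = device.dvsec["Memory_Size_GB"]
        self.system_map.append((base_addr, size))

fw = PlatformFirmware()
dev = Device(hdm_size_gb=16)
fw.enumerate(dev, base_addr=0x100000000)   # hypothetical base address
dev.run_device_init()                      # device-side init happens later
```

Note that mapping happens before the device memory is initialized; this mirrors the patent's point that platform firmware can reserve and map the range early while device-specific software completes initialization afterwards.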
Fig. 6 is a process flow diagram 600 for discovering attached coherent memory in accordance with an embodiment of the present disclosure. The following processes may be performed by platform firmware or other software modules at startup of a computing system. At startup, the platform FW may initialize 602 memory associated with a host processor Core (CPU).
The platform firmware then discovers the attached device and the memory capabilities of the memory associated with the attached device (referred to herein as the attached memory) (604). For example, attached devices that support the IAL.mem protocol can implement host managed device memory (HDM). The HDM memory is coherent with the host system. The platform firmware may determine how many host-managed device memory (HDM) ranges the device implements, and determine the size of each of the HDM ranges.
The platform firmware may use information provided by the device to determine the memory capabilities of the device. For example, an IAL protocol based link may expose connected PCIe devices or functions. PCIe devices/functions carry a Designated Vendor-Specific Extended Capability (DVSEC) register block. An example of a DVSEC register block is provided in table 1.
TABLE 1 DVSEC register Block
Various HDM-related register fields are described below.
Flex Bus capability: reports device capabilities
Flex Bus lock
Flex Bus control
Flex Bus range size high (1-2 copies)
Flex Bus range size low (1-2 copies)
Flex Bus range base high (1-2 copies)
Flex Bus range base low (1-2 copies)
A Mem-capable device will accept a host access to address A as targeting its local HDM memory if both of the following conditions are satisfied:
Memory_Base[63:20] <= A[63:20] < Memory_Base[63:20] + Memory_Size[63:20]
Memory_Active AND Mem_Enable = 1
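The decode check above can be expressed as a small helper. This is an illustrative sketch, not the hardware decoder: `base` and `size` stand for the values held in the Memory_Base[63:20] and Memory_Size[63:20] fields, so both are multiples of 1 MB:

```python
# An access to address a targets the device's local HDM only when it
# falls in [base, base + size) and both enable bits are set.
def targets_hdm(a, base, size, memory_active, mem_enable):
    in_range = base <= a < base + size
    return in_range and memory_active == 1 and mem_enable == 1

BASE = 0x1_0000_0000    # example base: 4 GB, 1 MB aligned (hypothetical)
SIZE = 16 << 20         # example HDM range size: 16 MB (hypothetical)
```

With `memory_active` or `mem_enable` clear, even an in-range address is not decoded to the HDM, matching the second condition above.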
Once the HDM range and memory size are determined, the platform firmware may map the HDM range into the system memory space along with CPU-attached memory (606). The platform firmware describes device memory size and locality information to the OS via Advanced Configuration and Power Interface (ACPI) System Resource Affinity Table (SRAT) and Heterogeneous Memory Attribute Table (HMAT) entries. The platform firmware combines the HMAT fragment table generated by the device EFI driver with the platform information and builds the ACPI HMAT table for consumption by the OS. Referring again to FIG. 5, FIG. 5 shows two accelerators, ACC1 532 and ACC2 542, attached to the CPU 502 via accelerator links 510 and 512, respectively. The accelerator links 510 and 512 may be IAL, GenZ, CAPI, or links based on other protocols. CPU 502 also has a local memory controller 504, and DIMM1 506 and DIMM2 508 are connected to CPU 502 across two channels.
To construct the ACPI HMAT, platform firmware 524 combines the information it has about the CPU 502 and various CPU links with the information it learns about accelerator memories 536 and 546 from accelerator EFI driver 528. Examples are provided below:
Information known to the platform firmware:
DIMM1 506, DIMM2 508 size: 128 GB;
DDR read/write latency: 50 ns;
DDR read/write bandwidth: 20 GB/s/CH;
Intel AL 510, 512 latency: 40 ns;
Intel AL 510, 512 bandwidth: 30 GB/s;
Intel AL 510, 512 topology.
Information exposed by the ACC1 532 EFI driver via HMAT fragment:
ACC1_MEM 536 size: 16 GB;
ACC1_MEM 536 read/write latency: 60 ns;
ACC1_MEM 536 read/write bandwidth: 80 GB/s.
Information exposed by the ACC2 542 EFI driver via HMAT fragment:
ACC2_MEM 546 size: 8 GB;
ACC2_MEM 546 read/write latency: 60 ns;
ACC2_MEM 546 read/write bandwidth: 80 GB/s.
Platform firmware 524 combines these data to construct various ACPI tables. Examples of ACPI tables are shown below. The HMAT table consists of a system locality latency and bandwidth information structure and a memory subsystem address range structure.
TABLE 2 example SRAT
Device          Proximity Domain (_PXM)
_SB.PCI.ACC1    1
_SB.PCI.ACC2    2
TABLE 3 example memory subsystem Address Range Structure
PD = proximity domain; PPD = processor PD; MPD = memory PD.
TABLE 4 example system locality latency and bandwidth information structure
The processor and local memory are part of the same non-uniform memory access (NUMA) node (referred to as a "proximity domain" in the ACPI specification). Each accelerator is associated with a separate memory-only NUMA node. These are listed in the ACPI SRAT table. If the memory is not initialized at OS handoff, the size field is set to 0 and the NUMA node is marked as disabled. In this case, the OS driver is responsible for initializing the device memory and notifying the OS.
Similar to other PCIe devices, a _PXM method may be used to identify the proximity between an accelerator PCIe function and an accelerator NUMA node.
The system BIOS calculates the CPU-to-accelerator memory latency by adding the Intel AL link latency and the local memory access latency. The link latency is known to the system BIOS, and the accelerator EFI driver reports the accelerator-side latency via the HMAT fragment. The sample calculation is shown in row 0, column 2 of the latency table.
The CPU-to-accelerator-memory bandwidth is equal to the Intel AL bandwidth or the local memory bandwidth, whichever is lower. The Intel AL bandwidth is known to the system BIOS, and the accelerator EFI driver reports the accelerator-side bandwidth via the HMAT fragment.
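The latency and bandwidth composition described in the two paragraphs above can be sketched as follows. The function names are illustrative (not part of any firmware interface); the figures come from the example configuration listed earlier (40 ns Intel AL latency, 60 ns acc1.mem latency, 30 GB/s link bandwidth, 80 GB/s acc1.mem bandwidth).

```python
def cpu_to_accelerator_latency(link_latency_ns: int,
                               device_latency_ns: int) -> int:
    # The system BIOS adds the Intel AL link latency (known to the BIOS)
    # to the accelerator-side latency reported via the HMAT fragment.
    return link_latency_ns + device_latency_ns

def cpu_to_accelerator_bandwidth(link_bw_gbs: int,
                                 device_bw_gbs: int) -> int:
    # Effective bandwidth is whichever of the link bandwidth or the
    # accelerator's local memory bandwidth is lower.
    return min(link_bw_gbs, device_bw_gbs)

# Example figures from the configuration above:
latency = cpu_to_accelerator_latency(40, 60)      # -> 100 ns
bandwidth = cpu_to_accelerator_bandwidth(30, 80)  # -> 30 GB/s
```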
The platform firmware may indicate that the device memory is cacheable by the processor (608). The platform firmware may determine whether the attached memory is initialized (610). The DVSEC register block may include a field, Mem_HwInit_Mode, indicating the memory initialization mode: if Mem_HwInit_Mode equals 1, the device memory has already been initialized and can be used; if Mem_HwInit_Mode is 0, device memory initialization is performed (614). Even while Mem_Active is 0, the attached memory can be accessed by host processor 502 as coherent memory, although it cannot yet be used to store code/data.
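The decision in steps 610-614 can be modeled as below. The field name follows the DVSEC field named in this paragraph, but the function itself is a hypothetical sketch, not a defined firmware API.

```python
def attached_memory_action(mem_hwinit_mode: int) -> str:
    # Mem_HwInit_Mode == 1: device memory was already initialized by
    # hardware and can be mapped and used immediately.
    if mem_hwinit_mode == 1:
        return "map and use"
    # Mem_HwInit_Mode == 0: an explicit device memory initialization
    # step (614) must be performed first.
    return "initialize first"
```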
Fig. 7 is a process flow diagram 700 for initializing attached coherent memory in accordance with an embodiment of the disclosure. The initialization process may be performed by a device EFI driver or an OS driver. Until attached memory initialization occurs, the UEFI firmware tracks the memory range as "firmware reserved memory". The device is responsible for returning all 1s on reads to the HDM range and discarding writes to it.
First, the platform firmware determines that the attached memory is not initialized (e.g., by reading Mem_HwInit_Mode = 0 from the DVSEC field) (702). The device driver may initialize the attached memory using device-specific information inherent to the driver or read from a device or system field (704). The driver may then indirectly cause the Mem_Active field to be set (706): the driver sets one or more other fields that prompt the device hardware to set Mem_Active.
The driver may then notify the platform firmware that the attached memory is available for caching using a software call (708). For example, the EFI driver may call the SetMemorySpaceAttributes() function or an equivalent function defined in the UEFI Platform Initialization specification. As another example, the OS driver may invoke an ACPI _DSM (Device Specific Method). The _DSM process may notify the OS memory manager about the additional available memory via mechanisms and procedures such as the dynamic hardware partitioning protocol for OS drivers.
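As a rough model of steps 702-708, assuming a register interface with the DVSEC fields named above (the class and method names are illustrative, and the indirect hardware-driven setting of Mem_Active is modeled here as a direct write):

```python
class AttachedMemoryDevice:
    """Toy model of the device-side state touched by flow 700."""

    def __init__(self):
        self.mem_hwinit_mode = 0     # hardware did not pre-initialize (702)
        self.mem_active = 0          # set indirectly by device hardware (706)
        self.firmware_notified = False

    def driver_initialize(self):
        # (702) flow applies only when hardware has not initialized memory.
        assert self.mem_hwinit_mode == 0
        # (704) device-specific initialization of the attached memory
        # would happen here.
        # (706) writing device-specific trigger fields prompts the
        # hardware to set Mem_Active; modeled as a direct set.
        self.mem_active = 1
        # (708) notify platform firmware / OS that memory is available,
        # e.g. via SetMemorySpaceAttributes() or an ACPI _DSM.
        self.firmware_notified = True

dev = AttachedMemoryDevice()
dev.driver_initialize()
```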
In some embodiments, the EFI driver may perform the attached memory initialization. The EFI driver may provide information to the platform firmware that allows the platform firmware to construct a memory map.
Fig. 8 illustrates a field programmable gate array (FPGA) 800 in accordance with certain embodiments. In particular embodiments, the compression engine 108 may be implemented by the FPGA 800 (e.g., the functionality of the compression engine 108 may be implemented by circuitry of the operating logic 804). An FPGA is a semiconductor device that includes configurable logic. The FPGA may be programmed via a data structure (e.g., a bitstream) having any suitable format that defines how the logic of the FPGA is to be configured. The FPGA may be reprogrammed any number of times after it is manufactured.
In the depicted embodiment, FPGA 800 includes operating logic 804, a communication controller 806, and a memory controller 810.
The communication controller 806 may enable the FPGA800 to communicate with other components of the computer system (e.g., a compression engine) (e.g., to receive commands to compress a data set). The memory controller 810 may enable the FPGA to read data (e.g., operands or results) from or write data to the memory of the computer system. In various embodiments, memory controller 810 may comprise a Direct Memory Access (DMA) controller.
Processor cores may be implemented in different ways, for different purposes, and in different processors. For example, implementations of such cores may include: 1) a general purpose in-order core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; 3) dedicated cores intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU comprising one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) coprocessors comprising one or more dedicated cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors result in different computer system architectures, which may include: 1) the coprocessor on a chip separate from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as dedicated logic, e.g., integrated graphics and/or scientific (throughput) logic, or as a dedicated core); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as application core(s) or application processor(s)), the coprocessors described above, and additional functionality. An exemplary core architecture is described next, followed by exemplary processor and computer architectures.
Fig. 9 is a block diagram of a processor 900 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with various embodiments. The solid line boxes in FIG. 9 show a processor 900 having a single core 902A, a
Thus, different implementations of the processor 900 may include: 1) a CPU, where
In various embodiments, a processor may include any number of processing elements, which may be symmetric or asymmetric. In one embodiment, a processing element refers to hardware or logic that supports software threads. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a processing unit, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of maintaining a state of a processor (e.g., an execution state or an architectural state). In other words, in one embodiment, a processing element refers to any hardware capable of being independently associated with code (e.g., software threads, operating systems, applications, or other code). A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.
A core may refer to logic located on an integrated circuit capable of maintaining an independent architectural state, where each independently maintained architectural state is associated with at least some dedicated execution resources. A hardware thread may refer to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, the boundaries between the naming of hardware threads and cores overlap when some resources are shared while others are dedicated to the architectural state. Often, however, the operating system views the cores and hardware threads as separate logical processors, where the operating system is able to schedule operations on each logical processor separately.
The memory hierarchy includes one or more levels of cache within the core, a set of one or more shared cache units 906, and an external memory (not shown) coupled to the set of integrated
In some embodiments, one or more of the cores 902A-N can be multi-threaded. The
With respect to the architectural instruction set, the cores 902A-N may be homogeneous or heterogeneous; that is, two or more of the cores 902A-N may be capable of executing the same instruction set, while other cores may be capable of executing only a subset of the instruction set or a different instruction set.
Fig. 10-14 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptop computers, desktop computers, handheld PCs, personal digital assistants, engineering workstations, servers, network appliances, hubs, switches, embedded processors, Digital Signal Processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular telephones, portable media players, handheld devices, and various other electronic devices are also suitable for performing the methods described in this disclosure. In general, various systems or electronic devices capable of containing a processor and/or other execution logic as disclosed herein are generally suitable.
Fig. 10 depicts a block diagram of a system 1000 according to one embodiment of the present disclosure. The system 1000 may include one or more processors 1010, 1015 coupled to a controller hub 1020. In one embodiment, the controller hub 1020 includes a Graphics Memory Controller Hub (GMCH) 1090 and an input/output hub (IOH) 1050 (which may be on separate chips or on the same chip); the GMCH 1090 includes a memory controller and a graphics controller coupled to a memory 1040 and a coprocessor 1045; the IOH 1050 couples an input/output (I/O) device 1060 to the GMCH 1090. Alternatively, one or both of the memory controller and the graphics controller are integrated within the processor (as described herein), the memory 1040 and the coprocessor 1045 are coupled directly to the processor 1010, and the controller hub 1020 is a single chip that includes the IOH 1050.
The optional nature of the additional processor 1015 is indicated by dashed lines in FIG. 10. Each processor 1010, 1015 may include one or more of the processing cores described herein and may be some version of the processor 900.
The memory 1040 may be, for example, a Dynamic Random Access Memory (DRAM), a Phase Change Memory (PCM), other suitable memory, or any combination thereof. The memory 1040 may store any suitable data, such as data used by the processors 1010, 1015 to provide the functionality of the computer system 1000. For example, data associated with executed programs or files accessed by processors 1010, 1015 may be stored in memory 1040. In various embodiments, memory 1040 may store data and/or sequences of instructions that are used or executed by processors 1010, 1015.
In at least one embodiment, the controller hub 1020 communicates with the processor(s) 1010, 1015 via a multi-drop bus such as a front-side bus (FSB), a point-to-point interface such as a Quick Path Interconnect (QPI), or similar connection 1095.
In one embodiment, the coprocessor 1045 is a special-purpose processor, such as a high-throughput MIC processor, a network or communication processor, compression and/or decompression engines, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1020 may include an integrated graphics accelerator.
There may be various differences between the physical resources 1010, 1015 in terms of the range of metrics of merit including architectural characteristics, microarchitectural characteristics, thermal characteristics, power consumption characteristics, and the like.
In one embodiment, processor 1010 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. Processor 1010 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1045. Thus, the processor 1010 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1045. Coprocessor(s) 1045 accepts and executes received coprocessor instructions.
Fig. 11 depicts a block diagram of a more specific first example system 1100, according to an embodiment of the disclosure. As shown in FIG. 11, multiprocessor system 1100 is a point-to-point interconnect system, and includes a first processor 1170 and a second processor 1180 coupled via a point-to-point interconnect 1150. Each of processors 1170 and 1180 may be some version of the processor. In one embodiment of the disclosure, processors 1170 and 1180 are processors 1110 and 1115, respectively, and coprocessor 1138 is coprocessor 1145. In another embodiment, processors 1170 and 1180 are respectively processor 1110 and coprocessor 1145.
Processors 1170 and 1180 are shown including Integrated Memory Controller (IMC) units 1172 and 1182, respectively. Processor 1170 also includes as part of its bus controller units point-to-point (P-P) interfaces 1176 and 1178; similarly, the second processor 1180 includes P-P interfaces 1186 and 1188. Processors 1170, 1180 may exchange information via a point-to-point (P-P) interface 1150 using P-P interface circuits 1178, 1188. As shown in FIG. 11, IMCs 1172 and 1182 couple the processors to respective memories, namely a memory 1132 and a memory 1134, which may be portions of main memory locally attached to the respective processors.
Processors 1170, 1180 may each exchange information with a chipset 1190 via individual P-P interfaces 1152, 1154 using point to point interface circuits 1176, 1194, 1186, 1198. Chipset 1190 may optionally exchange information with the coprocessor 1138 via a high-performance interface 1139. In one embodiment, the coprocessor 1138 is a special-purpose processor, such as a high-throughput MIC processor, a network or communication processor, compression and/or decompression engines, a graphics processor, a GPGPU, an embedded processor, or the like.
A shared cache (not shown) may be included in either processor or external to both processors but connected with the processors via a P-P interconnect, such that if either or both processors are placed in a low power mode, local cache information for the processors may be stored in the shared cache.
Chipset 1190 may be coupled to a first bus 1116 via an interface 1196. In one embodiment, first bus 1116 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.
As shown in fig. 11, various I/O devices 1114 may be coupled to first bus 1116, along with a bus bridge 1118, which couples first bus 1116 to a second bus 1120. In one embodiment, one or more additional processors 1115, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (e.g., graphics accelerators or Digital Signal Processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1116. In one embodiment, second bus 1120 may be a Low Pin Count (LPC) bus. Various devices may be coupled to second bus 1120 including, for example, a keyboard and/or mouse 1122, communication devices 1127, and a storage unit 1128 (such as a disk drive or other mass storage device) which, in one embodiment, may include instructions/code and data 1130. Further, an audio I/O 1124 may be coupled to the second bus 1120. Note that the present disclosure contemplates other architectures. For example, instead of the point-to-point architecture of FIG. 11, a system may implement a multi-drop bus or other such architecture.
Fig. 12 depicts a block diagram of a more specific
Fig. 12 illustrates that the
Fig. 13 depicts a block diagram of a SoC 1300 in accordance with an embodiment of the present disclosure. The dashed boxes are optional features on more advanced SoCs. In fig. 13, interconnect unit(s) 1302 are coupled to: an application processor 1310 that includes a set of one or more cores 1002A-N and shared cache unit(s) 1006; a system agent unit 1010; bus controller unit(s) 1016; integrated memory controller unit(s) 1014; a set of one or more coprocessors 1320 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a Static Random Access Memory (SRAM) unit 1330; a Direct Memory Access (DMA) unit 1332; and a display unit 1340 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1320 include a special-purpose processor, such as a network or communication processor, compression and/or decompression engine, GPGPU, high-throughput MIC processor, embedded processor, or the like.
In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on the processor, off the processor, or partially on the processor and partially off the processor.
FIG. 14 is a block diagram comparing the conversion of binary instructions in a source instruction set to binary instructions in a target instruction set using a software instruction converter, according to an embodiment of the disclosure. In the illustrated embodiment, the instruction translator is a software instruction translator, but alternatively, the instruction translator may be implemented in software, firmware, hardware, or various combinations thereof. Fig. 14 shows that a program in the form of a high-
A design may go through various stages, from creation to simulation to fabrication. The data representing the design may represent the design in a variety of ways. First, as is useful in simulations, the hardware may be represented using a Hardware Description Language (HDL) or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be generated at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In some implementations, such data may be stored in a database file format such as Graphic Data System II (GDS II), Open Artwork System Interchange Standard (OASIS), or a similar format.
In some implementations, the software-based hardware model as well as the HDL and other functional description language objects can include Register Transfer Language (RTL) files, among other examples. Such objects may be machine parsable such that a design tool may accept an HDL object (or model), parse the HDL object for attributes of the hardware being described, and determine physical circuitry and/or on-chip layout from the object. The output of the design tool may be used to manufacture the physical device. For example, the design tool may determine from the HDL object the configuration of various hardware and/or firmware elements, such as bus widths, registers (including size and type), memory blocks, physical link paths, fabric topology, and other attributes of the system implemented to implement modeling in the HDL object. The design tools may include tools for determining the topology and fabric configuration of a system on a chip (SoC) and other hardware devices. In some instances, HDL objects may be used as a basis for developing models and design files that may be used by a manufacturing facility to manufacture the described hardware. In practice, the HDL objects themselves may be provided as input to the manufacturing system software to cause the manufacture of the described hardware.
In any representation of the design, data representing the design may be stored in any form of a machine-readable medium. A memory, or a magnetic or optical storage device such as a disc, may be the machine-readable medium that stores information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Accordingly, a communication provider or a network provider may store, at least temporarily, an article (e.g., information encoded into a carrier wave) embodying techniques of embodiments of the present disclosure on a tangible, machine-readable medium.
In various embodiments, a medium storing a representation of a design may be provided to a manufacturing system (e.g., a semiconductor manufacturing system capable of manufacturing integrated circuits and/or related components). The design representation may instruct the system to manufacture a device capable of performing any combination of the functions described above. For example, the design representation may indicate the system as to which components are to be manufactured, how the components should be coupled together, where the components should be placed on the device, and/or as to other suitable specifications related to the device to be manufactured.
Accordingly, one or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represent various logic within a processor, which when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, often referred to as "IP cores," may be stored on a non-transitory tangible machine-readable medium and provided to various customers or manufacturing facilities for loading into the fabrication machines that make the logic or processor.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code, such as code 1130 shown in FIG. 11, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the scope of the mechanisms described herein is not limited to any particular programming language. In various embodiments, the language may be a compiled or interpreted language.
The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium capable of being executed by (or otherwise accessed by) a processing element. A machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine (e.g., a computer or electronic system). For example, a machine-accessible medium includes random access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; a magnetic or optical storage medium; a flash memory device; an electrical storage device; an optical storage device; an acoustic storage device; or another form of storage device for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media that may receive information from them.
Instructions for programming logic to perform embodiments of the disclosure may be stored within a memory (e.g., DRAM, cache, flash memory, or other storage) in a system. Further, the instructions may be distributed via a network or by other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
Logic may be used to implement any of the functions of the various components. "logic" may refer to hardware, firmware, software, and/or combinations of each for performing one or more functions. As an example, logic may include hardware, such as a microcontroller or processor, associated with a non-transitory medium for storing code adapted to be executed by the microcontroller or processor. Thus, in one embodiment, reference to logic refers to hardware specifically configured to identify and/or execute code to be retained on non-transitory media. Further, in another embodiment, the use of logic refers to a non-transitory medium including code that is particularly adapted to be executed by a microcontroller to perform predetermined operations. And as may be inferred, in yet another embodiment, the term logic (in this example) may refer to a combination of hardware and non-transitory media. In various embodiments, logic may include a microprocessor or other processing element operable to execute software instructions, discrete logic such as an Application Specific Integrated Circuit (ASIC), a programmed logic device such as a Field Programmable Gate Array (FPGA), a memory device containing instructions, a combination of logic devices (e.g., as may be found on a printed circuit board), or other suitable hardware and/or software. Logic may include one or more gates or other circuit components, which may be implemented by, for example, transistors. In some embodiments, logic may also be fully embodied as software. The software may be embodied as a software package, code, instructions, instruction sets, and/or data recorded on a non-transitory computer-readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in a memory device. Often, logical boundaries shown as separate generally vary and potentially overlap. 
For example, the first logic and the second logic may share hardware, software, firmware, or a combination thereof, while potentially retaining some separate hardware, software, or firmware.
In one embodiment, use of the phrases "for" or "configured to" refers to arranging, putting together, manufacturing, offering for sale, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform the designated task if it is designed, coupled, and/or interconnected to perform said task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in such a manner that during operation its 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of the apparatus, hardware, and/or element, where the apparatus, hardware, and/or element is designed to perform a particular task while operating.
Furthermore, in one embodiment, use of the phrases "capable of" and/or "operable to" refers to an apparatus, logic, hardware, and/or element designed in such a way as to enable its use in a specified manner. Note that such use, in one embodiment, refers to the latent state of the apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use in a specified manner.
A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as the use of 1's and 0's, which simply represent binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash memory cell, can hold a single logic value or multiple logic values. However, other representations of values have been used in computer systems. For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A. Thus, a value includes any representation of information that can be stored in a computer system.
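The equivalence noted in the example above can be checked directly:

```python
# Decimal ten, binary 1010, and hexadecimal A all denote the same value.
assert 10 == 0b1010 == 0xA
```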
Further, a state may be represented by a value or a portion of a value. As an example, a first value (e.g., a logical one) may represent a default or initial state, while a second value (e.g., a logical zero) may represent a non-default state. Additionally, in one embodiment, the terms reset and set refer to a default value or state and an updated value or state, respectively. For example, the default value potentially comprises a high logic value (i.e., reset) and the updated value potentially comprises a low logic value (i.e., set). Note that any number of states may be represented using any combination of values.
The systems, methods, computer program products, and apparatus may include one or a combination of the following examples:
example 1 is an apparatus, comprising: a processor core; a processor memory for caching data; and platform firmware to: determine that a device is connected to the processor core across a memory link interface; determine that the device includes an attached memory unit; determine a range of at least a portion of the attached memory unit available to the processor core; and map the range of the portion of the attached memory unit to the processor memory; wherein the processor core is to cache data using the range of the portion of the attached memory unit and the processor memory.
Example 2 may include the subject matter of example 1, further comprising a memory link interface to couple the processor core with an attached memory unit, wherein the memory link interface comprises a link conforming to one of an Intel Accelerator Link (IAL) protocol, a GenZ-based protocol, or a CAPI-based protocol.
Example 3 may include the subject matter of any of examples 1-2, the platform firmware to receive a capability register block from the attached device, and wherein the platform firmware to determine a range of at least a portion of the attached memory from the capability register block.
Example 4 may include the subject matter of example 3, wherein the capability register block is a Designated Vendor-Specific Extended Capability (DVSEC) register block.
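For orientation, a DVSEC register block begins with the standard PCIe extended capability header, followed by two DVSEC-specific header words that carry the vendor ID, revision, structure length, and DVSEC ID. The sketch below unpacks those header words from a raw config-space snapshot; the bit-field layout follows the PCIe DVSEC definition, but the example byte values are invented for illustration:

```python
import struct

PCIE_EXT_CAP_ID_DVSEC = 0x0023  # extended capability ID assigned to DVSEC

def parse_dvsec(block: bytes) -> dict:
    """Unpack the three 32-bit header words of a DVSEC register block.

    Offset 0x0: PCIe Extended Capability Header
    Offset 0x4: DVSEC Header 1 (vendor ID, revision, length)
    Offset 0x8: DVSEC Header 2 (DVSEC ID)
    """
    hdr, hdr1, hdr2 = struct.unpack_from("<III", block, 0)
    return {
        "cap_id": hdr & 0xFFFF,             # 0x0023 for DVSEC
        "cap_version": (hdr >> 16) & 0xF,
        "next_cap_offset": (hdr >> 20) & 0xFFF,
        "vendor_id": hdr1 & 0xFFFF,         # vendor defining the structure
        "dvsec_revision": (hdr1 >> 16) & 0xF,
        "dvsec_length": (hdr1 >> 20) & 0xFFF,
        "dvsec_id": hdr2 & 0xFFFF,          # distinguishes vendor structures
    }

# Invented example block: DVSEC capability, vendor 0x8086, length 0x38, ID 0x2.
raw = struct.pack("<III",
                  PCIE_EXT_CAP_ID_DVSEC | (1 << 16),
                  0x8086 | (1 << 16) | (0x38 << 20),
                  0x0002)
cap = parse_dvsec(raw)
assert cap["cap_id"] == PCIE_EXT_CAP_ID_DVSEC
assert cap["vendor_id"] == 0x8086
```

Platform firmware would read such a block from the device's configuration space to learn which vendor-defined memory attributes follow the headers.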
Example 5 may include the subject matter of any of examples 1-4, the platform firmware to construct one or more Advanced Configuration and Power Interface (ACPI) tables using information received from the device or information received from an Extensible Firmware Interface (EFI) driver associated with the device.
Example 6 may include the subject matter of example 5, wherein the one or more ACPI tables comprise one or both of a System Resource Affinity Table (SRAT) or a Heterogeneous Memory Attribute Table (HMAT).
Example 7 may include the subject matter of example 6, wherein the HMAT table comprises a system-local latency and bandwidth information structure and a memory subsystem address range structure.
Example 8 may include the subject matter of example 7, wherein the system-local latency and bandwidth information structure comprises bandwidth and latency information of attached memory.
Example 9 may include the subject matter of example 7, wherein the memory subsystem address range structure includes fields to indicate a System Physical Address (SPA) base of the attached memory and a length of the attached memory space available to the processor core.
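The two fields called out in example 9 can be pictured with a simplified structure (an illustrative sketch, not the exact ACPI HMAT binary layout): a base SPA and a length together describe the window of attached memory exposed to the processor.

```python
from dataclasses import dataclass

@dataclass
class MemorySubsystemAddressRange:
    """Simplified stand-in for an HMAT memory subsystem address range
    entry: where the attached memory appears in the system physical
    address (SPA) map and how much of it is exposed."""
    spa_base: int   # system physical address at which the range starts
    length: int     # bytes of attached memory available to the core

    @property
    def spa_limit(self) -> int:
        # Last byte address covered by this range.
        return self.spa_base + self.length - 1

# An invented 4 GiB window of attached memory mapped at 256 GiB.
rng = MemorySubsystemAddressRange(spa_base=256 << 30, length=4 << 30)
assert rng.spa_limit == (260 << 30) - 1
```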
Example 10 may include the subject matter of any of examples 1-8, the platform firmware further to: determine that the attached memory is not initialized; and cause the attached memory to be initialized.
Example 11 is at least one non-transitory machine-accessible storage medium having instructions stored thereon, the instructions, when executed on a machine, cause the machine to: determine, by a driver associated with a device, that the device is connected across a link to a host processor; determine that the device includes a coherent memory; provide one or more attributes of the coherent memory to the host processor to map the coherent memory into system memory; determine that the coherent memory is not initialized; and initialize the coherent memory for use by the host processor to store data using the coherent memory of the device and a system memory.
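The driver flow of example 11 can be modeled in a few lines (a sketch only; the `Device` and `Platform` classes and their method names are invented stand-ins, not a real EFI or OS driver API):

```python
class Device:
    """Minimal stand-in for an attached device with coherent memory."""
    def __init__(self, size):
        self.size = size
        self.initialized = False
    def has_coherent_memory(self):
        return self.size > 0
    def memory_attributes(self):
        # Attributes the driver would report for ACPI table construction.
        return {"size": self.size, "latency_ns": 150, "bandwidth_gbps": 50}
    def initialize_memory(self):
        self.initialized = True

class Platform:
    """Stand-in for platform firmware that records the mapping request."""
    def __init__(self):
        self.mapped = None
    def map_memory(self, attrs):
        self.mapped = attrs

def bring_up_coherent_memory(device, platform):
    """Model of the driver flow: discover coherent memory, report its
    attributes so firmware can map it, and initialize it if needed."""
    if not device.has_coherent_memory():
        return False
    platform.map_memory(device.memory_attributes())
    if not device.initialized:
        device.initialize_memory()
    return True

dev, fw = Device(size=16 << 30), Platform()
assert bring_up_coherent_memory(dev, fw)
assert fw.mapped["size"] == 16 << 30
assert dev.initialized
```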
Example 12 may include the subject matter of example 11, wherein the instructions comprise an Extensible Firmware Interface (EFI) driver associated with the device.
Example 13 may include the subject matter of example 12, wherein the instructions, when executed, cause the machine to provide one or more bandwidth or latency attributes of the coherent memory to the platform firmware to construct one or more Advanced Configuration and Power Interface (ACPI) tables.
Example 14 may include the subject matter of example 13, wherein the one or more ACPI tables comprise one or more of a Heterogeneous Memory Attribute Table (HMAT) or a non-volatile dual in-line memory module firmware interface table (NFIT).
Example 15 may include the subject matter of any one of examples 11-14, wherein the instructions, when executed, cause the machine to notify the platform firmware that the coherent memory is available using a software call defined in an EFI-based initialization protocol.
Example 16 may include the subject matter of any of examples 11-15, wherein the instructions comprise an operating system driver associated with the device.
Example 17 may include the subject matter of example 16, wherein the instructions, when executed, cause the machine to notify the platform firmware that the coherent memory is available using one or more device-specific methods determined from the operating system driver.
Example 18 is a system, comprising: a host processor comprising one or more hardware processor cores; a system memory for caching data; a device connected to the host processor by a link; a coherent memory associated with the device; and platform firmware to: discover the device at system startup; determine one or more attributes of the coherent memory; and map at least a portion of the coherent memory into an address space with the system memory; wherein the host processor is to cache data using the system memory and the coherent memory.
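One way to visualize the mapping step in example 18 is to place the device's coherent memory at the first free system physical address above system memory, so the host sees one contiguous cacheable address space. This is a sketch under assumed parameters (the 1 GiB alignment is an illustrative choice); real firmware performs this via ACPI tables and platform-specific address decoders:

```python
def map_coherent_memory(system_mem_bytes: int, coherent_mem_bytes: int,
                        align: int = 1 << 30):
    """Return (base, limit) for the coherent memory window, placed at the
    first align-boundary at or above the top of system memory. Purely an
    illustrative model of the firmware mapping step."""
    base = (system_mem_bytes + align - 1) // align * align
    limit = base + coherent_mem_bytes - 1
    return base, limit

# 96 GiB of system RAM plus 16 GiB of device-attached coherent memory.
base, limit = map_coherent_memory(96 << 30, 16 << 30)
assert base == 96 << 30            # top of RAM is already 1 GiB aligned
assert limit == (112 << 30) - 1    # window ends just below 112 GiB
```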
Example 19 may include the subject matter of example 18, wherein the device comprises accelerator circuitry, implemented at least in part in hardware, to provide processing acceleration for the host processor.
Example 20 may include the subject matter of example 19, further comprising an accelerator link to couple the accelerator circuitry to the host processor.
Example 21 may include the subject matter of example 20, wherein the accelerator link is compliant with one of an Intel Accelerator Link (IAL) based protocol, a GenZ based protocol, or a CAPI based protocol.
Example 22 may include the subject matter of any one of examples 18-21, wherein the coherent memory comprises host managed device memory (HDM).
Example 23 may include the subject matter of any one of examples 18-22, the platform firmware to: receive a capability register block from the device, the capability register block indicating one or more attributes of the coherent memory; determine a memory size and address range available to the host processor from the capability register block; and map the memory size and address range available to the host processor into the address space along with the system memory.
Example 24 may include the subject matter of example 23, the platform firmware to: construct one or more Advanced Configuration and Power Interface (ACPI) tables based on the attributes received in the capability register block; and map the memory size and address range into an address space using the one or more ACPI tables.
Example 25 may include the subject matter of any of examples 18-24, further comprising a device driver associated with the device, the device driver to: initialize the coherent memory; and provide one or more Advanced Configuration and Power Interface (ACPI) table data fragments for the coherent memory to the platform firmware to facilitate constructing the one or more ACPI tables.
Example 26 is a method for configuring a coherent memory, the method performed by platform firmware of a host processor, the method comprising: determining that a device is connected to the host processor across a memory link interface; determining that the device includes an attached memory unit; determining a range of at least a portion of the attached memory unit available to the host processor; mapping the range of the portion of the attached memory unit to a processor memory; and wherein the host processor caches data using the processor memory and the range of the portion of the attached memory unit.
Example 27 is an apparatus, comprising: means for determining that a device is connected to a host processor across a memory link interface; means for determining that the device includes an attached memory unit; means for determining a range of at least a portion of the attached memory unit available to the host processor; means for mapping the range of the portion of the attached memory unit to a processor memory; and means for causing the host processor to cache data using the range of the portion of the attached memory unit and the processor memory.