Inter-device processing system with cache coherency

Document No.: 1963669  Publication date: 2021-12-14

Note: This technology, "Inter-device processing system with cache coherency," was created by 段立德, 陈彦光, 刘宏宇, and 郑宏忠 on 2021-08-13. Abstract: When sharing a cache line of data between devices within an inter-device processing system, the devices maintain data coherency in the last level caches of the system by utilizing a directory in one of the devices that tracks the coherency protocol state of memory addresses in the last level caches of the system.

1. A processing system, the processing system comprising:

a first device having a first cache, the first device outputting a first request to read requested data when the first cache of the first device does not have a valid version of the requested data, the first request having a memory address;

a second device coupled to the first device, the second device having a coherency directory, the second device checking the coherency directory in response to the first request and outputting a first fetch instruction to fetch data when the coherency directory indicates that none of the devices has a cache line holding a valid copy of the requested data, wherein the first request is output by the first device to only the second device; and

a third device coupled to the first device and the second device, the third device having a third cache and a non-cache memory, the third device outputting the requested data from the non-cache memory to only the second device in response to the first fetch instruction,

wherein the second device forwards the requested data to the first device and updates a coherency state of the memory address in the coherency directory from an invalid state to a shared state to indicate that the first device shares a copy of the requested data.

2. The processing system of claim 1, wherein the first device updates a coherency state from an invalid state to a shared state to indicate that the first device shares a valid copy of the requested data.

3. The processing system of claim 1, further comprising:

a fourth device having a fourth cache, the fourth device outputting a second request to only the second device to read the requested data when the fourth cache of the fourth device does not have a valid version of the requested data, the second request having the memory address.

4. The processing system of claim 3, wherein the second device checks the coherency directory in response to the second request and outputs a second fetch instruction to the first device to fetch the requested data when the coherency directory indicates that the first cache of the first device has a cache line holding a valid copy of the requested data.

5. The processing system of claim 4, wherein:

the first device outputting the requested data to the second device in response to the second fetch instruction; and

the second device forwarding the requested data to the fourth device,

wherein the fourth device changes a coherency state from an invalid state to a shared state after receiving the requested data.

6. The processing system of claim 1, wherein:

the first device outputting a second request to only the second device to write new data to a cache line in the first cache having the memory address; and

the second device checks the coherency directory in response to the second request and updates the coherency state of the memory address in the coherency directory from a shared state to a modified state to instruct the first device to modify the requested data when the coherency directory indicates that only the first device has a cache line holding a valid copy of the requested data.

7. The processing system of claim 5, wherein:

the fourth device outputting a third request to only the second device to write new data to the cache line in the fourth cache having the memory address; and

the second device checks the coherency directory in response to the third request and updates the coherency state of the memory address in the coherency directory from a shared state to a modified state to instruct the fourth device to modify the requested data when the coherency directory indicates that the first device has a cache line holding a valid copy of the requested data.

8. The processing system of claim 7, wherein the second device outputs an invalidation message to the first device when the fourth device modifies the requested data.

9. A method of operating an inter-device processing system, the method comprising:

receiving a first request with a memory address to read requested data when a cache of a local device does not have a valid version of the requested data;

checking a coherency directory with a host device in response to the first request, and when the coherency directory indicates that none of the devices has a cache line holding a valid copy of the requested data:

determining a home device associated with the memory address from the coherency directory, and

outputting a fetch instruction to the home device to fetch the requested data from non-cache memory, the first request addressed only to the host device, the fetch instruction addressed only to the home device;

receiving the requested data from the home device;

updating a coherency state of the memory address in the coherency directory from an invalid state to a shared state; and

forwarding the requested data to the local device.

10. The method of claim 9, further comprising accepting a second request with the memory address from a remote device to read the requested data when the remote device does not have a valid version of the requested data.

11. The method of claim 10, further comprising checking the coherency directory in response to the second request, and outputting a second fetch instruction to the local device to fetch the requested data when the coherency directory indicates that the cache of the local device has a cache line holding a valid copy of the requested data.

12. The method of claim 11, wherein:

the local device outputting the requested data to the host device in response to the second fetch instruction;

the host device forwarding the requested data to the remote device; and

the remote device changing a coherency state from an invalid state to a shared state after receiving the requested data.

13. The method of claim 9, further comprising:

receiving a second request with the memory address from the local device to write new data to a cache line in a cache of the local device; and

checking the coherency directory in response to the second request and, when the coherency directory indicates that only the local device has a cache line holding a valid copy of the requested data, updating the coherency state of the memory address in the coherency directory from a shared state to a modified state to indicate to the local device to modify the requested data.

14. The method of claim 12, further comprising:

receiving a third request from the remote device with the memory address to write new data to a cache line in a cache of the remote device; and

checking the coherency directory in response to the third request, and updating the coherency state of the memory address in the coherency directory from a shared state to a modified state to instruct the remote device to modify the requested data when the coherency directory indicates that the local device has a cache line holding a valid copy of the requested data.

15. The method of claim 14, further comprising outputting an invalidation message to the local device when the remote device modifies the requested data.

Technical Field

The present invention relates to an inter-device processing system, and more particularly, to an inter-device processing system with cache coherency.

Background

An inter-device processing system is a system that includes a host processor and a number of dedicated devices, such as a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), and a Solid-State Drive (SSD), coupled together by an external bus, such as a Peripheral Component Interconnect Express (PCIe) bus.

In addition, the host processor and the dedicated devices each have memory, which together form the total memory space of the system. For example, a memory space extending from A to E can include memory range A to B for the host processor, memory range B to C for the GPU, memory range C to D for the FPGA, and memory range D to E for the SSD. Many dedicated devices include level one (L1) caches, level two (L2) caches, and main memory.

Further, the host processor and the dedicated devices share and modify data between each other. For example, the host processor can access and use or modify data stored in the memory space of the GPU, FPGA, and SSD, while the GPU can access and use or modify data stored in the memory space of the host processor, FPGA, and SSD.

When sharing data among many devices, it is important to maintain data coherency, i.e., to ensure that all copies of the data are the same. The PCIe protocol includes semantics (instruction names) for transferring data from, for example, a GPU to a host processor or from a host processor to a GPU.

To maintain coherency under the PCIe protocol, a programmer writing code must track where valid and invalid copies of the data are stored to ensure that all copies of the data remain the same. Thus, one drawback of the PCIe approach is that writing code, such as multi-threaded programs, is labor intensive, due in part to the time required to track the location of valid data.

Further, when transferring data from the L2 cache of one device to the L2 cache of another device, the minimum amount of data that can be transferred is one page, which is typically equal to 64 cache lines (4 KB). Another disadvantage is therefore excessive bus traffic: 64 cache lines (one page) must be transferred even when only a few cache lines are actually needed.

Compute Express Link (CXL) is a PCIe-based method of communicating between a host processor and a number of dedicated devices with sharable L2 cache memory. However, developing a method to maintain L2 cache coherency among the various devices is left to the programmer.

Therefore, a method of maintaining cache coherency among the L2 caches of the host processor and the dedicated devices is needed.

Disclosure of Invention

The present invention simplifies programming and reduces the bus traffic required to transfer cache lines between devices in an inter-device processing system. The processing system of the present invention includes a first device having a first cache. The first device outputs a first request to read requested data when the first cache of the first device does not have a valid version of the requested data. The processing system also includes a second device coupled to the first device. The second device has a coherency directory. The second device checks the coherency directory in response to the first request and outputs a first fetch instruction to fetch the requested data when the coherency directory indicates that none of the devices has a cache line holding a valid copy of the requested data. The first request is output by the first device to only the second device. The processing system also includes a third device coupled to the first device and the second device. The third device has a third cache and a non-cache memory. The third device outputs the requested data from the non-cache memory to only the second device in response to the first fetch instruction. The second device forwards the requested data to the first device and updates a coherency state of the memory address in the coherency directory from an invalid state to a shared state to indicate that the first device shares a copy of the requested data.

The invention also includes a method of operating an inter-device processing system. The method includes receiving a first request with a memory address to read requested data when a cache of a local device does not have a valid version of the requested data. The method also includes checking a coherency directory with the host device in response to the first request, and when the coherency directory indicates that none of the devices has a cache line holding a valid copy of the requested data: a home device associated with the memory address is determined from the coherency directory, and a fetch instruction is output to the home device to fetch the requested data from the non-cache memory. The first request is addressed only to the host device. The fetch instruction is addressed only to the home device. The method further includes: receiving the requested data from the home device; updating a coherency state of the memory address in the coherency directory from an invalid state to a shared state; and forwarding the requested data to the local device.

The present invention also includes a non-transitory computer-readable storage medium having program instructions embedded therein which, when executed by one or more processors of a device, cause the device to perform a method of operating an inter-device processing system. The method includes receiving a first request with a memory address to read requested data when a cache of a local device does not have a valid version of the requested data. The method also includes checking a coherency directory with the host device in response to the first request, and when the coherency directory indicates that no device has a cache line holding a valid copy of the requested data: a home device associated with the memory address is determined from the coherency directory, and a fetch instruction is output to the home device to fetch the requested data from the non-cache memory. The first request is addressed only to the host device. The fetch instruction is addressed only to the home device. The method further includes: receiving the requested data from the home device; updating a coherency state of the memory address in the coherency directory from an invalid state to a shared state; and forwarding the requested data to the local device.

A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings.

Drawings

The accompanying drawings are included to provide a further understanding of the present application and are incorporated in and constitute a part of this application. The exemplary embodiments of the present application and their description are used to illustrate the present application and do not constitute limitations of the present application.

FIG. 1 is a block diagram illustrating an example of an inter-device processing system 100 in accordance with the present invention.

FIG. 2 is a block diagram illustrating an example of a coherency update to the processing system 100 in accordance with the present invention.

FIG. 3 is a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention.

FIG. 4 is a block diagram illustrating an example of a coherency update to the processing system 100 in accordance with the present invention.

FIG. 5 is a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention.

FIG. 6 is a block diagram illustrating an example of an update to the processing system 100 in accordance with the present invention.

FIG. 7 is a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention.

FIG. 8 is a block diagram illustrating an example of an update to the processing system 100 in accordance with the present invention.

FIG. 9 is a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention.

FIG. 10 is a flow chart illustrating an example of a method 1000 of operating an inter-device processing system in accordance with the present invention.

Detailed Description

FIG. 1 shows a block diagram illustrating an example of an inter-device processing system 100 in accordance with the present invention. As shown in FIG. 1, the processing system 100 includes a host processor 110, a first dedicated device 112, such as a Graphics Processing Unit (GPU), a second dedicated device 114, such as a Field Programmable Gate Array (FPGA), and a third dedicated device 116, such as a Solid-State Drive (SSD).

In addition, processing system 100 also includes an external bus 118, such as a peripheral component interconnect express (PCIe) bus, that couples host processor 110, first dedicated device 112, second dedicated device 114, and third dedicated device 116 together. The bus 118 can be implemented using any suitable electrical, optical, or wireless technology.

Host processor 110 has a memory 120 that includes a main memory 122 and a cache memory 124. The cache memory 124, in turn, includes a number of levels, including one or more lower levels (LL) 124-1 and a last level 124-2. Similarly, the first dedicated device 112, the second dedicated device 114, and the third dedicated device 116 have memories 130, 140, and 150, respectively.

Memory 130 includes main memory 132 and cache memory 134. The cache memory 134 includes a number of levels, including one or more lower levels 134-1 and a last level 134-2. Memory 140 includes main memory 142 and cache memory 144. The cache memory 144 includes a number of levels, including one or more lower levels 144-1 and a last level 144-2. Memory 150 includes main memory 152 and cache memory 154. Cache memory 154 includes a number of levels, including one or more lower levels 154-1 and a last level 154-2.

In memory with two cache levels, the lower level has an L1 cache, whereas the last level has an L2 cache coupled to the L1 cache and main memory in a conventional manner. In memory having three cache levels, the lower levels have an L1 cache and an L2 cache coupled together in a conventional manner, whereas the last level has an L3 cache coupled to the L2 cache and main memory in a conventional manner.

In memory having four cache levels, the lower levels have an L1 cache, an L2 cache, and an L3 cache coupled together in a conventional manner, whereas the last level has an L4 cache coupled to the L3 cache and main memory in a conventional manner. By way of example, an L1 cache typically stores 50KB of data, an L2 cache typically stores 500KB of data, and main memory typically stores 10GB of data. Other cache and main memory sizes can also be used.

As further shown in FIG. 1, each of the last level caches 124-2, 134-2, 144-2, 154-2 has a number of cache lines, where each cache line includes a memory address, a modified-shared-invalid (MSI) cache coherency indicator, and data. Each cache line can include additional entries as needed. In the MSI protocol, each cache line is tagged with one of three different coherency states, "M" for modified, "S" for shared, and "I" for invalid.

When a cache line is marked with an "M," the data in the cache line has been modified and the cache line holds the only valid copy. When a cache line is marked with an "S," the data in the cache line is one of possibly many unmodified copies, and the data cannot be modified while in the S state. When a cache line is marked with an "I," the data in the cache line is invalid. Various extensions of the MSI protocol, such as MESI and MOSI, as well as other coherency protocols, can alternatively be used.
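
By way of illustration only, the cache-line layout described above can be modeled with a short Python sketch; the field names and state encoding are assumptions of this sketch, not definitions from the specification:

```python
from dataclasses import dataclass

# The three MSI coherency states described above.
MODIFIED, SHARED, INVALID = "M", "S", "I"

@dataclass
class CacheLine:
    address: int           # memory address tag
    state: str = INVALID   # MSI cache coherency indicator
    data: bytes = b""      # cached data; valid only in the M or S state

line = CacheLine(address=0x1000)
assert line.state == INVALID  # a line starts invalid until it is filled
line.state, line.data = SHARED, b"\x2a"  # after receiving a shared copy
```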

As further shown in FIG. 1, host processor 110 additionally includes a number of CPU cores 128 coupled to corresponding lower level caches 124-1. Four cores are shown for illustration purposes only; other numbers of cores can alternatively be used. Similarly, the first, second, and third dedicated devices 112, 114, and 116 have processors 138, 148, and 158 coupled to the lower level caches 134-1, 144-1, and 154-1, respectively.

As additionally shown in FIG. 1, host processor 110 also includes a home agent 160 with dedicated processing logic and a memory that maintains a coherency directory 162. Directory 162 includes a list of memory addresses and, for each memory address, a memory-address-home indicator, an MSI cache coherency indicator, and a pointer. Each memory address in coherency directory 162 can include additional entries as needed, but the directory does not store the associated data.

The memory space is divided such that, for example, a first address range is stored in host main memory 122, a second address range is stored in the main memory 132 of dedicated device 112, a third address range is stored in the main memory 142 of dedicated device 114, and a fourth address range is stored in the main memory 152 of dedicated device 116.

The memory-address-home indicator, in turn, identifies the main memory that includes the memory address. For example, a memory address located within the address range of dedicated device 114 has dedicated device 114 as its home. Optionally, to save memory space, the memory-address-home indicator can be omitted, and the home agent 160 can calculate the home from a given memory address. In addition, when a cache line of a last level cache stores a valid copy of the data, the pointer in coherency directory 162 identifies the cache holding that valid copy.
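
A minimal sketch of a directory entry and of calculating the home from a memory address, assuming the illustrative A-to-E range partition described in the background section (the device names and address ranges are hypothetical):

```python
# Hypothetical contiguous address ranges, one per device's main memory.
HOME_RANGES = [
    ("host", 0x0000, 0x1000),   # A..B
    ("gpu",  0x1000, 0x2000),   # B..C
    ("fpga", 0x2000, 0x3000),   # C..D
    ("ssd",  0x3000, 0x4000),   # D..E
]

def home_of(address: int) -> str:
    """Calculate the memory-address home instead of storing an indicator."""
    for device, lo, hi in HOME_RANGES:
        if lo <= address < hi:
            return device
    raise ValueError(f"address {address:#x} is outside the memory space")

# A directory entry tracks state and sharer pointers but never the data itself.
directory = {0x2040: {"state": "I", "sharers": set()}}
assert home_of(0x2040) == "fpga"
```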

In operation, the host processor 110 and the dedicated devices 112, 114, and 116 share and modify data between each other while maintaining cache coherency. When a processor (core 128, processor 138, processor 148, or processor 158) outputs a request to read data associated with memory address "X," the request first goes to the associated lower level cache (124-1, 134-1, 144-1, or 154-1, respectively), which provides the data when present and valid. When the data is not present or is invalid in the lower level cache, the request is diverted to the associated last level cache (124-2, 134-2, 144-2, or 154-2, respectively), which provides the data when present and valid.

When the MSI state in the associated last level cache is marked with an "I" to indicate that the data associated with memory address X is invalid or absent, the device of the requesting processor forwards the read request only to the home agent 160 (no broadcast). The home agent 160 checks the coherency directory 162 in response to the read request, and when the MSI state of the memory address in the coherency directory 162 is marked with an "I" to indicate that none of the last level caches has a cache line holding a valid copy of the data at memory address X, the home agent 160 determines the home device associated with memory address X, either by reading the memory-address-home indicator or by calculating the home from memory address X.

Thereafter, the home agent 160 outputs a fetch instruction only to the home device's main memory (no broadcast), which outputs the requested data associated with memory address X back only to the home agent 160 (no broadcast). The home agent 160 then forwards the data to the last level cache of the processor that requested the data.

For example, when the processor 138 outputs a request to read data associated with memory address "X," the request first goes to the lower level cache 134-1, which provides the data when present and valid. When the data is not present or is invalid in the lower level cache 134-1, the request is diverted to the last level cache 134-2, which provides the data when present and valid.

As shown in FIG. 1, when the MSI state in the last level cache 134-2 is marked with an "I" to indicate that the data associated with memory address X is invalid, the memory 130 forwards the request over the bus 118 to the home agent 160 only. The home agent 160 checks the coherency directory 162, and when the MSI state of the memory address in the home agent 160 is marked with an "I" to indicate that none of the last level caches have a cache line holding a valid copy of the data at memory address X, the home agent 160 determines that the home of the memory address X is within a memory range associated with, for example, the main memory 142 of the second dedicated device 114.

Thereafter, the home agent 160 outputs a fetch instruction only to the second dedicated device 114, whose main memory 142 outputs the data associated with memory address X back to the home agent 160. The home agent 160 then forwards the data to the first dedicated device 112 to be provided to the processor 138.
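
The read-miss path just described can be sketched as follows, assuming dictionary-based models of the coherency directory and of each device's main memory (all names are illustrative):

```python
def read_when_invalid(directory, main_memories, home_of, requester, address):
    """Home-agent behavior when the directory shows state 'I' for the address:
    fetch from the home device's non-cache memory, record the new sharer,
    move the entry from invalid to shared, and return the data to forward."""
    entry = directory.setdefault(address, {"state": "I", "sharers": set()})
    assert entry["state"] == "I", "this sketch covers only the invalid case"
    home = home_of(address)                 # e.g., dedicated device 114
    data = main_memories[home][address]     # fetch instruction to the home only
    entry["state"] = "S"                    # invalid -> shared
    entry["sharers"].add(requester)         # pointer to the requester's LLC
    return data                             # forwarded only to the requester

# Example: device 112 misses on address 0x2040, homed in device 114's memory.
directory = {}
main_memories = {"device_114": {0x2040: b"\x2a"}}
data = read_when_invalid(directory, main_memories,
                         lambda a: "device_114", "device_112", 0x2040)
assert directory[0x2040] == {"state": "S", "sharers": {"device_112"}}
```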

FIG. 2 shows a block diagram illustrating an example of a coherency update to the processing system 100 in accordance with the present invention. In the example of FIG. 2, the first dedicated device 112 outputs the read request, and memory address X is homed in the main memory 142 of the second dedicated device 114.

As shown in FIG. 2, home agent 160 updates both the MSI status of memory address X in the coherency directory 162 from invalid I to shared S and the last level (LL) pointer field to point to dedicated device 112. In addition, the first dedicated device 112 updates the MSI state of memory address X in the last level cache 134-2 from invalid I to shared S.

FIG. 3 shows a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention. FIG. 3 is illustrated with the semantics (instruction names) described in the CXL specification. As shown in FIG. 3, requester 1 (e.g., dedicated device 112 in the example of FIG. 2) starts in the "I" state and outputs CXL.cache RdShared instructions only to home agent 160 to request the data.

When the home agent 160 is also in the I state, the home agent 160 sends CXL.mem MemRd (memory read) and SnpData (snoop data) instructions to the home of the memory address (dedicated device 114 in the example of FIG. 2), which in turn responds with CXL.mem MemData (memory data).

Home agent 160 updates the state from I to S, adds a pointer to requester 1 (dedicated device 112; sharers = {local device}) in the coherency directory 162, and outputs CXL.cache H2D data to requester 1 (dedicated device 112), which updates the MSI protocol state from I to S in the last level cache 134-2.

Thus, one of the advantages of the present invention is that it allows as little as one cache line to be transferred from one dedicated device to another, which significantly reduces bus traffic compared to transferring a full page of cache lines. In addition, no programming intervention is required: the programmer need only insert the read instruction, and need not manually track protocol states while coding to ensure cache coherency.

Referring again to FIG. 2 and continuing the above example, when the processor 158 of the third dedicated device 116 outputs a request to read memory address "X," the request first goes to the lower level cache 154-1, which provides the data when present and valid. When the data is not present or is invalid in the lower level cache 154-1, the request is diverted to the last level cache 154-2, which provides the data when present and valid.

As shown in FIG. 2, when the MSI state of the cache line for memory address X in the last level cache 154-2 is marked with an "I" to indicate that the data associated with memory address X is invalid, the memory 150 forwards the request only to the home agent 160. In response to all read requests, the home agent 160 checks the coherency directory 162, and when the MSI state of the memory address is marked with an "S" to indicate that one or more of the last level caches hold a valid copy of the data associated with memory address X, the home agent 160 determines from the pointer which device has a valid copy, the first dedicated device 112 in this example.

Thereafter, the home agent 160 outputs a fetch instruction only to the first dedicated device 112, whose last level cache 134-2 outputs the data associated with memory address X back to the home agent 160. The home agent 160 then forwards the data to the third dedicated device 116 to be provided to the processor 158.

FIG. 4 shows a block diagram illustrating an example of a coherency update to the processing system 100 in accordance with the present invention. As shown in FIG. 4, the home agent 160 leaves the MSI state for memory address X at shared S, but updates the pointer in the coherency directory 162 to also point to dedicated device 116. In addition, the memory 150 updates the MSI state of memory address X in the last level cache 154-2 from invalid I to shared S.

Another advantage of the present invention is that data is obtained from a cache line of the last level cache 134-2 much faster than from the main memory 142. In this example, the last level caches 124-2, 134-2, 144-2, and 154-2 are implemented in RAM, whereas the main memories are implemented in a much slower memory type, such as a hard disk drive.

FIG. 5 shows a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention. FIG. 5 is also illustrated with the semantics (instruction names) described in the CXL specification. As shown in FIG. 5, requester 2, such as dedicated device 116, starts in the "I" state and outputs CXL.cache RdShared instructions to the home agent 160 to request the cache line data as before.

When the home agent 160 is in the S state, the home agent 160 sends CXL.mem MemRd (memory read) and SnpData (snoop data) instructions to the device identified by the pointer as having a valid copy of the requested data (dedicated device 112), which in turn responds with CXL.mem MemData (memory data). The home agent 160 keeps the state at S, adds dedicated device 116 to the pointer (sharers += {local device}), and outputs CXL.cache H2D data to dedicated device 116, which updates the MSI state from I to S.
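
A sketch of this shared-read path, in which the home agent snoops a current sharer's last level cache instead of the home device's main memory (names are illustrative):

```python
def read_when_shared(directory, llc_caches, requester, address):
    """Home-agent behavior when the directory shows state 'S': fetch the data
    from a last level cache named in the pointer, keep the state at 'S',
    and add the requester to the sharer set."""
    entry = directory[address]
    assert entry["state"] == "S", "this sketch covers only the shared case"
    owner = next(iter(entry["sharers"]))   # any sharer holds a valid copy
    data = llc_caches[owner][address]      # snoop that cache, not main memory
    entry["sharers"].add(requester)        # sharers += {local device}
    return data                            # forwarded only to the requester

directory = {0x2040: {"state": "S", "sharers": {"device_112"}}}
llc_caches = {"device_112": {0x2040: b"\x2a"}}
read_when_shared(directory, llc_caches, "device_116", 0x2040)
assert directory[0x2040]["sharers"] == {"device_112", "device_116"}
```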

Referring again to FIG. 4 and continuing the above example, when the processor 138 outputs a request to write data to a cache line of memory address "X," the request is directed to the last level cache 134-2, which determines whether the last level cache 134-2 has permission to write the data, i.e., whether the cache line of memory address X is in the M state. When in the M state, the last level cache 134-2 accepts write data.

When the MSI state in the last level cache 134-2 is marked with S (or I) to indicate that the data associated with memory address X is shared (or invalid), the memory 130 forwards the write request over the bus 118 only to the home agent 160. In response to all write requests, the home agent 160 checks the coherency directory 162, and when the MSI state of memory address X is marked with an "S" to indicate that a last level cache is sharing a valid copy of the data at memory address X (or marked with an "I" to indicate that no cache has a valid copy), the home agent 160 changes the state in the coherency directory 162 to modified "M".

When only the last level cache 134-2 has a valid copy (or no last level cache has a valid copy), the home agent 160 sends an authorization to the last level cache 134-2, which changes the MSI protocol state to M, and then accepts the write data from the processor 138. Thereafter, the last level cache 134-2 writes the data to the home agent 160, which in turn writes the data to the main memory of the home device (the dedicated device 114 in this example).

When the processor 158 (but not the processor 138) outputs a request to write data to a cache line at memory address "X," the request is directed to the last level cache 154-2, which determines whether the last level cache 154-2 has permission to write the data, i.e., whether the cache line at memory address X is in the M state. When in the M state, the last level cache 154-2 accepts write data.

When the MSI state in the last level cache 154-2 is marked with an "S" to indicate that the data associated with memory address X is shared, the memory 150 forwards the request over the bus 118 only to the home agent 160. The home agent 160 checks the coherency directory 162, and when the MSI state of memory address X is marked with an "S" to indicate that a last level cache is sharing a valid copy of the data at memory address X, the home agent 160 changes the state in the coherency directory 162 to modified "M" and deletes dedicated device 112 from the pointer.

In addition, the home agent 160 sends an invalidation message to dedicated device 112 (the sharing device), which changes the MSI protocol state to invalid "I" in the last level cache 134-2. Further, the home agent 160 sends an authorization to the last level cache 154-2, which changes the MSI protocol state to M and then accepts the write data from the processor 158.

FIG. 6 shows a block diagram illustrating an example of an update to the processing system 100 in accordance with the present invention. As shown in FIG. 6, the home agent 160 changes the MSI status of memory address X in the coherency directory 162 from shared S to modified M and updates the pointer to remove dedicated device 112. In addition, the memory 130 updates the MSI status of memory address X in the last level cache 134-2 to invalid I. The home agent 160 also sends an authorization to dedicated device 116, which writes the data to the last level cache 154-2 and updates the MSI protocol state in the last level cache 154-2 from shared S to modified M. Thereafter, the last level cache 154-2 writes the data to the home agent 160, which in turn writes the data to the home device's main memory (the main memory 142 of dedicated device 114 in this example).

FIG. 7 shows a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention. FIG. 7 is also illustrated with the semantics (instruction names) described in the CXL specification. As shown in FIG. 7, dedicated device 116 starts in the "S" protocol state and outputs CXL.cache MemWr (memory write) or ItoMWr instructions to the home agent 160 to request permission to write new data into the cache line of memory address X.

When the home agent 160 is in the S protocol state, the home agent 160 sends CXL.mem MemInv (memory invalidate) and SnpInv (snoop invalidate) instructions to the sharing device in the pointer (dedicated device 112), which in turn responds by changing the MSI protocol state in the last level cache 134-2 from shared S to invalid I.

Home agent 160 also updates the MSI protocol state in the coherency directory 162 from shared S to modified M. In addition, the home agent 160 sends an authorization to dedicated device 116, which writes the data to the last level cache 154-2 and updates the MSI protocol state in the last level cache 154-2 from shared S to modified M.
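
A sketch of this write path, in which the home agent invalidates every other sharer, moves the directory entry to modified, and authorizes the writer (names are illustrative):

```python
def write_request(directory, llc_states, writer, address):
    """Home-agent behavior for a write request: snoop-invalidate all other
    sharers, update the directory entry from 'S' (or 'I') to 'M', and leave
    the writer as the only device named in the pointer."""
    entry = directory.setdefault(address, {"state": "I", "sharers": set()})
    for sharer in entry["sharers"] - {writer}:
        llc_states[sharer][address] = "I"   # invalidation message to sharer
    entry["state"] = "M"                    # shared/invalid -> modified
    entry["sharers"] = {writer}             # only the writer keeps a copy
    llc_states[writer][address] = "M"       # authorization: writer may write

directory = {0x2040: {"state": "S", "sharers": {"device_112", "device_116"}}}
llc_states = {"device_112": {0x2040: "S"}, "device_116": {0x2040: "S"}}
write_request(directory, llc_states, "device_116", 0x2040)
assert llc_states["device_112"][0x2040] == "I"   # old sharer invalidated
assert directory[0x2040] == {"state": "M", "sharers": {"device_116"}}
```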

Referring again to FIG. 6 and continuing the above example, when the CPU core 128 outputs a request to read data associated with memory address "X", the request first goes to the associated lower level cache 124-1, which provides the data when present and valid. When not present or invalid in the lower level cache 124-1, the request is diverted to the last level cache 124-2, which provides data when present and valid.

When the MSI state in the last level cache 124-2 is marked with an "I" to indicate that the data associated with memory address X is invalid or absent, the memory 120 forwards the request only to the home agent 160. The home agent 160 checks the coherency directory 162 and, when the MSI state of the memory address is marked with an "M" to indicate that only one last level cache holds a valid copy of the data associated with memory address X, determines from the pointer that dedicated device 116 has the valid copy.

Thereafter, the home agent 160 outputs a fetch instruction only to the third dedicated device 116, whose last level cache 154-2 outputs the data associated with memory address X back to the home agent 160. The home agent 160 receives the requested data and forwards it to the memory 120 to be provided to the CPU core 128.

FIG. 8 shows a block diagram illustrating an example of an update to the processing system 100 in accordance with the present invention. As shown in FIG. 8, home agent 160 updates the MSI state of memory address X in the coherency directory 162 from modified M to shared S, and updates the pointer to also point to host processor 110. In addition, dedicated device 116 updates the MSI state of memory address X in the last level cache 154-2 from modified M to shared S, and the host processor 110 updates the MSI state in the last level cache 124-2 from invalid I to shared S.

FIG. 9 shows a timing diagram further illustrating an example of the operation of the processing system 100 in accordance with the present invention. FIG. 9 is also illustrated with the semantics (instruction names) described in the CXL specification. As shown in FIG. 9, the last level cache 124-2 begins in the "I" state and outputs CXL.cache RdShared instructions to the home agent 160 to request the cache line data as before.

When the memory address in the coherency directory 162 of the home agent 160 is in the M state, the home agent 160 sends CXL.mem SnpData (snoop data) instructions to the device identified by the pointer (dedicated device 116), which in turn responds with CXL.cache D2H data. Home agent 160 changes the protocol state from M to S, adds the host processor 110 to the pointer (sharers += {local device}), and outputs CXL.cache H2D data to the last level cache 124-2, which updates the MSI state from I to S. Dedicated device 116 also changes its MSI state from M to S.
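
A sketch of this modified-read path, in which the single owner supplies the data and both the owner and the directory entry are downgraded from M to S (names are illustrative):

```python
def read_when_modified(directory, llc_caches, llc_states, requester, address):
    """Home-agent behavior when the directory shows state 'M': snoop the single
    owner for the data, downgrade the owner and the entry from M to S, and add
    the requester as a second sharer."""
    entry = directory[address]
    assert entry["state"] == "M", "this sketch covers only the modified case"
    (owner,) = entry["sharers"]            # exactly one valid copy exists
    data = llc_caches[owner][address]      # D2H data from the owner's LLC
    llc_states[owner][address] = "S"       # owner: modified -> shared
    entry["state"] = "S"                   # directory: modified -> shared
    entry["sharers"].add(requester)        # sharers += {local device}
    return data

directory = {0x2040: {"state": "M", "sharers": {"device_116"}}}
llc_caches = {"device_116": {0x2040: b"\x2b"}}
llc_states = {"device_116": {0x2040: "M"}}
read_when_modified(directory, llc_caches, llc_states, "host_110", 0x2040)
assert directory[0x2040] == {"state": "S", "sharers": {"device_116", "host_110"}}
```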

FIG. 10 shows a flow diagram illustrating an example of a method 1000 of operating an inter-device processing system in accordance with the present invention. As shown in FIG. 10, method 1000 begins at step 1010 by receiving a first request with a memory address to read requested data when a cache of a local device (e.g., 112) does not have a valid version of the requested data; . Method 1000 next moves to 1012 to check a coherency directory with a host device (e.g., 110) in response to the first request, and when the coherency directory indicates that none of the devices has a cache line with a valid copy of the requested data, method 1000 moves to 1014 to determine a home device (e.g., 114) associated with the memory address from the coherency directory and output a fetch instruction to the home device to fetch the requested data from the non-cached memory. The first request is addressed only to the host device. The fetch instruction is addressed only to the home device.

Thereafter, the method 1000 moves to 1016 to receive the requested data from the home device. Next, method 1000 moves to 1018 to update the coherency state of the memory address in the coherency directory from an invalid state to a shared state, and then moves to 1020 to forward the requested data to the local device.
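
The steps of method 1000 map onto a single home-agent routine; a sketch under the same assumptions as the earlier fragments, with step numbers from FIG. 10 in the comments:

```python
def method_1000(directory, main_memories, home_of, local_device, address):
    """One pass through method 1000 at the host device's home agent."""
    # 1010: receive the first request (memory address) from the local device.
    entry = directory.setdefault(address, {"state": "I", "sharers": set()})
    # 1012: check the coherency directory.
    if entry["state"] == "I":
        # 1014: determine the home device and fetch from non-cache memory.
        home = home_of(address)
        # 1016: receive the requested data from the home device.
        data = main_memories[home][address]
        # 1018: update the coherency state from invalid to shared.
        entry["state"] = "S"
        entry["sharers"].add(local_device)
        # 1020: forward the requested data to the local device.
        return data
    raise NotImplementedError("shared/modified paths are sketched earlier")

main_memories = {"device_114": {0x2040: b"\x2a"}}
assert method_1000({}, main_memories, lambda a: "device_114",
                   "device_112", 0x2040) == b"\x2a"
```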

Reference will now be made in detail to the various embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. From the examples provided, additional examples of the movement of data between the host processor 110 and the dedicated devices 112, 114, and 116, and of the coherency process, can be readily understood.

While described in conjunction with various embodiments, it is to be understood that these various embodiments are not intended to limit the present disclosure. On the contrary, the present disclosure is intended to cover alternatives, modifications, and equivalents, which may be included within the scope of the present disclosure as construed according to the claims.

Furthermore, in the previous detailed description of various embodiments of the disclosure, numerous specific details were set forth in order to provide a thorough understanding of the disclosure. However, one of ordinary skill in the art will recognize that the disclosure can be practiced without these specific details or with equivalents thereof. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the various embodiments of the present disclosure.

It should be noted that although the methods may be described herein as a sequence for clarity, the sequence does not necessarily dictate the order of the operations. It should be understood that some of the operations may be skipped, performed in parallel, or performed without maintaining the strict order of the sequences.

The drawings showing various embodiments in accordance with the disclosure are semi-diagrammatic and not to scale; in particular, some of the dimensions are exaggerated in the figures for clarity of presentation. Similarly, although the views in the drawings generally show similar orientations for ease of description, this depiction in the various figures is largely arbitrary. In general, various embodiments in accordance with the present disclosure can be operated in any orientation.

Some portions of the detailed description are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art.

In the present disclosure, a procedure, logic block, process, etc., is conceived to be a self-consistent sequence of operations or instructions leading to a desired result. The operations are those utilizing physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computing system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, samples, pixels, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present disclosure, discussions utilizing terms such as "generating," "determining," "assigning," "aggregating," "utilizing," "virtualizing," "processing," "accessing," "executing," "storing," or the like, refer to the actions and processes of a computer system, or similar electronic computing device or processor.

A computing system or similar electronic computing device or processor manipulates and transforms data represented as physical (electronic) quantities within the computer system memories, registers, other such information storage devices, and/or other computer readable media into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.

The technical solution in the embodiments of the present application has been clearly and completely described in the previous sections with reference to the drawings of the embodiments of the present application. It should be noted that the terms "first," "second," and the like in the description of the invention, in the claims, and in the above drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged as appropriate to enable the embodiments of the invention described herein to be implemented in an order other than that illustrated or described herein.

The functions described in the operations and methods of the present embodiments can be implemented in logic or in software and processing units. If implemented in the form of software functional units and sold or used as a stand-alone product, they can be stored in a computing-device-readable storage medium. Based on this understanding, the embodiments of the present application, or the part of the technical solution that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium, comprising a plurality of instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application. The foregoing storage media include media capable of storing program code, such as a USB drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Various embodiments in the specification of the present application are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; the same or similar parts among the various embodiments may be referred to one another. The described embodiments are only a part, and not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments of the present application are within the scope of the present application.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

It should be understood that the foregoing description is illustrative of the invention and that various alternatives to the invention described herein may be employed in practicing the invention. It is therefore intended that the following claims define the scope of the invention and that structures and methods within the scope of these claims and their equivalents be covered thereby.
