Storage node, hybrid memory controller and method for controlling hybrid memory group


Note: This technology, "Storage node, hybrid memory controller and method for controlling hybrid memory group," was designed and created by 牛迪民 (Dimin Niu), 张牧天 (Mu-Tien Chang), 郑宏忠 (Hongzhong Zheng), 林璇渶, and 金寅东 on 2017-02-24. Its main content includes: A hybrid memory controller that performs: receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank; identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing the first CPU request in a buffer; receiving a second CPU request to write to/read from the hybrid memory bank; identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing the second CPU request in the buffer; generating, based on an arbitration policy, a first command corresponding to one of the first CPU request and the second CPU request for an associated one of a first target and a second target, and in response to generating the first command, generating a second command corresponding to the other of the first CPU request and the second CPU request for the associated other of the first target and the second target; and sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

1. A hybrid memory controller for controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the hybrid memory controller comprising:

a processor; and

a processor memory local to the processor, wherein the processor memory has instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to perform:

receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank;

identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request;

queuing the first CPU request in a buffer;

receiving a second CPU request to write to/read from the hybrid memory bank;

identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request;

queuing the second CPU request in the buffer;

generating, based on an arbitration policy, a first command corresponding to one of the first CPU request and the second CPU request for an associated one of the first target and the second target, and in response to generating the first command, generating a second command corresponding to the other of the first CPU request and the second CPU request for the associated other of the first target and the second target; and

sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

2. The hybrid memory controller of claim 1, wherein the instructions further cause the processor to perform:

identifying the volatile memory device and the non-volatile memory device by detecting associated Serial Presence Detect (SPD) data stored in each of the volatile memory device and the non-volatile memory device.

3. The hybrid memory controller of claim 2, wherein the identification of the volatile memory device and the non-volatile memory device occurs at boot time.

4. The hybrid memory controller of claim 2, wherein identifying the volatile memory device and the non-volatile memory device comprises: address mapping the volatile memory device and the non-volatile memory device.

5. The hybrid memory controller of claim 2, wherein the instructions further cause the processor to perform:

identifying timing parameters of the volatile memory device and the non-volatile memory device based on the associated SPD data; and

determining the arbitration policy based on the timing parameters.

6. The hybrid memory controller of claim 2, wherein the instructions further cause the processor to perform:

receiving a state feedback signal from the non-volatile memory device; and

determining the arbitration policy based on the state feedback signal.

7. The hybrid memory controller of claim 1, wherein the arbitration policy comprises a round-robin arbitration policy or a weighted round-robin arbitration policy based on unbalanced issue speeds of the first queue and the second queue.

8. The hybrid memory controller of claim 1, wherein the non-volatile memory device and the volatile memory device are in different memory banks of the same memory channel.

9. The hybrid memory controller of claim 1, wherein the non-volatile memory device and the volatile memory device are in different memory banks of the same memory rank.

10. The hybrid memory controller of claim 1, wherein the first command and the second command are generated according to a same standard volatile memory command set.

11. The hybrid memory controller of claim 1, wherein one of the first and second commands corresponding to the second target is generated according to a command set different from a standard volatile memory command set.

12. A storage node, comprising:

a hybrid memory bank comprising:

a non-volatile memory device; and

a volatile memory device coupled to the non-volatile memory device; and

a hybrid memory controller configured to perform data transfer to/from the volatile memory device and the non-volatile memory device through the same channel, the hybrid memory controller comprising:

a processor; and

a processor memory local to the processor, wherein the processor memory has instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to perform:

identifying the volatile memory device and a non-volatile memory device by detecting associated Serial Presence Detect (SPD) data stored in each of the volatile memory device and non-volatile memory device;

receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank;

identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request;

queuing the first CPU request in a buffer;

receiving a second CPU request to write to/read from the hybrid memory bank;

identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request;

queuing the second CPU request in the buffer;

determining an arbitration policy based on SPD data associated with the volatile memory device and the non-volatile memory device;

generating, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and, in response, generating a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets; and

sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

13. A method of controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the method comprising:

receiving, by a processor, a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank;

identifying, by the processor, the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request;

queuing, by the processor, the first CPU request in a buffer;

receiving, by the processor, a second CPU request to write to/read from the hybrid memory bank;

identifying, by the processor, the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request;

queuing, by the processor, the second CPU request in the buffer;

generating, by the processor, based on an arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and, in response, a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets; and

sending, by the processor, the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

14. The method of claim 13, further comprising:

identifying, by the processor, the volatile memory device and the non-volatile memory device by detecting associated Serial Presence Detect (SPD) data stored in each of the volatile memory device and the non-volatile memory device;

identifying, by the processor, timing parameters of the volatile memory device and non-volatile memory device based on the associated SPD data; and

determining, by the processor, the arbitration policy based on the timing parameters.

15. The method of claim 13, further comprising:

receiving, by the processor, a state feedback signal from the non-volatile memory device; and

determining, by the processor, the arbitration policy based on the state feedback signal.

16. The method of claim 13, wherein the non-volatile memory device and volatile memory device are in different memory banks of the same memory channel.

17. The method of claim 13, wherein the non-volatile memory device and the volatile memory device are in different memory banks of the same memory rank.

18. The method of claim 13, wherein the first and second commands are generated according to the same standard volatile memory command set.

19. The method of claim 13, wherein one of the first command and the second command corresponding to the second target is generated according to a command set different from a standard volatile memory command set.

20. A hybrid memory controller for controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the hybrid memory controller comprising:

an address mapper/decoder configured to: receive a first Central Processing Unit (CPU) request and a second CPU request to write to/read from the hybrid memory bank, identify the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request, and identify the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request;

a transaction queue configured to queue the received first CPU request and the received second CPU request;

an arbiter configured to determine an arbitration policy based on Serial Presence Detect (SPD) data associated with the volatile memory device and the non-volatile memory device; and

a scheduler configured to: generate, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets, and send the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

Technical Field

Aspects of the present invention relate to the field of memory devices and mechanisms for controlling the memory devices.

Background

Computer systems have historically employed a two-tier storage model comprising: fast, byte-addressable memory devices (i.e., volatile memory devices) that store temporary data, which is lost when the system is halted/rebooted/crashes, and slow, block-addressable storage devices (i.e., non-volatile storage devices) that permanently store persistent data, which can survive a system reboot/crash.

Volatile memory devices (also called synchronous memory devices) and non-volatile memory devices (also called asynchronous memory devices) have different timing parameters and employ different communication protocols, which makes it difficult to combine the two types of memory devices in one memory space controlled by a single controller. For example, volatile memory devices (such as dynamic random access memory, or DRAM) use fixed timing for performing their respective operations (e.g., read/write), while non-volatile memory devices (such as flash memory chips) use variable timing for performing various operations. Non-volatile memory devices are also used in transaction-based systems involving frequent handshakes between the controller and the memory device. However, using volatile memory devices in such environments is generally inefficient because of the bandwidth lost to frequent handshaking.

The above information disclosed in this Background section is only for enhancement of understanding of the present invention, and therefore it may contain information that does not form prior art already known to a person of ordinary skill in the art.

Disclosure of Invention

Aspects of embodiments of the present invention relate to an adaptive mechanism that multiplexes control logic between synchronous and asynchronous memory devices.

Aspects of embodiments of the invention relate to a hybrid memory controller and method for controlling a hybrid memory array including at least one volatile memory device and at least one non-volatile memory device. The volatile memory device and the non-volatile memory device may be at the same control channel or even occupy the same memory rank of a channel.

According to some embodiments of the present invention, there is provided a hybrid memory controller for controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the hybrid memory controller including: a processor; and a processor memory local to the processor, wherein the processor memory has instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to perform: receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank; identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing the first CPU request in a buffer; receiving a second CPU request to write to/read from the hybrid memory bank; identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing the second CPU request in the buffer; generating, based on an arbitration policy, a first command corresponding to one of the first CPU request and the second CPU request for an associated one of the first target and the second target, and in response to generating the first command, generating a second command corresponding to the other of the first CPU request and the second CPU request for the associated other of the first target and the second target; and sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

According to some embodiments of the invention, there is provided a storage node including: a hybrid memory bank and a hybrid memory controller. The hybrid memory bank includes: a non-volatile memory device; and a volatile memory device coupled to the non-volatile memory device. The hybrid memory controller is configured to perform data transfer to/from the volatile memory device and the non-volatile memory device through the same channel, the hybrid memory controller including: a processor; and a processor memory local to the processor, wherein the processor memory has instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to perform: identifying the volatile memory device and the non-volatile memory device by detecting associated Serial Presence Detect (SPD) data stored in each of the volatile memory device and the non-volatile memory device; receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank; identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing the first CPU request in a buffer; receiving a second CPU request to write to/read from the hybrid memory bank; identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing the second CPU request in the buffer; determining an arbitration policy based on the SPD data associated with the volatile memory device and the non-volatile memory device; generating, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and, in response, generating a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets; and sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

According to some embodiments of the present invention, there is provided a method of controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the method including: receiving, by a processor, a first Central Processing Unit (CPU) request to write to/read from the hybrid memory bank; identifying, by the processor, the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing, by the processor, the first CPU request in a buffer; receiving, by the processor, a second CPU request to write to/read from the hybrid memory bank; identifying, by the processor, the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing, by the processor, the second CPU request in the buffer; generating, by the processor, based on an arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and, in response, a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets; and sending, by the processor, the first and second commands to respective ones of the volatile and non-volatile memory devices.

According to some embodiments of the present invention, there is provided a hybrid memory controller for controlling a hybrid memory bank including a volatile memory device and a non-volatile memory device, the hybrid memory controller including: an address mapper/decoder configured to: receive a first Central Processing Unit (CPU) request and a second CPU request to write to/read from the hybrid memory bank, identify the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request, and identify the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; a transaction queue configured to queue the received first CPU request and the received second CPU request; an arbiter configured to determine an arbitration policy based on Serial Presence Detect (SPD) data associated with the volatile memory device and the non-volatile memory device; and a scheduler configured to: generate, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and a second command corresponding to the other of the first and second CPU requests for an associated other one of the first and second targets, and send the first and second commands to respective ones of the volatile memory device and the non-volatile memory device.

According to some embodiments of the present invention, there is provided a hybrid memory controller for controlling a hybrid memory array including a volatile memory device and a non-volatile memory device, the hybrid memory controller including: a processor; and a processor memory local to the processor, wherein the processor memory has instructions stored thereon that, when executed by the processor, cause the processor to perform: receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory array; identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing the first CPU request in a first buffer; receiving a second CPU request to write to/read from the hybrid memory array; identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing the second CPU request in a second buffer; generating, based on an arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets, and in response to generating the first command, generating a second command corresponding to the other of the first and second CPU requests for an associated other of the first and second targets; and sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

According to some embodiments of the invention, there is provided a storage node including: a hybrid memory array including: a non-volatile memory device; and a volatile memory device coupled to the non-volatile memory device; and a hybrid memory controller configured to perform data transfer to/from the volatile memory device and the non-volatile memory device through the same channel, the hybrid memory controller including: a processor; and a processor memory local to the processor, wherein the processor memory has instructions stored thereon that, when executed by the processor, cause the processor to perform: identifying the volatile memory device and the non-volatile memory device by detecting associated Serial Presence Detect (SPD) data stored in each of the volatile memory device and the non-volatile memory device; receiving a first Central Processing Unit (CPU) request to write to/read from the hybrid memory array; identifying the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing the first CPU request in a first buffer; receiving a second CPU request to write to/read from the hybrid memory array; identifying the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing the second CPU request in a second buffer; determining an arbitration policy based on the SPD data associated with the volatile memory device and the non-volatile memory device; generating, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets, and in response, generating a second command corresponding to the other of the first and second CPU requests for the associated other of the first and second targets; and sending the first command and the second command to respective ones of the volatile memory device and the non-volatile memory device.

According to some embodiments of the present invention, there is provided a method of controlling a hybrid memory array including a volatile memory device and a non-volatile memory device, the method including: receiving, by a processor, a first Central Processing Unit (CPU) request to write to/read from the hybrid memory array; identifying, by the processor, the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request; queuing, by the processor, the first CPU request in a first buffer; receiving, by the processor, a second CPU request to write to/read from the hybrid memory array; identifying, by the processor, the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; queuing, by the processor, the second CPU request in a second buffer; generating, by the processor, based on an arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and, in response, a second command corresponding to the other of the first and second CPU requests for an associated other of the first and second targets; and sending, by the processor, the first and second commands to respective ones of the volatile and non-volatile memory devices.

According to some embodiments of the present invention, there is provided a hybrid memory controller for controlling a hybrid memory array including a volatile memory device and a non-volatile memory device, the hybrid memory controller including: an address mapper/decoder configured to: receive a first Central Processing Unit (CPU) request and a second CPU request to write to/read from the hybrid memory array, identify the volatile memory device as a first target of the first CPU request by decoding and address mapping the first CPU request, and identify the non-volatile memory device as a second target of the second CPU request by decoding and address mapping the second CPU request; a first transaction queue configured to queue the received first CPU request; a second transaction queue configured to queue the received second CPU request; an arbiter configured to determine an arbitration policy based on SPD data associated with the volatile memory device and the non-volatile memory device; and a scheduler configured to: generate, based on the arbitration policy, a first command corresponding to one of the first and second CPU requests for an associated one of the first and second targets and a second command corresponding to the other of the first and second CPU requests for the associated other of the first and second targets, and send the first and second commands to respective ones of the volatile and non-volatile memory devices.

Drawings

The drawings illustrate exemplary embodiments of the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1 illustrates a block diagram of a hybrid memory system, according to some embodiments of the invention.

FIG. 2A illustrates a detailed block diagram of a hybrid memory controller in communication with a volatile/non-volatile memory device according to some embodiments of the invention.

FIG. 2B illustrates a detailed block diagram of a hybrid memory controller, according to some other embodiments of the invention.

FIG. 3 illustrates a process for controlling a hybrid memory array including volatile memory devices and non-volatile memory devices using a hybrid memory controller according to some embodiments of the present invention.

Detailed Description

In the following detailed description, certain exemplary embodiments of the present invention are shown and described, simply by way of illustration. As will be recognized by those of skill in the art, the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Descriptions of features or aspects within each exemplary embodiment should generally be considered available for other similar features or aspects in other exemplary embodiments. Like reference numerals refer to like elements throughout the specification.

FIG. 1 illustrates a block diagram of a hybrid memory system 100, according to some embodiments of the invention.

Referring to FIG. 1, a hybrid memory system 100 includes: a hybrid memory controller 110 having one or more memory communication channels (hereinafter referred to as "channels"), and a memory bank (e.g., a hybrid memory bank) 130 including a Volatile Memory (VM) device 132 and a non-volatile memory (NVM) device 134, which are coupled to and controlled by the hybrid memory controller 110 through the same channel 111, or which reside in the same addressable memory rank. Although FIG. 1 shows only a single VM device 132 and a single NVM device 134, this is merely for ease of illustration, and embodiments of the present invention are not limited thereto. For example, hybrid memory system 100 can include multiple volatile and/or non-volatile memory devices connected to hybrid memory controller 110 through the same channel 111 as VM device 132 and NVM device 134 and/or through different channels.

In some examples, the volatile memory device 132 (also referred to as a synchronous memory device) may exhibit a fixed latency (e.g., fixed read/write timing) and may include Random Access Memory (RAM), such as dynamic RAM (DRAM), static RAM, and so forth. In some examples, the non-volatile memory device 134 (also referred to as an asynchronous memory device) may exhibit variable latency (e.g., variable read/write timing) and may include NAND memory, NOR memory, vertical NAND memory, resistive memory, phase change memory, ferroelectric memory, spin-transfer torque memory, and so forth.

According to some embodiments, hybrid memory controller 110 employs an adaptive mechanism that multiplexes control logic between synchronous and asynchronous memory devices. In doing so, the hybrid memory controller 110 maps the memory coupled to it by identifying, through Serial Presence Detect (SPD) during the initial boot process, the one or more VM devices 132 and one or more NVM devices 134 that make up the memory bank 130. The SPD data retrieved (e.g., read) from each of the memory devices identifies the type and capacity of the device and provides information about what timing (e.g., the tCL/tWL times for reading or writing a byte of data, etc.) to use when accessing that particular memory device. Hybrid memory controller 110 operates VM device 132 and NVM device 134 differently according to the corresponding SPD data.
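
This boot-time identification step can be illustrated with a short sketch. The following Python snippet classifies each populated rank from its SPD contents; the byte layout (a device-type code followed by latency fields in nanoseconds), the type codes, and the numeric values are invented placeholders, not the actual JEDEC SPD format or the encoding used here.

```python
from dataclasses import dataclass

@dataclass
class DeviceInfo:
    is_volatile: bool   # True for a synchronous (e.g., DRAM) device
    tCL_ns: int         # read (CAS) latency reported by SPD
    tWL_ns: int         # write latency reported by SPD

def identify_device(spd_bytes: bytes) -> DeviceInfo:
    """Classify a memory device from its SPD EEPROM contents (hypothetical layout)."""
    device_type = spd_bytes[0]          # assumption: 0x01 = volatile, 0x02 = non-volatile
    return DeviceInfo(
        is_volatile=(device_type == 0x01),
        tCL_ns=spd_bytes[1],            # assumption: latency fields stored in nanoseconds
        tWL_ns=spd_bytes[2],
    )

# Boot-time scan with made-up SPD contents: rank 0 holds a DRAM module,
# rank 1 holds a (much slower) NVM module.
spd_map = {
    0: identify_device(bytes([0x01, 14, 12])),
    1: identify_device(bytes([0x02, 250, 250])),
}
```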

In some embodiments, the hybrid memory controller 110 manages the groups and/or channels of each volatile memory device 132 using a synchronous timing protocol (e.g., a synchronous DRAM timing protocol) or an asynchronous communication protocol, and manages the groups and/or channels of each non-volatile memory device 134 using an asynchronous communication protocol.

According to some embodiments, hybrid memory controller 110 may use a standard command set (e.g., a standard DRAM command set) to transmit instructions to each of VM devices 132 and a modified (or redesigned) standard command set or a new command set to transmit instructions to each of NVM devices 134.

FIG. 2A illustrates a detailed block diagram of a hybrid memory controller 110 in communication with a VM/NVM device 132/134 according to some embodiments of the present invention.

Referring to FIG. 2A, hybrid memory controller 110 includes SPD interface 112, address mapper/decoder 114, Volatile Memory (VM) transaction queue 116a, non-volatile memory (NVM) transaction queue 116b, arbiter 118, command queue 120, scheduler 122, and response queue 124.

During the boot process, SPD interface 112 may retrieve SPD data from VM/NVM device (also referred to simply as memory device) 132/134, where the SPD data may be stored in SPD Electrically Erasable Programmable Read Only Memory (EEPROM) 136 of memory device 132/134.

According to some embodiments, address mapper/decoder 114 identifies the type of memory device 132/134, i.e., determines whether memory device 132/134 is a volatile (e.g., synchronous) memory device or a non-volatile (e.g., asynchronous) memory device. The address mapper/decoder 114 decodes a memory address into, for example, a rank, a bank, a row, and a column ID (e.g., an index). This may be done by slicing the memory address (e.g., extracting a portion of the address bits and discarding the remaining bits). In the case where each rank contains a single device type, the address mapper/decoder 114 may use the rank ID to identify the device type. In the case of a hybrid rank (e.g., one containing both VM and NVM devices), the address mapper/decoder 114 may use the rank ID and the bank ID to identify the device type.
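
A minimal sketch of this address slicing is shown below, assuming a single address layout with illustrative field widths; a real controller would derive the widths and bit positions from the SPD-based mapping established at boot, and a hybrid rank would additionally consult the bank ID.

```python
# Illustrative field widths only (1 rank bit, 4 bank bits, 16 row bits, 10 column bits).
RANK_BITS, BANK_BITS, ROW_BITS, COL_BITS = 1, 4, 16, 10

def decode_address(addr: int):
    """Slice a physical address into (rank, bank, row, column) indices."""
    col = addr & ((1 << COL_BITS) - 1)
    addr >>= COL_BITS
    row = addr & ((1 << ROW_BITS) - 1)
    addr >>= ROW_BITS
    bank = addr & ((1 << BANK_BITS) - 1)
    addr >>= BANK_BITS
    rank = addr & ((1 << RANK_BITS) - 1)
    return rank, bank, row, col

def targets_volatile(addr: int, rank_is_volatile: dict) -> bool:
    """With one device type per rank, the rank ID alone identifies the target."""
    rank, _bank, _row, _col = decode_address(addr)
    return rank_is_volatile[rank]

# Example: rank 0 is volatile and rank 1 is non-volatile, so an address whose
# rank bit is set decodes to the NVM device.
print(targets_volatile(0x4000_0000, {0: True, 1: False}))  # False -> NVM target
```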

In some embodiments, when hybrid memory controller 110 receives a request (e.g., a write or read request) from a Central Processing Unit (CPU), address mapper/decoder 114 decodes the CPU request to determine whether the CPU request maps to an address corresponding to VM device 132 or NVM device 134, and forwards the decoded CPU request to a corresponding one of VM transaction queue 116a and NVM transaction queue 116b.

In some embodiments, hybrid memory controller 110 uses a dedicated VM transaction queue 116a for storing CPU requests (e.g., VM transactions/requests) that reference memory addresses associated with locations in VM device 132, and uses a dedicated NVM transaction queue 116b for storing CPU requests (e.g., NVM transactions/requests) that reference memory addresses associated with locations in NVM device 134. Having separate VM and NVM transaction queues provides arbitration options to the arbiter 118 and may enhance the performance of the hybrid memory controller 110, as described in further detail later. According to some examples, VM transaction queue 116a may itself include (e.g., be divided into) multiple VM transaction queues, each associated with a different VM rank of the memory bank 130. Similarly, NVM transaction queue 116b may itself include (e.g., be divided into) multiple NVM transaction queues, each associated with a different NVM rank of the memory bank 130.

Arbiter 118 determines a processing/retrieval order (e.g., priority) of VM and NVM CPU requests maintained in respective ones of VM and NVM transaction queues 116a and 116b based on an arbitration policy, and queues retrieved VM and NVM CPU requests in command queue 120 based on the determined processing order.

The arbitration policy may be defined and updated by the basic input/output system (BIOS) and/or the SPD data during system boot. For example, the arbitration policy may follow a round-robin protocol (where, for example, the arbiter 118 processes a VM CPU request, then an NVM CPU request, then a VM CPU request, and so on). In some embodiments, the arbitration policy may prioritize entries from VM transaction queue 116a, because VM devices tend to have lower access latency than NVM devices. According to some other embodiments, a weighted round-robin arbitration policy takes into account the unbalanced issue ratio between VM and NVM transactions. Arbiter 118 may obtain NVM and VM timing parameters from SPD interface 112 and determine the fetch ratio between VM transaction queue 116a and NVM transaction queue 116b. For example, assuming that the NVM device has 20 times greater latency than the VM device, 20 VM CPU requests can be processed during one NVM device activation, so the fetch ratio can be set to 20:1.
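
The weighted round-robin fetch described above might look roughly like the following sketch, which hard-codes the 20:1 example ratio; in the controller the ratio would be computed from the SPD timing parameters, and the queue contents here are placeholder strings.

```python
from collections import deque

def weighted_round_robin(vm_queue: deque, nvm_queue: deque, ratio: int = 20):
    """Fetch up to `ratio` VM requests for every NVM request fetched."""
    while vm_queue or nvm_queue:
        for _ in range(ratio):
            if not vm_queue:
                break
            yield ("VM", vm_queue.popleft())
        if nvm_queue:
            yield ("NVM", nvm_queue.popleft())

vm_q = deque(f"vm_req_{i}" for i in range(40))
nvm_q = deque(f"nvm_req_{i}" for i in range(2))
command_order = list(weighted_round_robin(vm_q, nvm_q))  # 20 VM fetches per NVM fetch
```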

According to some embodiments, the arbitration policy may be determined based on a state feedback signal 119 received from memory device 132/134. The state feedback signal 119 may indicate whether the memory device 132/134 is available, busy, etc., and in the case of NVM device 134 may even indicate an operation being performed by the device (e.g., garbage collection, etc.), an estimate of when the operation may end, write credits (e.g., the number of unoccupied entries in NVM transaction queue 116b), a cache hit/miss rate when there is a cache in NVM device 134, and so on. In some examples, arbiter 118 may reduce the fetch speed from NVM transaction queue 116b when state feedback signal 119 indicates a pending NVM activation. Furthermore, arbiter 118 may issue only VM requests while NVM device 134 is busy, until feedback signal 119 indicates that the NVM device is idle again. In some examples, when the write credits are large, the arbiter 118 may increase the speed at which NVM requests are issued (e.g., increase the issuance ratio of NVM requests to VM requests), while if the write credits are small, the arbiter 118 may correspondingly decrease the speed at which NVM requests are issued (e.g., decrease the issuance ratio of NVM requests to VM requests).
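
As a rough sketch of how the feedback signal could modulate the fetch ratio, the function below throttles NVM issuance while the device is busy and scales it with the available write credits; the thresholds and scaling factors are arbitrary illustrative values.

```python
def nvm_issue_ratio(base_ratio: int, nvm_busy: bool, write_credits: int) -> int:
    """Return how many NVM requests to issue per arbitration round."""
    if nvm_busy:
        return 0                        # issue only VM requests until the NVM is idle again
    if write_credits >= 16:
        return base_ratio * 2           # plenty of queue space: issue NVM requests faster
    if write_credits <= 2:
        return max(1, base_ratio // 2)  # queue nearly full: slow NVM issuance down
    return base_ratio
```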

The scheduler 122 may, for example, retrieve transactions queued in the command queue 120 on a first-in-first-out (FIFO) basis. Scheduler 122 then uses the SPD data (e.g., rank and/or channel ID) corresponding to the retrieved transaction (e.g., corresponding to the VM or NVM device 132/134 targeted by the retrieved transaction) to generate the appropriate command for the retrieved transaction. According to some embodiments, when the retrieved transaction is a VM CPU request, VM timing (e.g., DDR4 timing) may be used in generating the corresponding command, while when the retrieved transaction is an NVM CPU request, a transaction-based communication protocol, such as row address strobe to column address strobe (RAS-CAS) or the like, and certain NVM timing parameters received from the SPD interface 112 may be used to generate the corresponding command.

According to some embodiments, scheduler 122 uses status feedback signal 119 from memory device 132/134 to schedule NVM commands with the appropriate timing. In some embodiments, scheduler 122 may not use feedback signal 119 when issuing VM commands because VM device 132 is a synchronous device and exhibits fixed or preset timing. For example, after activating a memory row, hybrid memory controller 110 may wait a fixed period of time before issuing a write/read command to write/read data. However, because NVM device 134 is asynchronous and exhibits non-fixed timing, scheduler 122 uses feedback signal 119 for timing NVM commands. For example, after activating NVM device 134, hybrid memory controller 110 may not know when to issue a subsequent command until it receives feedback signal 119.
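
The timing distinction between the two device types can be summarized in a small decision function: a VM command is released a fixed number of cycles after activation, while an NVM command waits on the feedback signal. The cycle-based interface is an assumption made for this sketch.

```python
def may_issue_after_activate(is_volatile: bool, cycles_since_activate: int,
                             fixed_delay_cycles: int, nvm_ready: bool) -> bool:
    """Return True once the read/write command may follow the activate command."""
    if is_volatile:
        # Synchronous device: wait a fixed, SPD-derived number of cycles after ACT.
        return cycles_since_activate >= fixed_delay_cycles
    # Asynchronous device: only the status feedback signal tells the controller
    # that the device is ready for the next command.
    return nvm_ready
```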

According to some embodiments, scheduler 122 issues commands to NVM device 134 by reusing a standard VM command set (e.g., a DRAM command set). For example, scheduler 122 sends the same activate, read, and write (ACT, RD, and WR) commands to VM device 132 and NVM device 134, and a Register Clock Driver (RCD) 138 within memory device 132/134 parses the received commands according to its device characteristics and performs the associated action (e.g., activate, read from memory cells 140, or write to memory cells 140).

In some embodiments, scheduler 122 issues commands to NVM device 134 using a different command set than is used with VM device 132. For example, scheduler 122 may send standard DDR ACT, RD, and WR commands to VM device 132, while newly defined ACT_new, RD_new, and WR_new commands may be sent to NVM device 134. For example, low-high combinations of the command pins (e.g., /CS, BG, BA, A9-A0) at memory device 132/134 that are not used by a standard command set (e.g., the DDR4 command set) may be used to define a new command set for use with NVM device 134. In such an embodiment, NVM device 134 is modified accordingly to be able to parse the new command set. According to some embodiments, the new command set may be sent along the same memory bus (e.g., DDR memory bus) as the standard VM command set.
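
A sketch of the per-target command-set selection is shown below. The ACT/RD/WR names follow the text; the "_new" opcodes are just the labels mentioned above, and how they would actually be encoded onto unused command-pin combinations is not modeled here.

```python
VM_COMMANDS = {"activate": "ACT", "read": "RD", "write": "WR"}
NVM_COMMANDS = {"activate": "ACT_new", "read": "RD_new", "write": "WR_new"}

def encode_command(target_is_volatile: bool, op: str) -> str:
    """Pick the opcode sent on the shared memory bus for this transaction."""
    table = VM_COMMANDS if target_is_volatile else NVM_COMMANDS
    return table[op]

# Under the reuse scheme of the previous paragraph the two tables would be
# identical, and the RCD on each module would interpret ACT/RD/WR according
# to its own device characteristics.
```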

Data read from memory device 132/134 by hybrid memory controller 110 in response to a CPU read request is stored in response queue 124 before being sent to the system CPU.

FIG. 2B illustrates a detailed block diagram of hybrid memory controller 110-1 according to some embodiments of the invention. The hybrid memory controller 110-1 may be the same or substantially the same as the hybrid memory controller 110 described above with respect to FIG. 2A, except for the hybrid transaction queue 116-1.

Referring to FIG. 2B, rather than using separate transaction queues for queuing VM and NVM CPU requests, the hybrid memory controller 110-1 uses a hybrid transaction queue 116-1 for storing both types of transactions.

In some embodiments, when hybrid memory controller 110-1 receives a CPU request (e.g., a write or read request), address mapper/decoder 114-1 decodes the CPU request to determine whether the CPU request maps to an address corresponding to VM device 132 or NVM device 134, tags the decoded CPU request as either a VM CPU request or an NVM CPU request to identify the corresponding VM device 132 or NVM device 134, and forwards the tagged request to hybrid transaction queue 116-1.

According to some embodiments, arbiter 118-1 processes/fetches the VM CPU requests and NVM CPU requests queued in hybrid transaction queue 116-1 on a first-in-first-out (FIFO) basis, regardless of the type of CPU request. In some other embodiments, arbiter 118-1 combs through the queued transactions and uses the tags to identify VM requests and NVM requests. The arbiter 118-1 determines a processing/retrieval order (e.g., priority) of the VM CPU requests and the NVM CPU requests according to the arbitration policy described with respect to FIG. 2A, and queues the retrieved VM CPU requests and NVM CPU requests in the command queue 120 according to the determined processing order. The hybrid memory controller 110-1 may then process the transactions queued in the command queue 120, as described above with respect to FIG. 2A.
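
The single-queue variant can be sketched as follows: each decoded request carries a tag identifying its target type, so the arbiter can either drain the queue in plain FIFO order or comb it by tag. The field names and the simplistic rank extraction are assumptions of the sketch.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:
    target: str      # "VM" or "NVM", set by the address mapper/decoder
    is_write: bool
    address: int

hybrid_queue = deque()

def enqueue(address: int, is_write: bool, rank_is_volatile: dict, rank_shift: int = 30):
    """Tag the decoded request with its target type, then queue it."""
    rank = (address >> rank_shift) & 0x1   # simplistic single-bit rank extraction
    target = "VM" if rank_is_volatile[rank] else "NVM"
    hybrid_queue.append(Transaction(target, is_write, address))

def fetch_next():
    """Plain FIFO fetch regardless of tag (the first variant described above)."""
    return hybrid_queue.popleft() if hybrid_queue else None
```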

FIG. 3 illustrates a process 300 for controlling a hybrid memory bank 130 including a volatile memory device 132 and a non-volatile memory device 134 using a hybrid memory controller 110/110-1, in accordance with some embodiments of the present invention.

In act S302, SPD interface 112 identifies VM device 132 and NVM device 134 in hybrid memory bank 130 by detecting associated SPD data stored in each of the VM device and the NVM device (e.g., stored in the corresponding SPD EEPROM 136). In some examples, the identification of VM device 132 and NVM device 134 may include mapping the addresses of memory devices 132 and 134 (e.g., determining channel, rank, and bank IDs) and identifying their respective timing parameters (e.g., activation, write, and read times). This process may occur at system boot.

In act S304, the address mapper/decoder 114/114-1 receives a first Central Processing Unit (CPU) request and a second CPU request to write to/read from the hybrid memory bank 130.

In act S306, by decoding and address mapping the first CPU request and the second CPU request, the address mapper/decoder 114/114-1 identifies the VM device as a first target of the first CPU request and the NVM device as a second target of the second CPU request. In some examples, VM device 132 and NVM device 134 may be in different memory banks of the same memory channel 111. VM device 132 and NVM device 134 may also be in different memory banks of the same memory rank.

In act S308, address mapper/decoder 114/114-1 queues the first CPU request in a first buffer (e.g., VM transaction queue 116a) and queues the second CPU request in a second buffer (e.g., NVM transaction queue 116b). In some examples, the first queue may be dedicated to VM transactions/CPU requests, while the second queue may be dedicated to NVM transactions/CPU requests. In some examples, the dedicated first and second queues may be separate from each other (i.e., with no logical address overlap). In other embodiments, the first queue and the second queue may constitute the same queue (e.g., hybrid transaction queue 116-1).

In act S310, the hybrid memory controller 110/110-1 (e.g., arbiter 118/118-1 and scheduler 122) generates, based on the arbitration policy, a first command corresponding to one of the first CPU request and the second CPU request for an associated one of the first target and the second target, and then generates a second command corresponding to the other of the first CPU request and the second CPU request for the associated other of the first target and the second target. According to some examples, the arbitration policy may include a round-robin arbitration policy or a weighted round-robin arbitration policy based on the unbalanced issue speeds of the first queue and the second queue. In some examples, arbiter 118 may determine the arbitration policy based on timing parameters and/or state feedback signals 119 from memory devices 132 and 134. According to some embodiments, the first command and the second command may be generated according to the same standard volatile memory command set (e.g., the DDR4 command set). In other embodiments, the one of the first command and the second command that corresponds to the second target is generated according to a command set different from the standard volatile memory command set.

In act S312, scheduler 122 sends the first command and the second command to respective ones of VM device 132 and NVM device 134.

Accordingly, embodiments of the present invention propose an adaptive mechanism for multiplexing control logic for synchronous or asynchronous memory devices.

SPD interface 112, address mapper/decoder 114/114-1, the transaction and command queues, arbiter 118/118-1, and scheduler 122, and more generally hybrid memory controller 110/110-1, may be implemented using any suitable hardware (e.g., an application-specific integrated circuit), firmware, software, or a suitable combination of software, firmware, and hardware. For example, the various components of hybrid memory controller 110/110-1, such as SPD interface 112, address mapper/decoder 114/114-1, arbiter 118/118-1, and scheduler 122, may be formed on one Integrated Circuit (IC) chip or on separate IC chips. Further, the various components of hybrid memory controller 110/110-1 may be processes or threads running on one or more processors in one or more computing devices that execute computer program instructions and interact with other system components to perform the various functions described herein. The computer program instructions may be stored in a memory, which may be implemented in a computing device using standard memory devices, such as, for example, Random Access Memory (RAM).

In the appended claims, the processor and processor memory represent a combination of SPD interface 112, address mapper/decoder 114/114-1, arbiter 118/118-1, scheduler 122, and transaction and command queues.

It should be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the spirit and scope of the present inventive concept.

The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting of the inventive concept. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Furthermore, the use of "may" in describing embodiments of the inventive concept refers to "one or more embodiments of the inventive concept."

It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or one or more intervening elements may be present. When an element is referred to as being "directly connected to" or "directly coupled to" another element, there are no intervening elements present.

As used herein, the terms "using," "using," and "used" may be considered synonymous with the terms "utilizing," "utilizing," and "utilized," respectively.

Although the present invention has been described in detail with particular reference to illustrative embodiments thereof, the embodiments described herein are not intended to be exhaustive or to limit the scope of the invention to the precise forms disclosed. Those skilled in the art to which the invention pertains will appreciate that variations and changes in the described structures and methods of assembly and operation may be made without meaningfully departing from the principles, spirit and scope of the invention, as set forth in the following claims and their equivalents.
