Adaptive read-ahead cache manager based on detected active flow of read commands

Document No.: 1804176 · Publication date: 2021-11-05

Note: This technology, "Adaptive read-ahead cache manager based on detected active flow of read commands", was created by D. A. Palmer on 2020-02-28. Abstract: A method for managing a read-ahead cache in a memory subsystem based on one or more active flows of read commands is described. The method includes receiving a read command requesting data from a memory component, and determining whether the read command is part of an active flow of read commands based on a comparison of a set of addresses of the read command to one or more of: (1) a command history table that stores a set of command entries that each correspond to a received read command that has not been associated with an active flow, or (2) an active flow table that stores a set of flow entries that each correspond to an active flow of read commands. The method further includes modifying a flow entry in the set of flow entries in response to determining that the read command is part of an active flow.

1. A method for managing a read-ahead cache in a memory subsystem based on one or more active flows of read commands, the method comprising:

receiving a first read command from a host system requesting data from a memory component of the memory subsystem;

determining whether the first read command is part of an active flow of read commands based on a comparison of a set of addresses of the first read command to one or more of: (1) a command history table that stores a set of command entries that each correspond to a received read command that has not been associated with an active flow of read commands, or (2) an active flow table that stores a set of flow entries that each correspond to an active flow of read commands detected by the memory subsystem; and

modifying a flow entry in the set of flow entries in response to determining that the first read command is part of an active flow of read commands being tracked with the flow entry in the active flow table.

2. The method of claim 1, wherein determining whether the first read command is part of an active flow of read commands comprises:

determining whether the set of addresses of the first read command is sequential to addresses of flow entries in the set of flow entries,

wherein a flow entry in the set of flow entries is modified in response to determining that the set of addresses of the first read command is sequential to an address of the flow entry.

3. The method of claim 2, wherein modifying the flow entry includes one or more of: (1) modifying a start address of the active flow of read commands represented by the flow entry based on the set of addresses of the first read command, (2) modifying an end address of the active flow based on the set of addresses of the first read command, (3) modifying a latest command size based on the set of addresses of the first read command, (4) modifying a direction of the active flow, the direction indicating whether the host system is requesting progressively lower numerical addresses or progressively higher numerical addresses, (5) modifying a last modification indication corresponding to a modification time of the flow entry, (6) modifying a read-ahead cache allocation corresponding to an amount of space in a read-ahead cache allocated to the active flow, or (7) modifying a command count corresponding to a number of read commands represented by the active flow.

4. The method of claim 2, wherein determining whether the first read command is part of an active flow of read commands further comprises:

in response to determining that the set of addresses of the first read command and the addresses of the set of flow entries are non-sequential, determining whether the set of addresses of the first read command is sequential to addresses of command entries in the set of command entries; and

adding a new flow entry to the set of flow entries in response to determining that the set of addresses of the first read command is sequential to addresses of a command entry in the set of command entries corresponding to a second read command, wherein the new flow entry represents the first read command and the second read command as an active flow of read commands.

5. The method of claim 4, further comprising:

in response to determining that the set of addresses of the first read command and the addresses of the set of command entries are non-sequential, adding a command entry for the first read command to the command history table.

6. The method of claim 1, further comprising:

allocating space in the read-ahead cache for each flow entry in the set of flow entries; and

populating the read-ahead cache with data for each flow entry in the set of flow entries based on the allocated space in the read-ahead cache.

7. The method of claim 6, wherein allocating space in the read-ahead cache for each flow entry in the set of flow entries is based on one or more of: (1) a latest command size indicating a number of addresses in a read command that most recently modified the corresponding flow entry, or (2) a command count indicating a number of read commands represented by the corresponding flow entry.

8. The method of claim 6, wherein allocating space in the read-ahead cache comprises:

determining an allocation request for each flow entry in the set of flow entries;

determining whether the allocation request is requesting more space than the size of the read-ahead cache; and

adjusting the allocation request in accordance with a policy of the memory subsystem.

9. The method of claim 8, wherein the policy includes one of: (1) a fairness policy that allocates equal space in the read-ahead cache to each flow entry in the set of flow entries, (2) a large flow preference policy that allocates more space to larger flow entries than to smaller flow entries, or (3) a small flow preference policy that allocates more space to smaller flow entries than to larger flow entries.

10. A system, comprising:

a read-ahead cache;

a memory component; and

a processing device coupled to the memory component and the read-ahead cache, configured to:

determine to allocate space in the read-ahead cache for each active flow of read commands represented in an active flow table, wherein the active flow table includes a set of flow entries, and each flow entry of the set of flow entries represents a separate active flow of read commands, wherein, when a first read command is received by the system, a first set of addresses of the first read command is compared to a second set of addresses of a flow entry of the set of flow entries to determine whether the first set of addresses is sequential to the second set of addresses, such that, when the first set of addresses and the second set of addresses are sequential, the second set of addresses of the flow entry is modified to include the first set of addresses;

populate the read-ahead cache with data from the memory component based on the allocation to each flow entry in the set of flow entries; and

attempt to satisfy a second read command based on the data stored in the read-ahead cache.

11. The system of claim 10, wherein the determination to allocate space in the read-ahead cache for each active flow of read commands represented in the active flow table is based on one or more of: (1) a latest command size indicating a number of addresses in a read command that most recently modified a respective active flow, or (2) a command count indicating a number of read commands represented by the respective active flow.

12. The system of claim 10, wherein determining to allocate space in the read-ahead cache for each active flow of read commands represented in the active flow table comprises:

determining an allocation request for each active flow of read commands represented in the active flow table;

determining whether the allocation request is requesting more space than the size of the read-ahead cache; and

adjusting the allocation request in accordance with a policy of the memory subsystem.

13. The system of claim 12, wherein the policy includes one of: (1) a fairness policy that allocates equal space in the read-ahead cache to each of the active flows, (2) a large flow preference policy that allocates more space to larger active flows than to smaller active flows, or (3) a small flow preference policy that allocates more space to smaller active flows than to larger active flows.

14. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to:

receive a first read command from a host system requesting data from a memory component;

determine whether the first read command is part of an active flow of read commands based on a comparison of a set of addresses of the first read command to one or more of: (1) a command history table that stores a set of command entries that each correspond to a received read command that has not been associated with an active flow of read commands, or (2) an active flow table that stores a set of flow entries that each correspond to an active flow of read commands detected by a memory subsystem; and

modify a flow entry in the set of flow entries in response to determining that the first read command is part of an active flow of read commands being tracked with the flow entry in the active flow table.

15. The non-transitory computer-readable medium of claim 14, wherein determining whether the first read command is part of an active flow of read commands comprises:

determining whether the set of addresses of the first read command is sequential to addresses of flow entries in the set of flow entries,

wherein a flow entry in the set of flow entries is modified in response to determining that the set of addresses of the first read command is sequential to an address of the flow entry.

16. The non-transitory computer-readable medium of claim 15, wherein modifying the flow entry includes one or more of: (1) modifying a start address of the active flow of read commands represented by the flow entry based on the set of addresses of the first read command, (2) modifying an end address of the active flow based on the set of addresses of the first read command, (3) modifying a latest command size based on the set of addresses of the first read command, (4) modifying a direction of the active flow, the direction indicating whether the host system is requesting progressively lower numerical addresses or progressively higher numerical addresses, (5) modifying a last modification indication corresponding to a modification time of the flow entry, (6) modifying a read-ahead cache allocation corresponding to an amount of space allocated to the active flow in a read-ahead cache, or (7) modifying a command count corresponding to a number of read commands represented by the active flow.

17. The non-transitory computer-readable medium of claim 15, wherein determining whether the first read command is part of an active flow of read commands further comprises:

in response to determining that the set of addresses of the first read command and the addresses of the set of flow entries are non-sequential, determining whether the set of addresses of the first read command is sequential to addresses of command entries in the set of command entries; and

adding a new flow entry to the set of flow entries in response to determining that the set of addresses of the first read command is sequential to addresses of a command entry in the set of command entries corresponding to a second read command, wherein the new flow entry represents the first read command and the second read command as an active flow of read commands.

18. The non-transitory computer-readable medium of claim 17, wherein the instructions further cause the processing device to:

in response to determining that the set of addresses of the first read command and the addresses of the set of command entries are non-sequential, add a command entry for the first read command to the command history table.

19. The non-transitory computer-readable medium of claim 14, wherein the instructions further cause the processing device to:

allocate space in the read-ahead cache for each flow entry in the set of flow entries; and

populate the read-ahead cache with data for each flow entry in the set of flow entries based on the allocated space in the read-ahead cache,

wherein allocating space in the read-ahead cache for each flow entry in the set of flow entries is based on one or more of: (1) a latest command size indicating a number of addresses in a read command that most recently modified the corresponding flow entry, or (2) a command count indicating a number of read commands represented by the corresponding flow entry.

20. The non-transitory computer-readable medium of claim 19, wherein allocating space in the read-ahead cache comprises:

determining an allocation request for each flow entry in the set of flow entries;

determining whether the allocation request is requesting more space than the size of the read-ahead cache; and

adjusting the allocation request in accordance with a policy of the memory subsystem,

wherein the policy comprises one of: (1) a fairness policy that allocates equal space in the read-ahead cache to each flow entry in the set of flow entries, (2) a large flow preference policy that allocates more space to larger flow entries than to smaller flow entries, or (3) a small flow preference policy that allocates more space to smaller flow entries than to larger flow entries.

Technical Field

The present disclosure relates generally to a read-ahead cache manager, and more particularly to an adaptive read-ahead cache manager based on a detected active flow of read commands.

Background

The memory subsystem may be a storage system, such as a Solid State Drive (SSD) or a Hard Disk Drive (HDD). The memory subsystem may be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory subsystem may include one or more memory components that store data. The memory components may be, for example, non-volatile memory components and volatile memory components. In general, a host system may utilize a memory subsystem to store data at and retrieve data from a memory component.

Drawings

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.

FIG. 1 illustrates an example computing environment including a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 2 illustrates a read-ahead cache manager including a stream detector, a read-ahead cache allocator, and a policy engine, according to some embodiments of the present disclosure.

FIGS. 3A-3C are flow diagrams of an example method of managing a read-ahead cache based on a detected active flow of read commands, according to some embodiments of the present disclosure.

FIG. 4 illustrates an active flow table according to some embodiments of the present disclosure.

FIG. 5 illustrates a command history table according to some embodiments of the present disclosure.

FIG. 6 illustrates an example read command in accordance with some embodiments of the present disclosure.

FIG. 7 illustrates a set of read command Logical Block Addresses (LBAs) according to some embodiments of the present disclosure.

FIG. 8 illustrates a set of command history LBAs in accordance with some embodiments of the present disclosure.

FIG. 9 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed Description

Aspects of the present disclosure are directed to a read-ahead cache manager in a memory subsystem. The memory subsystem is also referred to below as a "memory device". An example of a memory subsystem is a memory module connected to a Central Processing Unit (CPU) via a memory bus. Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), non-volatile dual in-line memory modules (NVDIMMs), and the like. Another example of a memory subsystem is a storage device connected to a Central Processing Unit (CPU) via a peripheral interconnect, such as an input/output bus, storage area network, etc. Examples of storage devices include Solid State Drives (SSDs), flash drives, Universal Serial Bus (USB) flash drives, and Hard Disk Drives (HDDs). In some embodiments, the memory subsystem is a hybrid memory/storage subsystem. In general, a host system may utilize a memory subsystem that includes one or more memory components. The host system may provide data to be stored at the memory subsystem and may request data to be retrieved from the memory subsystem.

The read-ahead cache is a component in some memory subsystems that acts as a smaller, faster/lower-latency memory compared to the larger, slower/higher-latency media of the memory subsystem. In these memory subsystems, the read-ahead cache attempts to: (1) predict the next segment of data that the host system will request to read based on previous read commands (sometimes referred to as read requests), and (2) fetch this predicted next segment of data from the media before the memory subsystem receives the corresponding read command from the host system. When the prediction is successful (i.e., the read-ahead cache predicts and prefetches a data segment from the media and the host system subsequently requests this data segment), read performance improves because the memory subsystem can satisfy the request directly from the read-ahead cache without waiting to access the media (i.e., the media access was already performed by the time the read command was received by the memory subsystem). By definition, a random read workload (i.e., read commands that do not request access to sequentially addressed portions of the media) is unpredictable. Thus, the read-ahead cache is effective for read workloads that contain some amount of sequential reads, such that the read-ahead cache can prefetch/store accurate predictions. Despite the potential benefits, some memory subsystems do not include a read-ahead cache for the following reasons: (1) limited memory resources in the memory subsystem that could be devoted to the read-ahead cache, and/or (2) poor performance of the read-ahead cache due to an inability to accurately predict the target of new read commands.

Aspects of the present disclosure address the above and other deficiencies by providing a read-ahead cache manager that optimizes the use of a limited amount of memory for the read-ahead cache by tracking active flows of read commands from a host system. By tracking active flows of read commands (i.e., flows of read commands that are requesting access to sequential portions of the media), the read-ahead cache manager can intelligently populate the read-ahead cache with data corresponding to those flows. Specifically, the read-ahead cache manager manages: (1) an active flow table that tracks active sequential flows of read commands, and (2) a command history table that tracks recent read commands that have not been associated with/added to active flows in the active flow table but that may be associated/added in the future based on subsequently received read commands. For example, upon receiving a read command from the host system, the set of addresses of the read command is compared to the start and end addresses of the active flows in the active flow table. Upon determining that the set of addresses of the read command is sequential with the start or end address of an active flow in the active flow table, the active flow is updated (e.g., the start or end address is updated with an address of the read command). Accordingly, the flow now represents the received read command along with any other read commands previously represented by the flow. Conversely, upon determining that the set of addresses of the read command is non-sequential with the start or end addresses of the active flows in the active flow table, the set of addresses of the read command is compared with the sets of addresses of previous read commands in the command history table. In response to determining that the set of addresses of the read command is sequential with a set of addresses of a read command in the command history table, a new active flow is added to the active flow table based on the existing read command in the command history table and the received read command. However, in response to determining that the set of addresses of the read command is non-sequential with the sets of addresses of the read commands in the command history table, an entry is added to the command history table for the received read command so that this read command can potentially be included in an active flow based on a later received read command. The size of each of the active flow table and the command history table is limited, and the read-ahead cache manager controls evictions so that the most recently received read commands are present in the command history table and the most recently updated/active flows are present in the active flow table. Further, the read-ahead cache manager populates the read-ahead cache from the media based on the active flows in the active flow table. Because the active flows represent the most recently received sequentially addressed read commands, populating the read-ahead cache based on the active flows provides an intelligent prediction of potential future read commands/requests. That is, the read-ahead cache may be populated with data from addresses in the media that are adjacent to or otherwise very close to the start and end addresses of the active flows.
Further, the amount of space allocated to each flow and the data used to populate the read-ahead cache may be set based on other characteristics of the flows, including the size of the latest command that modified each flow, the direction in which each flow extends, the last modification indication of each flow, and the count of commands represented by each flow. Thus, performance gains based on accurate data prediction can be achieved in the memory subsystem even with a limited read-ahead cache.

FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110 according to some embodiments of the present disclosure. Memory subsystem 110 may include media, such as memory components 112A-112N. The memory components 112A-112N may be volatile memory components, non-volatile memory components, or a combination of such components. In some embodiments, the memory subsystem is a storage system. An example of a storage system is an SSD. In some embodiments, memory subsystem 110 is a hybrid memory/storage subsystem. In general, the computing environment 100 may contain a host system 120 that uses the memory subsystem 110. For example, the host system 120 may write data to the memory subsystem 110 and read data from the memory subsystem 110.

The host system 120 may be a computing device, such as a desktop computer, a laptop computer, a network server, a mobile device, or such a computing device that includes memory and processing devices. The host system 120 may contain or be coupled to the memory subsystem 110 such that the host system 120 may read data from the memory subsystem 110 or write data to the memory subsystem 110. The host system 120 may be coupled to the memory subsystem 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components that may be an indirect communication connection or a direct communication connection (e.g., without intermediate components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of physical host interfaces include, but are not limited to, Serial Advanced Technology Attachment (SATA) interfaces, Peripheral Component Interconnect Express (PCIe) interfaces, Universal Serial Bus (USB) interfaces, Fibre Channel, Serial Attached SCSI (SAS), and the like. The physical host interface may be used to transmit data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 over a PCIe interface, the host system 120 may further utilize an NVM Express (NVMe) interface to access the memory components 112A-112N. The physical host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.

The memory components 112A-112N may include different types of non-volatile memory components and/or any combination of volatile memory components. Examples of non-volatile memory components include NAND type flash memory. Each of the memory components 112A-112N may include one or more arrays of memory cells, such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple-level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component may include both an SLC portion and an MLC portion of memory cells. Each of the memory cells may store one or more bits of data (e.g., blocks of data) used by the host system 120. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A-112N may be based on any other type of memory, such as volatile memory. In some embodiments, the memory components 112A-112N may be, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Phase Change Memory (PCM), Magnetoresistive Random Access Memory (MRAM), NOR flash memory, Electrically Erasable Programmable Read-Only Memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Further, the memory cells of the memory components 112A-112N may be grouped into memory pages or data blocks, which may refer to units of the memory component used to store data.

A memory system controller 115 (hereinafter "controller") may communicate with the memory components 112A-112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A-112N, among other such operations. The controller 115 may include hardware, such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 may be a microcontroller, special purpose logic circuitry (e.g., a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), etc.), or another suitable processor. The controller 115 may include a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the example shown, the local memory 119 of the controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, local memory 119 may include memory registers that store memory pointers, fetched data, the read-ahead cache 121, and so forth. Local memory 119 may also include Read-Only Memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been shown to include a controller 115, in another embodiment of the disclosure, the memory subsystem 110 may not include a controller 115 and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the controller 115 may receive commands or operations from the host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A-112N. Although the controller 115 may satisfy commands directly from the memory components 112A-112N, in some embodiments the controller 115 may satisfy read commands using the read-ahead cache 121 with assistance from the read-ahead cache manager 113, which populates the read-ahead cache 121 with data from the memory components 112A-112N corresponding to detected active flows of read commands. The controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and Error Correction Code (ECC) operations, encryption operations, cache operations, and address translation between logical and physical block addresses associated with the memory components 112A-112N. The controller 115 may further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry may convert commands received from the host system into command instructions to access the memory components 112A-112N and convert responses associated with the memory components 112A-112N into information for the host system 120.

Memory subsystem 110 may also include additional circuitry or components not shown. In some embodiments, the memory subsystem 110 may include a cache or buffer (e.g., read-ahead cache 121) and address circuitry (e.g., row decoder and column decoder) that may receive addresses from the controller 115 and decode the addresses to access the memory components 112A-112N.

Memory subsystem 110 includes a read-ahead cache manager 113 that can intelligently allocate space for and fill read-ahead cache 121. In some embodiments, controller 115 includes at least a portion of read-ahead cache manager 113. For example, the controller 115 may include a processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations described herein. In some embodiments, read-ahead cache manager 113 is part of host system 120, an application, or an operating system.

Read-ahead cache manager 113 can intelligently allocate space in and populate read-ahead cache 121 (sometimes referred to as look-ahead cache 121) with data from memory components 112A-112N based on detected active flows of read commands. FIG. 2 is a component diagram of read-ahead cache manager 113 according to one embodiment. As shown in FIG. 2, read-ahead cache manager 113 may comprise: (1) a stream detector 113A to detect active flows of read commands using one or more of the active flow table and the command history table, (2) a read-ahead cache allocator 113B to determine an allocation request in the read-ahead cache 121 for each active flow, and (3) a policy engine 113C to modify the allocation requests based on an active policy when the allocation requests from the read-ahead cache allocator 113B exceed the capacity of the read-ahead cache 121. Additional details regarding the operation of read-ahead cache manager 113 are described below.

FIGS. 3A-3C are flow diagrams of an example method 300 of managing the read-ahead cache 121 based on a detected active flow of read commands, according to some embodiments of the present disclosure. The method 300 may be performed by processing logic that may comprise hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuits, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the read-ahead cache manager 113 of FIG. 1, which includes one or more of the stream detector 113A, the read-ahead cache allocator 113B, and the policy engine 113C of FIG. 2. Although shown in a particular sequence or order, the order of the processes may be modified unless otherwise specified. Thus, it should be understood that the illustrated embodiments are examples only, and that the processes shown may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At operation 302, the processing device initializes an active flow table and a command history table. For example, upon startup/power-up of the processing device, the processing device initializes the active flow table and the command history table. As will be described in more detail herein, the active flow table tracks detected active flows of read commands, while the command history table tracks received read commands that have not yet been associated with a flow but that will be examined against subsequently received read commands to determine whether they can be added to an existing active flow in the active flow table or used to form a new active flow. In one embodiment, the processing device may initialize the active flow table and the command history table at operation 302 by allocating memory for a predefined number of entries of each table in local memory 119 or in another piece of memory.

FIG. 4 shows an active flow table 400, and FIG. 5 shows a command history table 500, according to one example embodiment. As shown in FIG. 4, the active flow table 400 includes a set of entries 402 (sometimes referred to as flow entries 402), each corresponding to a detected active flow of read commands, and each entry 402 includes: (1) a starting Logical Block Address (LBA) 404 corresponding to the starting LBA of the active flow of the entry 402 in the memory components 112A-112N, (2) an ending LBA 406 corresponding to the ending LBA of the active flow of the entry 402 in the memory components 112A-112N, (3) a latest length 408 (sometimes referred to as a latest command size 408 or a latest command length 408) corresponding to the length (i.e., number of LBAs) of the last read command to update the entry 402, (4) a direction 410 indicating the direction in which the active flow of the entry 402 has most recently expanded (i.e., whether the starting LBA 404 or the ending LBA 406 of the entry 402 was modified in response to the previous read command to update the entry 402), (5) a last modification indication 412 indicating the point in time at which the entry 402 was last modified, (6) a read-ahead cache allocation 414 indicating the amount of space in the read-ahead cache 121 allocated to the active flow of the entry 402, and (7) a command count 416 indicating the number of read commands currently represented in the active flow of the entry 402.
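For illustration only, the seven fields of a flow entry 402 described above might be represented as follows. This is a minimal sketch; the field names, types, and units are assumptions rather than part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    start_lba: int        # starting LBA 404 of the active flow
    end_lba: int          # ending LBA 406 of the active flow
    latest_len: int       # latest command size 408, in LBAs
    direction: int        # direction 410: +1 = toward higher LBAs, -1 = toward lower LBAs
    last_modified: float  # last modification indication 412 (e.g., a timestamp)
    cache_alloc: int      # read-ahead cache allocation 414, in LBAs (assumed unit)
    cmd_count: int        # command count 416
```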

In some embodiments, the active flow table 400 has a predefined number of entries 402, and when a new entry 402 is to be added to a full active flow table 400, the oldest entry 402 is evicted from the active flow table 400. In particular, the entry with the oldest last modification indication 412 is evicted from the active flow table 400, as will be described in more detail below. Although described herein as the processing device using the last modification indication 412 of an entry 402 in the active flow table 400 as an indication of potential future use, in other embodiments the processing device may use other indicators of potential future use to direct eviction of entries 402 from the active flow table 400. For example, the processing device may record in an entry 402 of the active flow table 400 when the corresponding data in the read-ahead cache 121 has been used to satisfy a memory request. In this embodiment, the processing device may utilize this indication of use of the data in the read-ahead cache 121 to direct eviction of the corresponding entry 402 from the active flow table 400.

As used herein, an active flow of read commands indicates a set of recently received read commands whose addresses are sequential/contiguous (i.e., the set of addresses of a first read command of the active flow is sequential/contiguous with the set of addresses of a second read command of the active flow), where the read commands in the set were received recently enough that the corresponding entry 402 has not been evicted from the active flow table 400 in accordance with the eviction policy of the active flow table 400.

As shown in FIG. 5, the command history table 500 may include a set of entries 502, each including a starting LBA 504 and a length 506 (sometimes referred to as a command size 506 or a command length 506), and each entry 502 corresponding to a read command recently received from the host system 120. In particular, each read command indicates a starting LBA 504 corresponding to an LBA in the memory components 112A-112N, and a length 506 corresponding to a number of LBAs extending from the indicated starting LBA 504. In some embodiments, the command history table 500 may be implemented as a first-in-first-out (FIFO) data structure such that the command history table 500 stores entries 502 for the most recently received read commands. In these embodiments, when the command history table 500 is full, the entry 502 of a newly received read command takes the place of the oldest entry 502 in the command history table 500 (i.e., the oldest entry 502 is evicted and the entry 502 of the newly received read command is added to the command history table 500).
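A FIFO command history table of this kind can be sketched with a bounded deque, which evicts the oldest entry 502 automatically when a new one is appended to a full table; the table size and the (start_lba, length) tuple layout here are assumptions:

```python
from collections import deque

MAX_HISTORY_ENTRIES = 16  # assumed fixed number of entries 502

command_history: deque = deque(maxlen=MAX_HISTORY_ENTRIES)

def record_read_command(start_lba: int, length: int) -> None:
    # Appending to a full deque with maxlen set implicitly evicts the
    # oldest entry, matching the FIFO behavior described above.
    command_history.append((start_lba, length))
```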

For purposes of illustration, the method 300 will be described using the example command history table 500 and active flow table 400. In one embodiment, the processing device initializes the command history table 500 and the active flow table 400 at operation 302 without entries 502 and 402, respectively, but with space allocated in local memory 119 or another piece of memory for adding entries 502 and 402. In particular, the processing device allocates a fixed amount of space in local memory 119 for the active flow table 400 and a fixed amount of space in local memory 119 for the command history table 500. Thus, based on this fixed space in local memory 119, the active flow table 400 can support a fixed number of entries 402 and the command history table 500 can support a fixed number of entries 502. The processing device allocates the amount of space for each of the active flow table 400 and the command history table 500 based on the defined or otherwise desired number of entries 402 and 502 in each table 400 and 500, respectively.

At operation 304, the processing device receives a read command from the host system 120. For example, the host system 120 may determine that data stored in the memory components 112A-112N is needed and transmit a read command to the controller 115 at operation 304. The read command indicates a starting LBA corresponding to an LBA in the memory components 112A-112N and a length (i.e., a number of LBAs extending from the starting LBA). For example, FIG. 6 shows an example read command 600 received from the host system 120, according to one example embodiment. As shown in FIG. 6, the read command 600 includes a starting LBA 602 and a length 604. Although shown as having only a starting LBA 602 and a length 604, the read command 600 may include additional pieces of information. For purposes of illustration, the method 300 will be described using the example read command 600 of FIG. 6.

At operation 306, the processing device determines whether the read command 600 results in a hit in the read-ahead cache 121. For example, the processing device determines a hit in the read-ahead cache 121 at operation 306 when the read-ahead cache 121 includes data for the starting LBA 602 and the sequential set of LBAs of the length 604 indicated in the read command 600. In response to the processing device determining a hit at operation 306, the method 300 moves to operation 308.
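As a rough sketch of this hit test, assuming the read-ahead cache contents are tracked as a hypothetical list of cached (first, last) LBA ranges, a full hit means some cached range covers every LBA requested by the read command:

```python
def is_cache_hit(start_lba: int, length: int, cached_ranges) -> bool:
    """Return True when one cached LBA range covers the whole request."""
    first, last = start_lba, start_lba + length - 1
    return any(lo <= first and last <= hi for lo, hi in cached_ranges)
```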

At operation 308, the processing device returns data from the read-ahead cache 121 to the host system 120 based on the hit in the read-ahead cache 121 for the read command 600. In particular, the processing device returns to the host system 120 the data from the starting LBA 602 and the sequential set of LBAs extending from the starting LBA 602 for the length 604 indicated in the read command 600. This returned data was previously cached/stored in the read-ahead cache 121 based on a detected active flow, as will be described in more detail below. Because the data stored in the read-ahead cache 121 is now utilized (e.g., the data in the read-ahead cache 121 becomes data useful to the host system 120/live data), performance gains are realized by the host system 120 and/or the processing device (e.g., the memory subsystem 110). That is, the host system 120 and the processing device need not wait for data to be read from the memory components 112A-112N to satisfy the read command 600.

Returning to operation 306, in response to the processing device determining a miss in the read-ahead cache 121, the method 300 moves to operation 310. At operation 310, the processing device returns data from the memory components 112A-112N to the host system 120 based on the read command 600. In particular, the processing device returns to the host system 120 the data from the starting LBA 602 and the sequential set of LBAs extending from the starting LBA 602 for the length 604 indicated in the read command 600.

In some embodiments, a hit may be determined at operation 306 when only some of the data requested by the read command 600 is present in the read-ahead cache 121. In these embodiments, the method 300 may move to operation 310 after operation 308 so that the processing device may return the remaining data from the memory components 112A-112N to the host system 120.

After operation 308 (i.e., returning data from the read-ahead cache 121 to the host system 120) or operation 310 (i.e., returning data from the memory components 112A-112N to the host system 120), the method 300 moves to operation 312. At operation 312, the processing device determines whether the LBAs of the received read command 600 immediately precede or follow a flow/entry 402 in the active flow table 400 (i.e., the processing device determines whether the read command 600 is sequential with a flow/entry 402 in the active flow table 400). In particular, as described above, the read command 600 indicates a set of sequential LBAs. For example, FIG. 7 shows a set of read command LBAs 700 that starts at the starting LBA 602 of the read command 600 and extends for the length 604 of the read command 600. As shown, the set of read command LBAs 700 includes a first LBA 702 (i.e., the starting LBA 602 of the read command 600) and a last LBA 704. The method 300 will be further explained using the set of read command LBAs 700.
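Because the read command 600 describes its LBAs as a starting LBA 602 plus a length 604, the first LBA 702 and last LBA 704 follow directly; a small sketch (the helper name is hypothetical):

```python
def lba_range(start_lba: int, length: int) -> tuple[int, int]:
    # The range is inclusive, so the last LBA is start + length - 1.
    return start_lba, start_lba + length - 1

first_lba, last_lba = lba_range(4096, 8)  # -> (4096, 4103)
```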

At operation 312, the processing device determines: (1) whether the last LBA 704 in the set of read command LBAs 700 immediately precedes the starting LBA 404 of a flow/entry 402 in the active flow table 400, or (2) whether the first LBA 702 in the set of read command LBAs 700 immediately follows the ending LBA 406 of a flow/entry 402 in the active flow table 400. When the processing device determines at operation 312 that the LBAs of the received read command 600 (i.e., the set of read command LBAs 700) immediately precede or follow a flow/entry 402 in the active flow table 400, the method 300 moves to operation 314.

At operation 314, the processing device updates one or more flows/entries 402 in the active flow table 400 based on determining that the LBAs of the received read command 600 immediately precede or follow a flow/entry 402 in the active flow table 400 (i.e., the set of read command LBAs 700 is sequential with the flow/entry 402). For example, when the processing device determines at operation 312 that the last LBA 704 in the set of read command LBAs 700 immediately precedes the starting LBA 404 of a flow/entry 402 in the active flow table 400, the processing device updates the starting LBA 404 of the entry 402 to be equal to the first LBA 702 in the set of read command LBAs 700. When the processing device determines at operation 312 that the first LBA 702 in the set of read command LBAs 700 immediately follows the ending LBA 406 of a flow/entry 402 in the active flow table 400, the processing device updates the ending LBA 406 of the flow/entry 402 to be equal to the last LBA 704 in the set of read command LBAs 700. In either case, the processing device further: (1) updates the latest command size 408 value of the entry 402 with the length 604 from the received read command 600, (2) updates the direction 410 of the entry 402 based on whether the starting LBA 404 or the ending LBA 406 was modified (e.g., when the processing device modified the starting LBA 404, the processing device sets the direction 410 to indicate that the flow/entry 402 is growing in the direction of lower LBAs, and when the processing device modified the ending LBA 406, the processing device sets the direction 410 to indicate that the flow/entry 402 is growing in the direction of higher LBAs), (3) updates the last modification indication 412 to indicate the time of the current modification to the flow/entry 402, and (4) updates the command count 416 by incrementing the command count 416 to indicate that the read command 600 received at operation 304 is included in the flow/entry 402.
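A sketch of this update step for a single flow/entry 402, using the FlowEntry fields assumed earlier (`now` stands in for whatever modification-time source the subsystem uses):

```python
def extend_flow(flow: FlowEntry, first_lba: int, last_lba: int,
                length: int, now: float) -> bool:
    if last_lba + 1 == flow.start_lba:    # command immediately precedes the flow
        flow.start_lba = first_lba
        flow.direction = -1               # growing toward lower LBAs
    elif first_lba == flow.end_lba + 1:   # command immediately follows the flow
        flow.end_lba = last_lba
        flow.direction = +1               # growing toward higher LBAs
    else:
        return False                      # not sequential with this flow
    flow.latest_len = length
    flow.last_modified = now
    flow.cmd_count += 1                   # the new command joins the flow
    return True
```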

In some embodiments, the processing device determines at operation 312 that the last LBA 704 in the set of read command LBAs 700 immediately precedes the starting LBA 404 of a first flow/entry 402 in the active flow table 400, and the first LBA 702 in the set of read command LBAs 700 immediately follows the ending LBA 406 of a second flow/entry 402 in the active flow table 400 (i.e., the set of read command LBAs 700 is sequential with, and merges, the two flows/entries 402). In this scenario, the processing device combines the first and second flows/entries 402 into a single flow/entry 402 at operation 314 and discards/removes the remaining flow/entry 402 from the active flow table 400. In particular, the combined flow/entry 402 takes the minimum/lowest starting LBA 404 of the first and second flows/entries 402 as the starting LBA 404 of the combined entry 402 and the maximum/highest ending LBA 406 of the first and second flows/entries 402 as the ending LBA 406 of the combined entry 402. For the combined entry 402, the processing device further: (1) updates the latest command size 408 value of the entry 402 with the length from the received read command 600, (2) updates the direction 410 of the flow/entry 402 to the default direction (e.g., increasing LBAs) because it is currently ambiguous in which direction the flow is extending, (3) updates the last modification indication 412 to indicate the time of the current modification to the flow/entry 402, and (4) updates the command count 416 by adding the command counts 416 from each of the first and second flows/entries 402 and incrementing the sum by one to account for the received read command 600.
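The merge case can be sketched as follows, again with the assumed FlowEntry fields: the read command bridges a flow it follows (flow_below) and a flow it precedes (flow_above), and the caller replaces both old entries 402 with the returned entry:

```python
def merge_flows(flow_below: FlowEntry, flow_above: FlowEntry,
                length: int, now: float) -> FlowEntry:
    return FlowEntry(
        start_lba=min(flow_below.start_lba, flow_above.start_lba),
        end_lba=max(flow_below.end_lba, flow_above.end_lba),
        latest_len=length,
        direction=+1,    # assumed default: growth direction is now ambiguous
        last_modified=now,
        cache_alloc=0,   # recomputed when space is next allocated
        cmd_count=flow_below.cmd_count + flow_above.cmd_count + 1,
    )
```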

Returning to operation 312, when the processing device determines at operation 312 that the LBAs of the read command 600 do not immediately precede or follow a flow/entry 402 in the active flow table 400, the method 300 moves to operation 316. At operation 316, the processing device determines whether the LBAs of the read command 600 are sequential with any entry 502 in the command history table 500. In particular, each entry 502 in the command history table 500 may correspond to a set of command history LBAs that begins at the starting LBA 504 of the entry 502 and extends sequentially from the starting LBA 504 for the length 506 of the entry 502. For example, FIG. 8 shows a set of command history LBAs 800 for an entry 502 that starts at the starting LBA 504 of the entry 502 and extends for the length 506 of the entry 502. As shown, the set of command history LBAs 800 includes a first LBA 802 (i.e., the starting LBA 504 of the entry 502) and a last LBA 804. The method 300 will be further explained using the set of command history LBAs 800.

The processing device determines that the LBAs of the read command 600 are sequential with an entry 502 in the command history table 500 when either of the following occurs: (1) the first LBA 702 in the set of read command LBAs 700 immediately follows the last LBA 804 of the set of command history LBAs 800 of the entry 502, or (2) the last LBA 704 in the set of read command LBAs 700 immediately precedes the first LBA 802 of the set of command history LBAs 800 of the entry 502. When the processing device determines at operation 316 that the LBAs of the read command 600 are not sequential with any entry 502 in the command history table 500, the processing device has not detected a new active flow of read commands, and the method 300 moves to operation 318 to add an entry 502 to the command history table 500 for the read command 600.
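This two-sided sequentiality test reduces to one comparison in each direction; a sketch, applicable to both the command history entries 502 and the flow entries 402:

```python
def is_sequential(cmd_first: int, cmd_last: int,
                  other_first: int, other_last: int) -> bool:
    # True when the command's LBAs immediately follow the other range,
    # or immediately precede it.
    return cmd_first == other_last + 1 or cmd_last + 1 == other_first
```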

At operation 318, the processing device determines whether the command history table 500 is full. As described above, command history table 500 has a fixed size (i.e., command history table 500 is allocated a fixed amount of memory with a corresponding fixed number of entries 502). When the processing device determines at operation 318 that the command history table 500 is full (i.e., all of the fixed number of entries 502 are being used for other read commands), the method 300 moves to operation 320.

At operation 320, the processing device evicts the oldest entry 502 in the command history table 500 so that the processing device may add a new entry 502 to the command history table 500 for the read command 600 received at operation 304. As described above, the command history table 500 may be a FIFO structure such that the command history table 500 tracks the order in which entries 502 were added to the command history table 500, such that the processing device may remove the oldest entry 502 (i.e., via a pointer to the oldest entry 502 added to the command history table 500) at operation 320.

After operation 320 (i.e., when the command history table 500 was full) or after operation 318 (i.e., when the command history table 500 is not full), the processing device adds, at operation 322, a new entry 502 to the command history table 500 for the read command 600 received at operation 304. This new entry 502 includes a starting LBA 504 set to the starting LBA 602 indicated in the read command 600 and a length 506 set to the length 604 indicated in the read command 600. Accordingly, the read command 600 is added to the command history table for possible later use in detecting a new active flow of read commands, or in modifying an existing active flow of read commands, based on subsequently received read commands.

Returning to operation 316, when the processing device determines at operation 316 that the LBAs of the read command 600 are sequential with an entry 502 in the command history table 500, the processing device has detected a new active flow of read commands and the method 300 moves to operation 324.

At operation 324, the processing device determines whether the active flow table 400 is full. As described above, active flow table 400 has a fixed size (i.e., active flow table 400 is allocated a fixed amount of memory with a corresponding fixed number of entries 402). When the processing device determines that the active flow table 400 is full (i.e., all of the fixed number of entries 402 of the active flow table 400 are being used for other active flows), the method 300 moves to operation 326.

At operation 326, the processing device determines whether the oldest entry 402 in the active flow table 400 is younger than a predefined age threshold (i.e., the processing device compares the age threshold to the last modification indication 412 of each entry 402). In particular, the comparison to the age threshold is utilized to ensure that entries 402 in the active flow table 400 are not continually evicted from the active flow table 400 before the corresponding data in the read-ahead cache 121 has had an opportunity to be utilized. If the processing device determines at operation 326 that the oldest entry 402 in the active flow table 400 is younger than the predefined age threshold, the method 300 moves to operation 318 such that the received read command 600 is added to the command history table 500.

Conversely, when the processing device determines at operation 326 that the oldest entry 402 in the active flow table 400 is older than the predefined age threshold, the method 300 moves to operation 328. At operation 328, the processing device evicts the oldest entry 402 from the active flow table 400. Because the evicted entry 402 is no longer in the active flow table 400, the processing device will not allocate space in the read-ahead cache 121 for the evicted entry 402 the next time it allocates space in the read-ahead cache 121. Thus, the processing device will not include the data of the evicted entry 402 in the read-ahead cache 121.

After the processing device evicts the oldest entry 402 in the active flow table 400 at operation 328, or after the processing device determines at operation 324 that the active flow table 400 is not full, the method 300 moves to operation 330. At operation 330, the processing device adds an entry 402 to the active flow table 400 for both: (1) the read command 600 received at operation 304, and (2) the entry 502 in the command history table 500 that the processing device determined at operation 316 to be sequential with the read command. In particular, the processing device adds an entry 402 to the active flow table 400, wherein: (1) the starting LBA 404 of the entry 402 is set equal to the lower/minimum of the first LBA 702 in the set of read command LBAs 700 and the first LBA 802 of the set of command history LBAs 800 of the entry 502 in the command history table 500 that the processing device determined to be sequential with the read command 600; (2) the ending LBA 406 of the entry 402 is set equal to the higher/maximum of the last LBA 704 in the set of read command LBAs 700 and the last LBA 804 of the set of command history LBAs 800 of the entry 502 in the command history table 500 that the processing device determined to be sequential with the read command 600; (3) the latest command size 408 value of the entry 402 is set to the length 604 from the received read command 600; (4) the direction 410 of the entry 402 is set based on whether the set of read command LBAs 700 is lower than the set of command history LBAs 800 of the entry 502 in the command history table 500 that the processing device determined to be sequential with the read command 600 (e.g., when the set of read command LBAs 700 is lower than the set of command history LBAs 800 of the entry 502, the processing device sets the direction 410 to indicate that the flow/entry 402 is growing in the direction of lower LBAs, and when the set of read command LBAs 700 is higher than the set of command history LBAs 800 of the entry 502, the processing device sets the direction 410 to indicate that the active flow of the entry 402 is growing in the direction of higher LBAs); (5) the last modification indication 412 indicates the time of the current modification to the entry 402; and (6) the command count 416 is set to two, reflecting the read command 600 and the entry 502 in the command history table 500 that the processing device determined to be sequential with the read command.
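Operation 330 can be sketched by combining the read command's LBA range with the sequential history entry's range, reusing the assumed FlowEntry fields above:

```python
def new_flow(cmd_first: int, cmd_last: int, cmd_len: int,
             hist_first: int, hist_last: int, now: float) -> FlowEntry:
    return FlowEntry(
        start_lba=min(cmd_first, hist_first),
        end_lba=max(cmd_last, hist_last),
        latest_len=cmd_len,
        # Command below the history entry -> flow grows toward lower LBAs.
        direction=-1 if cmd_last < hist_first else +1,
        last_modified=now,
        cache_alloc=0,   # assigned when read-ahead cache space is next allocated
        cmd_count=2,     # the read command plus the history entry's command
    )
```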

In some embodiments, the processing device may determine at operation 316 that the read command 600 is sequential with two entries 502 in the command history table 500. In these embodiments, at operation 330, the two entries 502 are combined with the read command to form a single entry 402 in the active flow table 400.

At operation 332, the processing device removes the entry 502 (or entries 502) in the command history table 500 that the processing device determined at operation 316 to be sequential with the read command 600. In particular, this entry 502 is removed from the command history table 500 because the corresponding read command is now represented by an active flow/entry 402 in the active flow table 400.

After operation 314 or operation 332, the processing device allocates, at operation 334, a segment of space in the read-ahead cache 121 to each flow/entry 402 in the active flow table 400. Thus, a previously evicted entry 402 will no longer have data in the read-ahead cache 121, in favor of the other (possibly new) entries 402 from the active flow table 400. The segments of space in the read-ahead cache 121 are the portions of the read-ahead cache 121 that are available to store data from the memory components 112A-112N, as will be described below. The processing device may use one or more pieces of information in the active flow table 400 to allocate the segments of space in the read-ahead cache 121. For example, the processing device may determine whether to allocate a large or small segment of space to a flow/entry 402 based on the latest command size 408 of the flow/entry 402 (e.g., when the latest command size 408 of a first flow/entry 402 is greater than the latest command size 408 of a second flow/entry 402, the processing device may allocate a larger segment of space in the read-ahead cache 121 to the first flow/entry 402 than to the second flow/entry 402). In another example, the processing device may determine whether to allocate a large or small segment of space to a flow/entry 402 based on the last modification indication 412 of the flow/entry 402 (e.g., when a first flow/entry 402 has a more recent last modification indication 412 than a second flow/entry 402, the processing device may allocate a larger segment of space in the read-ahead cache 121 to the first flow/entry 402 than to the second flow/entry 402). In yet another example, the processing device may determine whether to allocate a large or small segment of space to a flow/entry 402 based on the command count 416 of the flow/entry 402 (e.g., when the command count 416 of a first flow/entry 402 is greater than the command count 416 of a second flow/entry 402, the processing device may allocate a larger segment of space in the read-ahead cache 121 to the first flow/entry 402 than to the second flow/entry 402). In some embodiments, the processing device updates the read-ahead cache allocation 414 of each flow/entry 402 in the active flow table 400 based on the allocations determined at operation 334.

In one embodiment, the processing device determines the read-ahead cache allocation 414 for each flow/entry 402 based on both the latest command size 408 and the command count 416 of the flow/entry 402. For example, the read-ahead cache allocation 414 for a flow/entry 402 may be the sum or the product of its latest command size 408 and its command count 416.
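As a concrete illustration of the sum-or-product rule, consider the sketch below; the function name and the choice of sectors as units are assumptions.

```python
def readahead_allocation(latest_command_size: int, command_count: int,
                         use_product: bool = True) -> int:
    """Compute a requested read-ahead cache allocation 414 from the latest
    command size 408 and the command count 416, in command-size units
    (e.g., sectors)."""
    if use_product:
        return latest_command_size * command_count
    return latest_command_size + command_count

# A flow that has absorbed six commands, most recently 32 sectors long,
# requests 192 sectors under the product rule (38 under the sum rule).
assert readahead_allocation(32, 6) == 192
assert readahead_allocation(32, 6, use_product=False) == 38
```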

In some embodiments, if the read command 600 produced a hit in the read-ahead cache 121 at operation 306, the processing device may use the characteristics of that hit to adjust the read-ahead cache allocation 414 of the corresponding entry 402 at operation 334. For example, when the hit is only partially serviced by the read-ahead cache 121 (i.e., some data of the read command 600 is present in the read-ahead cache 121, but some is not), the allocation for the corresponding flow/entry 402 may be increased to account for this shortfall. Conversely, when the hit covers all of the data (i.e., all data of the read command 600 is present in the read-ahead cache 121) but some prefetched data of the flow/entry 402 was not used/returned because it was not requested by the read command 600, the allocation for the corresponding flow/entry 402 may be reduced to account for this overshoot.
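The direction of these two adjustments can be sketched as follows. The exact adjustment amounts are assumptions, since the text specifies only that the allocation grows after a shortfall and shrinks after an overshoot; all names are illustrative.

```python
def adjust_for_hit(allocation: int, requested: int, hit: int, used: int) -> int:
    """allocation: current read-ahead cache allocation 414 for the flow.
    requested: amount of data the read command 600 asked for.
    hit: amount of that data found in the read-ahead cache 121.
    used: amount of the flow's prefetched data actually returned."""
    if hit < requested:
        # Partial hit: the prefetch fell short, so widen the allocation.
        return allocation + (requested - hit)
    if used < allocation:
        # Full hit but overshoot: shrink toward the amount actually used.
        return used
    return allocation

# A 64-sector allocation that served only 48 of 64 requested sectors grows
# by the 16-sector shortfall.
assert adjust_for_hit(64, 64, 48, 48) == 80
```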

At operation 336, the processing device determines whether the allocations from operation 334 exceed the storage capacity of the read-ahead cache 121. In particular, the read-ahead cache 121 has a fixed/limited size, and the segment allocations from operation 334 may exceed that capacity. In response to determining at operation 336 that the allocations from operation 334 exceed the storage capacity of the read-ahead cache 121, the method 300 moves to operation 338.

At operation 338, the processing device adjusts the allocation requests based on a policy. For example, the processing device may use a fairness or round-robin policy, such that each flow/entry 402 in the active flow table 400 receives an equal share of space in the read-ahead cache 121. In another example, the processing device may use a maximum chunk size allocation policy: the processing device allocates space in the read-ahead cache 121 first to the flows/entries 402 with the largest initial allocation requests or the largest command counts 416, so that these flows/entries 402 receive the greatest throughput. In yet another example, the processing device may use a minimum chunk size allocation policy: the processing device allocates space first to the flows/entries 402 with the smallest initial allocation requests or the smallest command counts 416, to provide space in the read-ahead cache 121 for as many flows/entries 402 as possible. In some embodiments, at operation 340 the processing device updates the read-ahead cache allocation 414 of each flow/entry 402 in the active flow table 400 based on the adjusted allocations. A sketch of these three policies appears below.
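This sketch is illustrative only: the policy behaviors follow the description above, but the trimming rules, function names, and flow identifiers are assumptions.

```python
from typing import Dict

def adjust_allocations(requests: Dict[str, int], capacity: int,
                       policy: str = "fairness") -> Dict[str, int]:
    """Adjust per-flow allocation requests to fit the read-ahead cache 121."""
    if sum(requests.values()) <= capacity:
        return dict(requests)  # nothing to adjust ("no" branch of operation 336)
    if policy == "fairness":
        # Equal share of the cache for every active flow/entry 402.
        share = capacity // len(requests)
        return {flow: share for flow in requests}
    # "max_chunk" serves the largest requests first; "min_chunk" the smallest.
    largest_first = policy == "max_chunk"
    granted, remaining = {}, capacity
    for flow, req in sorted(requests.items(), key=lambda kv: kv[1],
                            reverse=largest_first):
        granted[flow] = min(req, remaining)
        remaining -= granted[flow]
    return granted

requests = {"flow_a": 192, "flow_b": 64, "flow_c": 32}
print(adjust_allocations(requests, capacity=128))                      # 42 each
print(adjust_allocations(requests, capacity=128, policy="max_chunk"))  # flow_a gets all 128
print(adjust_allocations(requests, capacity=128, policy="min_chunk"))  # flow_c, then flow_b, served first
```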

After operation 336 (i.e., when the allocation requests do not exceed the capacity of the read-ahead cache 121) or after operation 338 (i.e., after the allocation requests have been adjusted), the processing device fills/populates each segment of the read-ahead cache 121 based on the allocations determined at operation 334 and possibly updated at operation 340. For example, for each flow/entry 402 in the active flow table 400, the processing device retrieves data from the memory components 112A-112N and stores the retrieved data in the segment allocated to that flow/entry 402 in the read-ahead cache 121. The processing device may use one or more pieces of information from the active flow table 400 to fill/populate the read-ahead cache 121. For example, based on the direction 410 of a flow/entry 402, the processing device may determine whether to fill the allocated segment of the read-ahead cache 121 with data at LBAs of the memory components 112A-112N extending from the starting LBA 404 or from the end LBA 406.
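The direction-dependent choice of fill range can be sketched as below; prefetch_range and the segment-size parameter are illustrative names, and the actual media read path is omitted.

```python
def prefetch_range(start_lba: int, end_lba: int, direction: str,
                   segment_size: int) -> range:
    """Return the LBAs to prefetch into a flow's allocated segment: beyond
    the end LBA 406 for an upward-growing flow, or below the starting
    LBA 404 for a downward-growing one."""
    if direction == "up":
        return range(end_lba + 1, end_lba + 1 + segment_size)
    return range(max(0, start_lba - segment_size), start_lba)

# A flow covering LBAs 100-115 that grows upward, with a 16-sector segment,
# prefetches LBAs 116-131; growing downward it would prefetch LBAs 84-99.
assert list(prefetch_range(100, 115, "up", 16)) == list(range(116, 132))
assert list(prefetch_range(100, 115, "down", 16)) == list(range(84, 100))
```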

As described above, the read-ahead cache manager 113 populates the read-ahead cache 121 from the media (i.e., the memory components 112A-112N) based on the active flows/entries 402 in the active flow table 400. Because the active flows/entries 402 represent recently received, sequentially addressed read commands, populating the read-ahead cache 121 based on them provides an intelligent prediction of likely future read commands/requests. That is, the read-ahead cache 121 may be populated with data from addresses in the media that are adjacent to, or otherwise very close to, the starting and end LBAs 404 and 406 of the active flows/entries 402. Further, both the amount of space allocated to each active flow/entry 402 and the data used to populate that space may be set based on other characteristics of the active flows/entries 402, including the latest command size 408 used to modify each flow/entry 402, the direction 410 in which each flow/entry 402 extends, the last modified indication 412 of each flow/entry 402, and/or the command count 416 represented by each flow/entry 402. Thus, performance gains based on accurate data prediction can be achieved in the memory subsystem 110 even with a limited read-ahead cache 121.

As described above, in some embodiments, the method 300 is performed by the read-ahead cache manager 113 of fig. 1 (i.e., the processing device is the read-ahead cache manager 113), which includes one or more of the flow detector 113A, the read-ahead cache allocator 113B, and the policy engine 113C of fig. 2. For example, operations 302-332 may be performed by the flow detector 113A, operation 334 may be performed by the read-ahead cache allocator 113B, and operations 336-338 may be performed by the policy engine 113C.

Fig. 9 illustrates an example machine of a computer system 900 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In some embodiments, computer system 900 may correspond to a host system (e.g., host system 120 of fig. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of fig. 1) or may be used to perform operations of a controller (e.g., execute an operating system to perform operations corresponding to read-ahead cache manager 113 of fig. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or client machine in a cloud computing infrastructure or environment.

The machine may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Additionally, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

Example computer system 900 includes a processing device 902, a main memory 904 (e.g., Read Only Memory (ROM), flash memory, Dynamic Random Access Memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM)), a static memory 906 (e.g., flash memory, Static Random Access Memory (SRAM), etc.), and a data storage system 918, which communicate with each other via a bus 930.

Processing device 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 902 may also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), network processor, or the like. The processing device 902 is configured to execute the instructions 926 for performing the operations and steps discussed herein. Computer system 900 may further include a network interface device 908 to communicate over a network 920.

The data storage system 918 may include a machine-readable storage medium 924 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 926 or software embodying any one or more of the methodologies or functions described herein. The instructions 926 may also reside, completely or at least partially, within the main memory 904 and/or within the processing device 902 during execution thereof by the computer system 900, the main memory 904 and the processing device 902 also constituting machine-readable storage media. Machine-readable storage media 924, data storage system 918, and/or main memory 904 may correspond to memory subsystem 110 of fig. 1.

In one embodiment, instructions 926 include instructions to implement functionality corresponding to a read-ahead cache manager (e.g., read-ahead cache manager 113 of FIG. 1). While the machine-readable storage medium 924 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may be directed to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system (e.g., the controller 115) may perform the computer-implemented method 300 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will be presented as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.

The present disclosure may be provided as a computer program product or software which may include a machine-readable medium having stored thereon instructions which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) -readable storage medium, such as read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so forth.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
