Method and apparatus for efficiently tracking locations of dirty cache lines in a cache of secondary main memory


Abstract: A secondary main memory is provided that includes persistent memory and a cache. The locations of dirty cache lines in the cache are tracked using a dirty cache line tracker. The dirty cache line tracker is stored in the cache and may be cached in the memory controller for the volatile memory. The dirty cache line tracker may be used to bypass cache lookups, perform efficient dirty cache line scrubbing, and decouple battery capacity from the capacity of the cache in the secondary main memory.

Invented by Zhe Wang, A. R. Alameldeen, L. Warnes, A. M. Rudoff, and M. P. Swaminathan; dated 2020-01-17.

1. An apparatus, comprising:

a secondary system memory, the secondary system memory comprising:

a persistent memory; and

a volatile memory, the volatile memory comprising:

a cache comprising a plurality of cache lines, each cache line for storing a copy of data read from the persistent memory;

a dirty cache line tracker to store a plurality of dirty cache line entries, each dirty cache line entry to store N dirty bits, each dirty bit corresponding to one of N consecutive cache lines in the cache; and

a cache manager to read the N dirty bits in a dirty cache line entry to identify a location of a dirty cache line in the cache.

2. The apparatus of claim 1, wherein the cache manager is to write the dirty cache line associated with the dirty bit in the dirty cache line entry to the persistent memory.

3. The apparatus of claim 1, wherein the cache manager is to monitor a number of dirty lines in the dirty cache line tracker and write a dirty cache line to persistent memory when the number of dirty lines is greater than a threshold number of dirty lines.

4. The apparatus of claim 1, further comprising:

a volatile memory controller including a dirty cache line tracker cache to store a copy of dirty cache line entries stored in the dirty cache line tracker.

5. The apparatus of claim 4, wherein the volatile memory controller is to write data directly to a persistent memory address if the cache is saturated and a dirty bit corresponding to a cache line associated with the persistent memory address indicates that the cache line is clean.

6. The apparatus of claim 1, wherein the persistent memory is a three-dimensional cross-point memory.

7. The apparatus of claim 1, wherein the volatile memory is a dynamic random access memory.

8. A method, comprising:

storing a copy of data read from persistent memory in a secondary system memory in a cache, the secondary system memory including the persistent memory and volatile memory, the volatile memory including the cache, the cache including a plurality of cache lines, each cache line for storing a copy of data read from the persistent memory;

storing a plurality of dirty cache line entries in a dirty cache line tracker, each dirty cache line entry for storing N dirty bits, each dirty bit corresponding to one of N consecutive cache lines in the cache; and

reading, by a cache manager, the N dirty bits in a dirty cache line entry to identify a location of a dirty cache line in the cache.

9. The method of claim 8, wherein the cache manager is to write the dirty cache line associated with the dirty bit in the dirty cache line entry to the persistent memory.

10. The method of claim 8, wherein the cache manager is to monitor a number of dirty lines in the dirty cache line tracker and write dirty cache lines to persistent memory when the number of dirty lines is greater than a threshold number of dirty lines.

11. The method of claim 8, further comprising:

storing, by a volatile memory controller, a copy of a dirty cache line entry stored in the dirty cache line tracker in a dirty cache line tracker cache.

12. The method of claim 11, further comprising:

writing, by the volatile memory controller, data directly to a persistent memory address if the cache is saturated and a dirty bit corresponding to a cache line associated with the persistent memory address indicates that the cache line is clean.

13. The method of claim 11, wherein the persistent memory is a three-dimensional cross-point memory.

14. The method of claim 13, wherein the volatile memory is a dynamic random access memory.

15. At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system, cause the system to carry out a method according to any one of claims 8 to 14.

16. An apparatus comprising means for performing the method of any of claims 8-14.

17. A system, comprising:

a processor; and

a secondary system memory communicatively coupled to the processor, the secondary system memory comprising:

a persistent memory; and

a volatile memory, the volatile memory comprising:

a cache comprising a plurality of cache lines, each cache line for storing a copy of data read from the persistent memory;

a dirty cache line tracker to store a plurality of dirty cache line entries, each dirty cache line entry to store N dirty bits, each dirty bit corresponding to one of N consecutive cache lines in the cache; and

a cache manager to read the N dirty bits in a dirty cache line entry to identify a location of a dirty cache line in the cache.

18. The system of claim 17, wherein the cache manager is to write the dirty cache line associated with the dirty bit in the dirty cache line entry to the persistent memory.

19. The system of claim 17, wherein the cache manager is to monitor a number of dirty lines in the dirty cache line tracker and write a dirty cache line to persistent memory when the number of dirty lines is greater than a threshold number of dirty lines.

20. The system of claim 17, further comprising:

a volatile memory controller including a dirty cache line tracker cache to store a copy of dirty cache line entries stored in the dirty cache line tracker.

21. The system of claim 20, wherein the volatile memory controller is to write data directly to a persistent memory address if the cache is saturated and a dirty bit corresponding to a cache line associated with the persistent memory address indicates that the cache line is clean.

22. The system of claim 17, wherein the persistent memory is a three-dimensional cross-point memory.

Technical Field

The present disclosure relates to secondary main memory, and in particular to cache management in secondary main memory.

Background

The secondary main memory may include a first level comprising volatile memory and a second level comprising persistent memory. The second level is presented to the host operating system as "main memory," while the first level is a cache for the second level that is transparent to the host operating system. The first level may be a direct-mapped cache in which each cache line includes data, metadata, and an Error Correction Code (ECC). The metadata may include a dirty bit, tag bits, and status bits. If the minimum memory read granularity is a cache line, the entire cache line (data, metadata, and ECC) must be read just to check the dirty bit of the cache line and determine whether the cache line is clean or dirty.

Drawings

Features of embodiments of the claimed subject matter will become apparent as the following detailed description proceeds, and upon reference to the drawings, in which like numerals depict like parts, and in which:

FIG. 1 is a block diagram of a computer system including a secondary main memory and a dirty cache line tracker to track locations of dirty cache lines in a cache (volatile memory) in the first level of the secondary main memory;

FIG. 2 is a conceptual diagram of a cache line in the first level (the cache) of the secondary main memory shown in FIG. 1;

FIG. 3A is a block diagram illustrating an embodiment of a dirty cache line tracker to track dirty cache lines in the cache illustrated in FIG. 1 (the first level of the secondary main memory);

FIG. 3B is a block diagram illustrating an embodiment of a dirty cache line tracker cache in a volatile memory controller;

FIG. 4 is a block diagram illustrating the relationship between a dirty cache line tracker and cache lines in a cache;

FIG. 5 is a flow diagram illustrating the use of a dirty cache line tracker to bypass lookups to dirty bits in cache lines in a cache;

FIG. 6 is a flow diagram illustrating the use of a cached dirty cache line tracker to improve bandwidth utilization in the secondary main memory; and

FIG. 7 is a flow diagram illustrating the use of a cached dirty cache line tracker to reduce battery capacity in a system with a secondary main memory.

Although the following detailed description will proceed with reference being made to illustrative embodiments of the claimed subject matter, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, the claimed subject matter is intended to be viewed broadly and defined only as set forth in the hereinafter appended claims.

Detailed Description

Reading an entire cache line to check a single bit in the cache line wastes both memory bandwidth and system power. Instead of reading the entire cache line in the first level of the secondary main memory to check the state of its dirty bit, the locations of dirty cache lines in the cache are tracked using a dirty cache line tracker. The dirty cache line tracker is stored in the first level memory of the secondary main memory and is cached in the memory controller for the first level memory. The dirty cache line tracker may be used to bypass cache lookups, perform efficient dirty cache line scrubbing, and decouple battery capacity from the capacity of the first level of the secondary main memory.

Various embodiments and aspects of the inventions will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the invention and are not to be construed as limiting the invention. Numerous specific details are described to provide a thorough understanding of various embodiments of the invention. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present inventions.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

FIG. 1 is a block diagram of a computer system 100, the computer system 100 including a secondary main memory and a dirty cache line tracker to track locations of dirty cache lines in a cache (volatile memory) in the first level of the secondary main memory. Computer system 100 may correspond to a computing device including, but not limited to, a server, a workstation computer, a desktop computer, a laptop computer, and/or a tablet computer.

Computer system 100 includes a system-on-chip (SoC or SOC) 104 that combines a processor, graphics, memory, and input/output (I/O) control logic into one SoC package. The SoC 104 includes at least one Central Processing Unit (CPU) module 108, a volatile memory controller 114, and a Graphics Processor Unit (GPU) 110. In other embodiments, the volatile memory controller 114 may be external to the SoC 104. Although not shown, each of the processor core(s) 102 may internally include one or more instruction/data caches, execution units, prefetch buffers, instruction queues, branch address calculation units, instruction decoders, floating point units, retirement units, and the like. According to one embodiment, the CPU module 108 may correspond to a single core or dual core general purpose processor, for example, a processor such as those provided by Intel® Corporation. In other embodiments, the CPU module 108 may correspond to a multi-core or many-core processor having more than two cores.

The secondary main memory includes cache 136 (the first level of main memory, which may also be referred to as "near" memory) in volatile memory 126 and persistent memory 132 (the second level of main memory, which may also be referred to as "far" memory). The cache 136 caches data stored in the persistent memory 132 in cache lines. A cache line stored in cache 136 is clean if the data in the cache line has not been modified after being copied from persistent memory 132. A cache line stored in cache 136 is dirty if the data in the cache line has been written to after being copied from persistent memory 132. The persistent memory 132 is communicatively coupled to a persistent memory controller 138, and the persistent memory controller 138 is communicatively coupled to the CPU module 108 in the SoC 104. Persistent memory 132 may be included on a memory module such as a dual in-line memory module (DIMM), which may be referred to as a non-volatile dual in-line memory module (NVDIMM).

A dirty cache line tracker ("DCT") 150 in cache 136 in volatile memory 126 is used to track the locations of dirty cache lines in cache 136. A dirty cache line tracker cache 152 in the volatile memory controller 114 may be used to cache the dirty cache line tracker. In addition to the cache 136, applications 130, an Operating System (OS) 142, and a cache manager 134 may be stored in the volatile memory 126.

Persistent memory 132 is non-volatile memory. A non-volatile memory (NVM) device is a memory whose state is determinate even if power to the device is interrupted. NVM devices may include block addressable memory devices (e.g., NAND technology), or more specifically, multi-threshold level NAND flash memory (e.g., single-level cell ("SLC"), multi-level cell ("MLC"), quad-level cell ("QLC"), tri-level cell ("TLC"), or some other NAND). An NVM device may also include a byte-addressable write-in-place three-dimensional cross-point memory device, or other byte-addressable write-in-place memory (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), NVM devices that use chalcogenide phase change material (e.g., chalcogenide glass), resistive memory (including metal oxide based, oxygen vacancy based, and conductive bridge random access memory (CB-RAM)), nanowire memory, ferroelectric random access memory (FeRAM, FRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (domain wall) and SOT (spin orbit transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.

Cache 136 is a volatile memory. Volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power to the device is interrupted. Dynamic volatile memory requires refreshing of the data stored in the device to maintain state. One example of dynamic volatile memory is DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (Double Data Rate version 3, original release by JEDEC (Joint Electron Device Engineering Council) on June 27, 2007), DDR4 (DDR version 4, initial specification published by JEDEC in September 2012), DDR4E (DDR version 4), LPDDR3 (Low Power DDR version 3, JESD209-3B, published by JEDEC in August 2013), LPDDR4 (LPDDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide Input/Output version 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), or other memory technologies or combinations of memory technologies, as well as technologies based on derivatives or extensions of such specifications. The JEDEC standards are available at www.jedec.org.

A Graphics Processor Unit (GPU) 110 may include one or more GPU cores and a GPU cache that may store graphics-related data for the GPU cores. The GPU core may internally include one or more execution units and one or more instruction caches and data caches. Additionally, the Graphics Processor Unit (GPU) 110 may contain other graphics logic units not shown in FIG. 1, such as one or more vertex processing units, rasterization units, media processing units, and codecs.

Within the I/O subsystem 112, there are one or more I/O adapters 116 to convert the host communication protocol used within the processor core(s) 102 to a protocol compatible with a particular I/O device. Some of the protocols that the adapters may translate include Peripheral Component Interconnect (PCI) Express (PCIe); Universal Serial Bus (USB); Serial Advanced Technology Attachment (SATA); and Institute of Electrical and Electronics Engineers (IEEE) 1394 "Firewire".

The I/O adapter(s) 116 can communicate with external I/O devices 124, which external I/O devices 124 can include, for example, user interface device(s) including a display and/or touch screen display 140, a printer, a keypad, a keyboard, wired and/or wireless communication logic, and storage device(s) including a hard disk drive ("HDD"), a solid state drive ("SSD") 118, a removable storage medium, a Digital Video Disk (DVD) drive, a Compact Disk (CD) drive, a Redundant Array of Independent Disks (RAID), a tape drive, or other storage device. Storage devices may be communicatively and/or physically coupled together over one or more buses using one or more of a variety of protocols, including but not limited to SAS (serial attached SCSI (small computer system interface)), PCIe (peripheral component interconnect express), NVMe (non-volatile memory express) over PCIe (peripheral component interconnect express), and SATA (serial ATA (advanced technology attachment)).

Additionally, one or more wireless protocol I/O adapters may be present. Examples of wireless protocols include those used in personal area networks (e.g., IEEE 802.15 and Bluetooth 4.0), wireless local area networks (e.g., IEEE 802.11-based wireless protocols), and cellular protocols, among others.

Operating System (OS) 142 is software that manages computer hardware and software, including memory allocation and access to I/O devices. Examples of operating systems include Microsoft® Windows® and Linux®.

FIG. 2 is a conceptual diagram of a cache line in the cache 136 of the secondary main memory shown in FIG. 1. Cache 136 may be coupled to the SoC 104 via a high bandwidth, low latency means for efficient processing. The persistent memory 132 of the secondary main memory may be coupled to the SoC 104 via a low bandwidth, high latency means (as compared to that of the cache 136).

In an embodiment, the cache 136 is a synchronous dynamic random access memory (e.g., JEDEC DDR SDRAM), and the persistent memory 132 is a three-dimensional cross-point memory device (e.g., 3D XPoint™ technology). Cache 136 (the first level of the secondary main memory) is organized as a direct mapped cache. Data is transferred between persistent memory 132 (the second level of the secondary main memory) and cache 136 in fixed-size blocks, referred to as cache lines or cache blocks. Cache line 200 includes data 202 as well as metadata and an Error Correction Code (ECC) 204. The metadata and Error Correction Code (ECC) 204 include ECC 214, tag 206, valid bit 208, and dirty bit 210. When data is copied from persistent memory 132 into a cache line in cache 136, the requested memory location (the address of the data in persistent memory 132) is stored in the tag field 206 and the data is stored in the data field 202 of cache line 200.

In an embodiment, cache 136 includes nine memory chips, where the data for a cache line is stored across eight of the nine memory chips, and the metadata and ECC for the cache line are stored in the ninth memory chip. The nine memory chips may be located on a dual in-line memory module (DIMM). Each cache line (which may also be referred to as a cache block) is 64 bytes, with each of the eight memory chips storing 8 bytes of the 64-byte cache line. Each 64-byte cache line has 8 bytes of metadata and ECC. As shown in FIG. 2, the tag 206 and metadata, including dirty bit 210, for each cache block are stored in the ECC chip.
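For illustration, the cache line organization described above can be expressed as a minimal C sketch. The field widths inside the 8-byte metadata/ECC word are assumptions chosen for illustration only; the source does not specify how those bytes are divided.

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 64u

/*
 * One cache line 200 in the near-memory cache 136, as described above:
 * 64 bytes of data 202 plus an 8-byte metadata/ECC word 204 holding the
 * ECC 214, tag 206, valid bit 208 and dirty bit 210.
 * The bit widths below are illustrative assumptions.
 */
struct cache_line_meta {
    uint64_t ecc   : 32;  /* ECC 214 covering the 64 data bytes (assumed width) */
    uint64_t tag   : 30;  /* tag 206: upper bits of the persistent memory address */
    uint64_t valid : 1;   /* valid bit 208 */
    uint64_t dirty : 1;   /* dirty bit 210: set when the line is modified after fill */
};

struct cache_line {
    uint8_t data[CACHE_LINE_BYTES];   /* data 202, striped across 8 DRAM chips */
    struct cache_line_meta meta;      /* metadata + ECC, stored in the 9th (ECC) chip */
};
```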

FIG. 3A is a block diagram illustrating an embodiment of a dirty cache line tracker 150, the dirty cache line tracker 150 to track dirty cache lines in the cache 136 (the first level of the secondary main memory) illustrated in FIG. 1.

Dirty cache line tracker 150 includes a plurality of dirty cache line entries 302 to quickly and efficiently track the location of dirty cache line 200 in cache 136. As discussed in connection with fig. 1, dirty cache line tracker 150 is stored in cache 136.

FIG. 3B is a block diagram illustrating an embodiment of a dirty cache line tracker cache 152 in the volatile memory controller 114. Dirty cache line tracker cache 152 includes a plurality of dirty cache line tracker entries 312.

FIG. 4 is a block diagram illustrating the relationship between the dirty cache line tracker 150 and cache lines 200 in cache 136. In an embodiment, each dirty cache line entry 302 in dirty cache line tracker 150 may store 512 dirty bits, each of the 512 dirty bits corresponding to one of 512 consecutive cache lines 200 in cache 136. There is one dirty cache line entry 302 for every 8 consecutive banks 400, where each cache bank 402 in the 8 consecutive banks 400 has 64 cache lines 200. Dirty cache line entry 302 includes a dirty bit vector 304, a valid (V) bit 306, and ECC 310. The state of the valid bit 306 indicates whether the data stored in the dirty bit vector 304 is valid. Each bit in the dirty bit vector 304 corresponds to one of the cache lines 200 in the 8 consecutive banks 400 in the cache 136. The state of a dirty bit in the dirty bit vector 304 indicates whether the corresponding cache line 200 in the cache 136 is dirty or clean.

In an embodiment, dirty cache line tracker 150 is a Static Random Access Memory (SRAM). Each time the state of a dirty bit changes (either from a logical "1" to a logical "0" or from a logical "0" to a logical "1"), for example when data is written to a cache line 200 in the cache 136, the corresponding dirty bit in the dirty bit vector 304 in the dirty cache line tracker 150 is updated to reflect the change. In an embodiment, the dirty cache line tracker 150 in cache 136 is cached in the dirty cache line tracker cache 152 in the volatile memory controller 114. A dirty cache line tracker entry 312 in the dirty cache line tracker cache 152 includes the dirty bit vector 304, a valid (V) bit 306, a tag 308, and ECC 310. Tag 308 stores the address of the first bank of the consecutive banks in cache 136 whose cache lines correspond to the dirty bits in the dirty bit vector 304.

In an embodiment in which cache 136 is 512 Gigabytes (GB), 1 GB of the 512 GB is allocated to the dirty cache line tracker 150 (that is, approximately 0.2% of cache 136). A dirty bit in the dirty bit vector 304 in a dirty cache line entry 302 in the dirty cache line tracker 150 indicates whether the corresponding cache line 200 in bank 402 is dirty or clean. Dirty cache line entries 302 are cached in the dirty cache line tracker cache 152 in the volatile memory controller 114. In an embodiment, the dirty cache line tracker cache 152 is a set associative cache.
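A hedged C sketch of the structures in FIGS. 3A, 3B and 4, and of the index arithmetic they imply, is shown below. The type and helper names are hypothetical; the constants follow the layout described above (one 512-bit entry per 8 banks of 64 cache lines), which is also what yields the roughly 0.2% tracker overhead: one dirty bit per 64-byte line is 1/(64*8) = 1/512 of the cache capacity, or 1 GB for a 512 GB cache.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINES_PER_ENTRY   512u   /* 512 dirty bits per dirty cache line entry 302 */
#define BANKS_PER_ENTRY   8u     /* 8 consecutive banks 400 ... */
#define LINES_PER_BANK    64u    /* ... of 64 cache lines each: 8 * 64 = 512 */

/* Dirty cache line entry 302: dirty bit vector 304 plus valid (V) bit 306.
 * ECC 310 is omitted here; it would be generated by the memory controller. */
struct dirty_entry {
    uint64_t dirty_vec[LINES_PER_ENTRY / 64];  /* 512-bit dirty bit vector 304 */
    bool     valid;                            /* valid (V) bit 306 */
};

/* Dirty cache line tracker entry 312 as cached in the controller-side
 * dirty cache line tracker cache 152: the entry plus its tag 308. */
struct dirty_tracker_cache_entry {
    uint64_t           tag;     /* tag 308: address of the first covered bank */
    struct dirty_entry entry;
};

/* Map a cache line index in cache 136 to its tracker entry and bit position. */
static inline uint64_t tracker_entry_index(uint64_t cache_line_index)
{
    return cache_line_index / LINES_PER_ENTRY;
}

static inline unsigned tracker_bit_index(uint64_t cache_line_index)
{
    return (unsigned)(cache_line_index % LINES_PER_ENTRY);
}

static inline bool entry_line_is_dirty(const struct dirty_entry *e, unsigned bit)
{
    return (e->dirty_vec[bit / 64] >> (bit % 64)) & 1u;
}

static inline void entry_set_dirty(struct dirty_entry *e, unsigned bit, bool dirty)
{
    uint64_t mask = 1ull << (bit % 64);
    if (dirty)
        e->dirty_vec[bit / 64] |= mask;
    else
        e->dirty_vec[bit / 64] &= ~mask;
}
```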

In the embodiment in which approximately 0.2% of the cache 136 is allocated to the dirty cache line tracker 150, the data in persistent memory 132 that maps to the cache locations occupied by the dirty cache line tracker 150 is remapped to other locations in the cache 136 by changing the state of the most significant bit in the tag 308.

FIG. 5 is a flow diagram illustrating the use of the dirty cache line tracker to bypass lookups of dirty bits in cache lines in cache 136.

At block 500, if a write request is received, processing continues with block 504. Otherwise, processing continues with block 502.

At block 502, if an insert request is received, processing continues with block 506.

At block 504, the dirty cache line entry 302 in the dirty cache line tracker cache 152 in the volatile memory controller 114 is read to determine the state of the dirty bit associated with the cache line 200 to be written. When data is written to a cache line 200 in cache 136 whose dirty bit 210 is a logical "0", indicating that the cache line is clean, that is, not modified (written to) after being copied from persistent memory 132, the dirty bit 210 of the cache line 200 changes from a logical "0" (clean) to a logical "1" (dirty). After the data is written to cache line 200, if the dirty bit associated with the cache line is present in a dirty cache line entry 302 in the dirty cache line tracker cache 152, that dirty bit is changed from a logical "0" (clean) to a logical "1" (dirty). If the dirty bit associated with the cache line is not present in a dirty cache line entry 302 in the dirty cache line tracker cache 152, the dirty cache line entry 302 is fetched from the dirty cache line tracker 150 and inserted into the dirty cache line tracker cache 152, and the dirty bit is then changed from a logical "0" (clean) to a logical "1" (dirty).
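A minimal sketch of the block 504 write path follows, assuming hypothetical controller-side helpers (dct_cache_lookup, dct_cache_fill, and so on); it only illustrates the lookup, miss fill, and dirty bit update sequence described above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical controller-side hooks; names are illustrative only. */
struct dct_entry;                                           /* cached dirty cache line entry 302 */
struct dct_entry *dct_cache_lookup(uint64_t entry_idx);     /* hit in tracker cache 152, or NULL */
struct dct_entry *dct_cache_fill(uint64_t entry_idx);       /* fetch entry from tracker 150 in DRAM */
bool dct_entry_get(struct dct_entry *e, unsigned bit);
void dct_entry_set(struct dct_entry *e, unsigned bit, bool dirty);
void cache_write_data(uint64_t line_idx, const void *buf);  /* write data 202 into cache line 200 */
void cache_set_dirty_bit(uint64_t line_idx, bool dirty);    /* update dirty bit 210 in metadata */

#define LINES_PER_ENTRY 512u

/* Block 504: handle a write request to a cache line resident in cache 136. */
void handle_write(uint64_t line_idx, const void *buf)
{
    uint64_t entry_idx = line_idx / LINES_PER_ENTRY;
    unsigned bit       = (unsigned)(line_idx % LINES_PER_ENTRY);

    /* Consult the dirty cache line tracker cache 152 first; on a miss, the
     * entry is fetched from the tracker 150 in cache 136 and inserted. */
    struct dct_entry *e = dct_cache_lookup(entry_idx);
    if (e == NULL)
        e = dct_cache_fill(entry_idx);

    cache_write_data(line_idx, buf);

    /* If the line was clean, mark it dirty in both the cache line metadata
     * (dirty bit 210) and the tracker entry (dirty bit vector 304). */
    if (!dct_entry_get(e, bit)) {
        cache_set_dirty_bit(line_idx, true);
        dct_entry_set(e, bit, true);
    }
}
```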

At block 506, a dirty cache line is selected to be evicted from cache 136 in response to a request to insert a cache line into cache 136. The dirty cache line tracker cache 152 is accessed to identify the dirty cache line based on the state of the dirty bits stored in the dirty cache line tracker cache 152. The modified data stored in the dirty cache line is written back to persistent memory 132; the dirty bit 210 of the corresponding cache line 200 changes from a logical "1" (dirty) to a logical "0" (clean), and the dirty bit in the dirty cache line tracker cache 152 corresponding to the evicted cache line 200 also changes from a logical "1" (dirty) to a logical "0" (clean).
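The block 506 eviction path can be sketched in the same style; again, the helper names are hypothetical. The point of the sketch is that the dirty check consults only the dirty cache line tracker cache 152, not the full cache line.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hooks, as in the previous sketch. */
bool dct_line_is_dirty(uint64_t line_idx);              /* dirty bit from tracker cache 152 */
void dct_clear_dirty(uint64_t line_idx);                /* clear bit in dirty bit vector 304 */
void cache_read_line(uint64_t line_idx, void *buf);
void cache_clear_dirty_bit(uint64_t line_idx);          /* clear dirty bit 210 in metadata */
void pm_write_line(uint64_t pm_addr, const void *buf);  /* write back to persistent memory 132 */
uint64_t cache_line_pm_addr(uint64_t line_idx);         /* persistent memory address from tag 206 */

#define CACHE_LINE_BYTES 64u

/* Block 506: evict the direct-mapped victim line to make room for an insert. */
void evict_for_insert(uint64_t victim_line_idx)
{
    /* The dirty check bypasses a read of the full cache line: only the
     * tracker cache 152 is consulted. */
    if (dct_line_is_dirty(victim_line_idx)) {
        uint8_t buf[CACHE_LINE_BYTES];
        cache_read_line(victim_line_idx, buf);
        pm_write_line(cache_line_pm_addr(victim_line_idx), buf);
        cache_clear_dirty_bit(victim_line_idx);  /* logical "1" (dirty) -> "0" (clean) */
        dct_clear_dirty(victim_line_idx);
    }
    /* A clean victim can simply be overwritten by the inserted line. */
}
```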

FIG. 6 is a flow diagram illustrating the use of the dirty cache line tracker cache 152 to improve bandwidth utilization in the secondary main memory.

At block 600, persistent memory bandwidth utilization and cache bandwidth utilization are monitored. In an embodiment, bandwidth utilization is monitored by counting the number of requests in the read queue and the write queue. Overall bandwidth utilization of secondary memory can be improved if both persistent memory bandwidth and cache memory bandwidth can be efficiently utilized.

At block 602, if there are no requests in the read queue and the write queue, the persistent memory is idle. If the persistent memory is idle, processing continues with block 604. If the persistent memory is not idle, processing continues with block 600.

At block 604, if the cache is bandwidth saturated and the persistent memory is idle, processing continues with block 606. If the cache is not bandwidth saturated and the persistent memory is idle, processing continues with block 608.

At block 606, the cache is bandwidth saturated and the persistent memory is idle, so a request to read the cache may be redirected from the volatile memory controller 114 to the persistent memory controller 138 to be serviced by persistent memory 132, depending on the state of the dirty bit associated with the cache line in the cache in which the data is stored. Whether a read request to the cache is for unmodified data stored in the cache can be determined directly from the state of the dirty bit in the dirty cache line tracker cache, and such a request may be serviced by the persistent memory. Only requests to read data stored in a cache line whose dirty bit is "0" (which may be referred to as "clean data requests") may be redirected to persistent memory. If the request is to read data stored in a cache line whose dirty bit is "1" (which may be referred to as a "dirty data request"), the access is not redirected to persistent memory because the copy of the data stored in the persistent memory is stale.

At block 608, when the persistent memory is idle and the cache is not bandwidth saturated, modified data in cache lines in the cache may be written back to persistent memory. The cache lines storing modified data (also referred to as "dirty data") to be written back to persistent memory can be determined directly from the state of the dirty bits in the dirty cache line tracker cache. Writing back modified data (also referred to as "dirty cache lines" or "dirty blocks") while the persistent memory is idle reduces the time needed to evict a cache line later, because the modified data has already been written back to the persistent memory.
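The decision logic of blocks 600 through 608 can be summarized in the following hedged sketch; the queue-depth test for idleness and the helper names are illustrative assumptions, not taken from the source.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical monitoring hooks (block 600): bandwidth utilization is tracked
 * by counting outstanding requests in the read and write queues. */
unsigned pm_read_queue_depth(void);
unsigned pm_write_queue_depth(void);
bool     cache_bandwidth_saturated(void);               /* near-memory (cache 136) channel saturated? */
bool     request_targets_dirty_line(uint64_t line_idx); /* dirty bit from tracker cache 152 */
void     redirect_read_to_pm(uint64_t line_idx);        /* service read from persistent memory 132 */
void     service_read_from_cache(uint64_t line_idx);
void     opportunistic_writeback_one_dirty_line(void);  /* pick a dirty line via tracker 150 */

static bool pm_idle(void)   /* block 602: no pending persistent memory requests */
{
    return pm_read_queue_depth() == 0 && pm_write_queue_depth() == 0;
}

void balance_bandwidth(uint64_t pending_read_line_idx)
{
    if (!pm_idle())
        return;                                   /* keep monitoring (back to block 600) */

    if (cache_bandwidth_saturated()) {
        /* Block 606: a clean-data read can be serviced by persistent memory;
         * a dirty-data read must stay in the cache, since the persistent copy is stale. */
        if (!request_targets_dirty_line(pending_read_line_idx))
            redirect_read_to_pm(pending_read_line_idx);
        else
            service_read_from_cache(pending_read_line_idx);
    } else {
        /* Block 608: use the idle persistent memory to write back dirty lines
         * early, shortening later evictions. */
        opportunistic_writeback_one_dirty_line();
    }
}
```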

FIG. 7 is a flow diagram illustrating the use of the dirty cache line tracker cache 152 to reduce battery capacity in a system with a secondary main memory.

Cache 136 is volatile memory and may include battery-powered Dynamic Random Access Memory (DRAM). The battery capacity (a measure of the amount of energy stored by the battery) is selected to ensure that all data in the cache 136 can be flushed to the persistent memory 132 in the event of a loss of power to the system. Without any optimization, sufficient battery capacity is needed to ensure that the system can operate for long enough after a power loss event to write all of the data stored in the cache 136 to the persistent memory 132.

At block 700, a count of dirty blocks is maintained in the system to track the number of cache lines in cache 136 that are dirty. In a system including a battery with a fixed capacity, the number of dirty cache lines that may be written back to persistent memory 132 when power is provided to the system by the battery represents a threshold (or dirty cache line budget) for the number of cache lines that may be modified at any point in time.

At block 702, if the number of dirty cache lines in cache 136 is greater than the dirty cache line budget, processing continues to block 704.

At block 704, data stored in dirty cache lines in the cache is written back to the persistent memory 132 until the count of dirty blocks falls below the dirty cache line budget.
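A short sketch of blocks 700 through 704 is shown below; the derivation of the dirty cache line budget from battery hold-up time and write-back bandwidth is an illustrative assumption, not taken from the source.

```c
#include <stdint.h>

/* Hypothetical hooks; names are illustrative only. */
uint64_t dirty_block_count(void);         /* running count of dirty lines (block 700) */
void     writeback_one_dirty_line(void);  /* flush one dirty line located via tracker 150 */

/*
 * Illustrative budget derivation: if the battery can power the system for
 * t_hold seconds and dirty lines can be flushed to persistent memory at
 * bw_flush bytes/s, then at most (t_hold * bw_flush) / 64 of the 64-byte
 * cache lines may be dirty at any point in time.
 */
uint64_t dirty_line_budget(double t_hold_s, double bw_flush_Bps)
{
    return (uint64_t)((t_hold_s * bw_flush_Bps) / 64.0);
}

/* Blocks 702-704: scrub until the dirty-block count drops below the budget. */
void enforce_dirty_budget(uint64_t budget)
{
    while (dirty_block_count() > budget)          /* block 702 */
        writeback_one_dirty_line();               /* block 704 */
}
```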

In secondary main memory that uses dirty cache line tracker 150 to track dirty cache lines in cache 136, in one embodiment, battery capacity is only needed to ensure that 25% of the cache contents can be flushed to persistent memory 132 after a power failure. This significantly reduces the battery cost (based on battery capacity) of the secondary main memory.

The flow diagrams illustrated herein provide examples of sequences of various process actions. The flow diagrams may indicate operations to be performed by software or firmware routines and physical operations. In one embodiment, the flow diagram may illustrate the state of a Finite State Machine (FSM), which may be implemented in hardware and/or software. Although shown in a particular sequence or order, the order of the acts may be modified unless otherwise indicated. Thus, the illustrated embodiments should be understood only as examples, and the processes may be performed in a different order, and some actions may be performed in parallel. Additionally, in various embodiments, one or more acts may be omitted; thus, not all acts may be required in every embodiment. Other process flows are also possible.

With respect to various operations or functions described herein, the various operations or functions may be described or defined as software code, instructions, configurations, and/or data. The content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content of the embodiments described herein may be provided via an article of manufacture having the content stored thereon, or via a method of operating a communication interface to transmit data via the communication interface. A machine-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with any of a hardwired, wireless, optical, etc. medium to communicate with another device, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, etc. The communication interface may be configured by providing configuration parameters and/or transmitting signals to prepare the communication interface to provide data signals describing the software content. The communication interface may be accessed via one or more commands or signals sent to the communication interface.

The various components described herein may be modules for performing the described operations or functions. Each component described herein includes software, hardware, or a combination of these. A component may be implemented as a software module, a hardware module, special-purpose hardware (e.g., application specific hardware, Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.

In addition to those described herein, various modifications may be made to the disclosed embodiments and implementations of the invention without departing from their scope.
