Apparatus and method for data management in virtualized very large scale environments

Document No.: 1270528    Publication date: 2020-08-25

Description: This technology, "Apparatus and method for data management in virtualized very large scale environments," was created by M.埃瓦斯特希 and R.布伦南 on 2015-08-19. Abstract: According to one general aspect, a memory management unit (MMU) may be configured to interface with a heterogeneous storage system that includes multiple types of storage media. Each type of storage medium may be based on a respective storage technology and may be associated with one or more performance characteristics. The MMU may receive a data access to the heterogeneous storage system. The MMU may determine a storage medium of the heterogeneous storage system to provide the data access. The target storage medium may be selected based on at least one performance characteristic associated with the target storage medium and a quality of service flag associated with a virtual machine and indicating one or more performance characteristics. The MMU may route the data access from the virtual machine to the at least one storage medium.

1. An apparatus, comprising:

a memory management unit configured to:

interface with a heterogeneous storage system comprising a plurality of types of storage media, wherein each type of storage medium is based on a respective storage technology and is associated with one or more performance characteristics;

receive a data access to the heterogeneous storage system from a virtual machine;

determine at least one storage medium of the heterogeneous storage system to provide the data access, wherein a target storage medium is selected based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag that is associated with the virtual machine and indicates one or more performance characteristics; and

route the data access from the virtual machine to the at least one storage medium.

2. The apparatus of claim 1, wherein the memory management unit is configured to move data associated with the virtual machine from a first storage medium to a second storage medium in response to a triggering event.

3. The apparatus of claim 2, wherein the triggering event comprises not accessing the data for a predetermined period of time.

4. The apparatus of claim 2, wherein the triggering event comprises a relaxation of one or more of the performance characteristics guaranteed to the virtual machine.

5. The apparatus of claim 1, wherein the quality of service flag comprises at least two portions:

wherein a first portion of the quality of service flag indicates a performance characteristic guaranteed to the virtual machine; and

wherein a second portion of the quality of service flag indicates a range of values of the performance characteristic guaranteed to the virtual machine.

6. The apparatus of claim 1, wherein the memory management unit is configured to:

maintain a count of an amount of allocable storage space associated with each storage medium; and

route the data access from the virtual machine to at least one of the storage media based at least in part on the amount of allocable storage space associated with each respective storage medium and the quality of service flag.

7. The apparatus of claim 6, wherein the memory management unit is configured to distribute data associated with the virtual machine between two or more of the storage media.

8. The apparatus of claim 6, wherein the memory management unit is configured to allocate memory pages of the virtual machine between two or more storage devices, wherein the two or more storage devices share a same physical address space.

9. The apparatus of claim 1, wherein the virtual machine is configured to execute a plurality of applications; and

wherein each said application is associated with a plurality of quality of service flags indicating one or more performance characteristics guaranteed to the virtual machine.

10. The apparatus of claim 1, wherein the heterogeneous storage system includes both volatile and non-volatile storage media.

11. A method, comprising:

receiving, from a virtual machine executing on a processor, a data access to a heterogeneous storage system,

wherein the heterogeneous storage system comprises a plurality of types of storage media, wherein each type of storage medium is based on a respective storage technology and is associated with one or more performance characteristics;

determining, by a memory management unit, a target storage medium of the heterogeneous storage system for the data access based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag that is associated with the virtual machine and indicates one or more performance characteristics guaranteed to the virtual machine; and

causing, by the memory management unit, the data access to be routed at least partially between the processor and the target storage medium.

12. The method of claim 11, further comprising, in response to a triggering event, moving data associated with the virtual machine from a first storage medium to a second storage medium.

13. The method of claim 12, wherein the triggering event comprises not accessing the data for a predetermined period of time.

14. The method of claim 12, wherein the triggering event comprises a relaxation of one or more of the performance characteristics guaranteed to the virtual machine.

15. The method of claim 11, wherein the quality of service flag comprises at least two portions:

wherein a first portion of the quality of service flag indicates a performance characteristic guaranteed to the virtual machine; and

wherein a second portion of the quality of service flag indicates a range of values of the performance characteristic guaranteed to the virtual machine.

16. The method of claim 11, wherein determining a target storage medium comprises:

maintaining a count of an amount of allocable storage space associated with each storage medium; and

selecting the target storage medium based at least in part on the amount of allocable storage space associated with each respective storage medium and the quality of service flag.

17. The method of claim 11, wherein the virtual machine is configured to execute a plurality of applications; wherein each said application is associated with a plurality of quality of service flags indicating one or more performance characteristics guaranteed to the virtual machine; and

wherein determining the target storage medium comprises determining which executing application is associated with the data access.

18. An apparatus, comprising:

a processing side interface configured to receive a data access to a storage system;

a memory router configured to:

determine whether the memory access is targeted to a heterogeneous storage system comprising a plurality of types of storage media, wherein each type of storage medium is based on a respective storage technology and is associated with one or more performance characteristics; and

if the memory access is targeted to the heterogeneous storage system, select a target storage medium of the heterogeneous storage system for the data access based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag that is associated with the data access and indicates one or more performance characteristics; and

a heterogeneous storage system interface configured to route the data access at least partially to the target storage medium if the target of the memory access is a heterogeneous storage system.

19. The apparatus of claim 18, wherein the memory router is configured to move data associated with the virtual machine from a first storage medium to a second storage medium in response to a triggering event.

20. The apparatus of claim 18, wherein the memory router is configured to:

maintain a count of an amount of allocable storage space associated with each storage medium; and

select a target storage medium based at least in part on the amount of allocable storage space associated with each respective storage medium and the quality of service flag,

such that a less preferred storage medium is selected as the target storage medium when the preferred storage medium is below a threshold level of allocable storage space.

Technical Field

This description relates to data storage, and more particularly to storage of data in heterogeneous storage systems.

Background

The term memory hierarchy is commonly used in computer architecture when discussing performance issues in the design of computer architectures. Traditionally, a "memory hierarchy" in the computer storage context distinguishes each level of the "hierarchy" by response time. Since response time, complexity, and capacity are often related, the levels may also be distinguished by their controlling technology (e.g., transistor storage, electrically erasable programmable read-only memory, magnetic storage, optical storage, etc.).

Traditionally, computing devices have had several general levels in the memory hierarchy. The first and fastest level comprises the processor's registers and the instruction/data caches near the execution units (traditionally constructed from Static Random Access Memory (SRAM)). The second and next fastest level may be a unified instruction and data cache that is significantly larger than the caches of the preceding level. This level is typically shared among one or more CPUs and other execution or processing units, such as Graphics Processing Units (GPUs), Digital Signal Processors (DSPs), and the like. The next level is the main memory or system memory, traditionally composed of dynamic RAM (DRAM) in integrated circuits external to the processor; some or all of it may be used as a cache. The next level of the memory hierarchy is often much slower than the preceding level. It typically includes magnetic or solid-state storage (e.g., hard disk or NAND flash technology, etc.) and is referred to as "secondary storage." The final level is the slowest and traditionally includes large-capacity media (e.g., compact discs, tape backups, etc.).

Disclosure of Invention

According to one general aspect, an apparatus may include a memory management unit. The memory management unit may be configured to interface with a heterogeneous storage system that includes multiple types of storage media. Each type of storage medium may be based on a respective storage technology and may be associated with one or more performance characteristics. The memory management unit may be configured to receive a data access to the heterogeneous storage system from a virtual machine. The memory management unit may also be configured to determine at least one of the storage media of the heterogeneous storage system to provide the data access. The target storage medium may be selected based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag associated with the virtual machine and indicating the one or more performance characteristics. The memory management unit may be configured to route the data access from the virtual machine to the at least one storage medium.
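
As an illustration only (not part of the claimed subject matter), the selection logic of this aspect can be sketched in a few lines of Python. The class names, fields, and latency figures below are hypothetical placeholders, not terms or values taken from this disclosure.

from dataclasses import dataclass

# Hypothetical, simplified model of the selection described above: the MMU picks
# the storage medium whose performance characteristics satisfy the QoS flag
# associated with the virtual machine that issued the data access.

@dataclass
class Medium:
    name: str
    latency_ns: int        # one example performance characteristic (illustrative values)
    volatile: bool

@dataclass
class QoSFlag:
    max_latency_ns: int    # characteristic guaranteed to the VM
    needs_persistence: bool

def select_medium(media, qos):
    # Keep only media that meet the guarantees indicated by the QoS flag,
    # then prefer the fastest acceptable medium.
    acceptable = [m for m in media
                  if m.latency_ns <= qos.max_latency_ns
                  and (not qos.needs_persistence or not m.volatile)]
    return min(acceptable, key=lambda m: m.latency_ns) if acceptable else None

media = [Medium("DRAM", 100, True), Medium("PRAM", 300, False),
         Medium("SSD", 100_000, False)]
print(select_medium(media, QoSFlag(max_latency_ns=500, needs_persistence=True)).name)  # PRAM

In this sketch the QoS flag simply carries a latency bound and a persistence requirement; a real flag could carry any of the performance characteristics discussed herein.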

According to another general aspect, a method may include receiving, from a virtual machine executed by a processor, a data access to a heterogeneous storage system. The heterogeneous storage system may include multiple types of storage media, each type of storage medium being based on a respective storage technology and associated with one or more performance characteristics. The method may also include determining, by a memory management unit, a target storage medium of the heterogeneous storage system for the data access based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag associated with the virtual machine and indicating one or more performance characteristics guaranteed to the virtual machine. The method may also include the memory management unit causing the data access to be routed at least partially between the processor and the target storage medium.

According to another general aspect, an apparatus may include a processing side interface configured to receive a data access to a storage system. The apparatus may include a memory router configured to determine whether the memory access is targeted to a heterogeneous storage system comprising a plurality of types of storage media, wherein each type of storage medium is based on a respective storage technology and is associated with one or more performance characteristics; and, if the memory access is targeted to the heterogeneous storage system, to select a target storage medium of the heterogeneous storage system for the data access based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag associated with the data access and indicating the one or more performance characteristics. The apparatus may also include a heterogeneous storage system interface configured to route the data access at least partially to the target storage medium if the target of the memory access is the heterogeneous storage system.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.

A system and/or method for data storage, and more particularly for storing data in a heterogeneous storage system, substantially as shown in and/or described in connection with at least one of the figures, is set forth more completely in the claims.

Drawings

Fig. 1 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.

Fig. 2 is a block diagram of an exemplary embodiment of an apparatus according to the disclosed subject matter.

Fig. 3a is a block diagram of an exemplary embodiment of a system according to the disclosed subject matter.

Fig. 3b is a block diagram of an exemplary embodiment of a system according to the disclosed subject matter.

Fig. 3c is a block diagram of an exemplary embodiment of a system according to the disclosed subject matter.

Fig. 4 is a block diagram of an exemplary embodiment of an apparatus according to the disclosed subject matter.

Fig. 5 is a flow chart of an exemplary embodiment of a technique in accordance with the disclosed subject matter.

Fig. 6a is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.

Fig. 6b is a block diagram of an exemplary embodiment of a system according to the disclosed subject matter.

Fig. 7 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.

Fig. 8 is a flow chart of an exemplary embodiment of a technique in accordance with the disclosed subject matter.

FIG. 9 is a schematic block diagram of an information handling system that may include devices formed in accordance with the principles of the disclosed subject matter.

Like reference symbols in the various drawings indicate like elements.

Detailed Description

Various exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some exemplary embodiments are shown. The subject matter of the present disclosure may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the subject matter of the disclosure to those skilled in the art. In the drawings, the size and relative sizes of layers and regions may be exaggerated for clarity.

It will be understood that when an element or layer is referred to as being "on," "connected to," or "coupled to" another element or layer, it can be directly on, connected, or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element or layer, there are no intervening elements or layers present. Like numbers refer to like elements throughout. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the presently disclosed subject matter.

For ease of description, spatially relative terms such as "beneath," "below," "lower," "above," "upper," and the like may be used herein to describe one element's or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein interpreted accordingly.

The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting of the subject matter of the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Exemplary embodiments are described herein with reference to cross-sectional views, which are schematic views of idealized exemplary embodiments (and intermediate structures). As such, deviations from the shapes of the figures are to be expected due to, for example, manufacturing techniques and/or tolerances. Thus, exemplary embodiments should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, an implanted region illustrated as a rectangle typically has rounded or curved features and/or a gradient of implant concentration at its edges, rather than a binary change from implanted to non-implanted region. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the surface through which the implantation takes place. Thus, the regions illustrated in the figures are schematic in nature, their shapes are not intended to illustrate the actual shape of a region of a device, and they are not intended to limit the scope of the presently disclosed subject matter.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the presently disclosed subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Exemplary embodiments will be explained in detail below with reference to the accompanying drawings.

Fig. 1 is a block diagram of an exemplary embodiment of a system 100 in accordance with the disclosed subject matter. In the illustrated embodiment, a mechanism is shown for organizing and manipulating computing systems having various memory and/or storage technologies (e.g., DRAM, NAND, hard disk, etc.).

In various embodiments, the system 100 may include: a processor 102; a memory controller, switch, or interconnect 104; and a heterogeneous storage system 106. In various embodiments, the heterogeneous storage system 106 may include a plurality of different storage media (e.g., storage media 116, 126, 136, 146, etc.). In such embodiments, the heterogeneous storage system 106 may include different types of storage media based on various storage technologies. In some embodiments, these technologies may include, but are not limited to: DRAM, phase-change RAM (PRAM), NAND or flash memory (e.g., SSD, etc.), resistive RAM (RRAM), magnetoresistive RAM (MRAM), magnetic storage (e.g., HDD, etc.), and the like. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
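
For illustration, the heterogeneous storage system 106 can be pictured as a small table of media, each tagged with its technology and a few performance characteristics. The following Python sketch uses invented field names and qualitative values purely as placeholders.

# Illustrative only: a minimal data model of a heterogeneous storage system such as
# system 106, with one entry per storage medium. The technologies mirror the examples
# in the text; the characteristic columns and values are placeholders.

HETEROGENEOUS_STORAGE_SYSTEM = [
    # (medium,      technology,         volatile, relative speed, typical role)
    ("medium 116",  "DRAM",             True,     "fast",         "working memory"),
    ("medium 126",  "PRAM",             False,    "medium",       "persistent memory"),
    ("medium 136",  "NAND flash (SSD)", False,    "slower",       "secondary storage"),
    ("medium 146",  "magnetic (HDD)",   False,    "slowest",      "bulk storage"),
]

for name, tech, volatile, speed, role in HETEROGENEOUS_STORAGE_SYSTEM:
    print(f"{name}: {tech:18s} volatile={volatile!s:5s} speed={speed:7s} role={role}")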

Each memory/storage technology may have different power, speed, throughput, capacity, and/or cost characteristics. More generally, these characteristics may be referred to as "performance characteristics." Because of these different performance characteristics, storage media employing different storage technologies have traditionally been segregated in such systems. For example, the processor 102 accesses fast but volatile memory (e.g., DRAM, etc.) via a first protocol and a first chipset component or circuit (e.g., an integrated memory controller, a memory controller hub (MCH), the northbridge of a chipset, etc.). In contrast, the processor 102 accesses slower but non-volatile storage (e.g., HDD, SSD, etc.) via a second protocol and possibly via a second chipset component or circuit (e.g., an input/output (I/O) controller hub (ICH), the southbridge of a chipset, etc.). The use of specific protocols and specialized circuitry makes it difficult to change the storage technologies in a system (e.g., because a change requires substituting one technology for another, etc.). In the illustrated embodiment, the heterogeneous storage system 106 and the memory interconnect 104 allow various storage technologies to be employed within the system 100.

In the illustrated embodiment, the system 100 includes a processor 102. Processor 102, in turn, may include a main Central Processing Unit (CPU) 190 or multiple CPU cores. In various embodiments, CPU 190 may be configured to execute software programs that, in turn, access and manipulate data (e.g., data 194, etc.). In some embodiments, the processor 102 may include a cache hierarchy 192, the cache hierarchy 192 forming a first level of a memory hierarchy of the system 100. In various embodiments, cache hierarchy 192 may include SRAMs arranged in multiple levels (e.g., level 0 (L0), level 1 (L1), level 2 (L2), etc.).

When the processor 102 cannot access desired data 194 in the cache hierarchy 192, the processor 102 may attempt to access the data 194 (e.g., read the data, write the data, etc.) through another level of the memory hierarchy (e.g., in main memory, a hard drive, etc.). In the illustrated embodiment, the processor 102 may include a memory input/output (I/O) interface 193 configured to access one or more levels of the memory hierarchy located outside of the processor 102.

Further, in various embodiments, the memory I/O interface 193 of the processor 102 may be configured to communicate with memory. In the illustrated embodiment, the memory I/O interface 193 may be configured to communicate with the memory interconnect 104 and, through the memory interconnect 104, with the heterogeneous storage system 106. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In the illustrated embodiment, the system 100 may include a memory interconnect 104. The memory interconnect 104 may be configured to route data accesses (e.g., data writes, data reads, etc.) by the processors 102 to a target storage medium. In the illustrated embodiment, the target storage media may be included in the heterogeneous storage system 106.

In some embodiments, the heterogeneous storage system 106 may include a plurality of different types of storage media. By way of non-limiting example, the heterogeneous storage system 106 may include four different storage media (e.g., storage media 116, 126, 136, and 146, etc.), each based on a different storage technology (e.g., DRAM, PRAM, flash memory, magnetic storage, etc.) and having different performance characteristics (e.g., volatility, speed, fast write speed, non-volatility, capacity, limited write cycles, etc.). It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In such an embodiment, it may be desirable to store different data in different types of memory. As described above, memory interconnect 104 may be configured to determine which storage medium should store data 194, or which storage medium is storing data 194, and to route data accesses by the processor to the desired storage medium. In various embodiments, memory interconnect 104 may be configured to route data access to a target storage medium or a selected storage medium based at least in part on one or more performance characteristics of various storage media (e.g., storage media 116, 126, 136, and 146, etc.).

For example, data 194 that is frequently accessed or considered temporary may be stored in a volatile, fast storage medium (e.g., the DRAM storage medium 116), while data 194 that is rarely accessed or is stored permanently (or semi-permanently) may be stored in a non-volatile storage medium (e.g., the HDD storage medium 146). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In various embodiments, data 194 may be associated with a particular data type indicator or performance indicator (shown in FIG. 2) that provides hints, address ranges or values, quality-of-service information, or instructions to the memory interconnect 104 as to what type of storage medium or which performance characteristics are important for or associated with the particular data 194. In various embodiments, each data type may be associated with one or more desired or optimal storage requirements or capabilities, such as, for example, access speed (e.g., read and/or write performance), persistence, storage energy efficiency, access size, and the like.
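
A minimal sketch, with invented indicator names and requirement fields, of how such a data type indicator might be translated into the storage requirements that the memory interconnect weighs when choosing a medium:

# Sketch (not from this disclosure) mapping a hypothetical data type indicator to the
# storage requirements used during medium selection. Indicator names are invented.

DATA_TYPE_REQUIREMENTS = {
    "temporary":   {"persistence": False, "prefer": "fast"},
    "permanent":   {"persistence": True,  "prefer": "capacity"},
    "rarely_used": {"persistence": True,  "prefer": "low_cost"},
}

def requirements_for(data_type_indicator):
    # Fall back to conservative defaults when the indicator is unknown.
    return DATA_TYPE_REQUIREMENTS.get(data_type_indicator,
                                      {"persistence": True, "prefer": "capacity"})

print(requirements_for("temporary"))   # routed toward DRAM-like media
print(requirements_for("permanent"))   # routed toward SSD/HDD-like media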

For example, if data 194 is marked or associated with a data type that indicates that data 194 is temporary data, data 194 may be routed to DRAM storage medium 116. In such embodiments, memory interconnect 104 may determine that the performance characteristics provided by DRAM storage media 116 match well (or perhaps best) with the associated data type. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In various embodiments, the memory interconnect 104 may be configured to route data preferentially to one of the various storage media, depending on the type of the data. In some embodiments, more than one storage medium may be acceptable for the data. In such embodiments, the memory interconnect 104 may be configured to rank the acceptable storage media according to one or more criteria (e.g., access speed, volatility, etc.) and then select a target storage medium according to other factors (e.g., the available capacity of the storage medium, available bus bandwidth, the number of available write ports, which storage medium already stores the data, quality of service and reservations, etc.), as illustrated in the sketch below. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
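
The two-step choice described above (rank the acceptable media by a primary criterion, then pick among them using other factors such as remaining allocable space) might look like the following sketch; the ranking criterion, threshold, and page counts are invented for illustration.

# Illustrative sketch: order acceptable media by speed rank, then pick the first
# candidate that still has allocable space above a threshold. All names/numbers
# are hypothetical.

def choose_target(acceptable_media, free_pages, threshold=16):
    # acceptable_media: list of (name, speed_rank), lower rank = faster
    # free_pages: name -> count of allocable pages maintained by the interconnect
    ranked = sorted(acceptable_media, key=lambda m: m[1])
    for name, _rank in ranked:
        if free_pages.get(name, 0) >= threshold:
            return name
    # Even the less preferred media are exhausted: report failure (or trigger eviction).
    return None

media = [("DRAM", 0), ("PRAM", 1), ("SSD", 2)]
print(choose_target(media, {"DRAM": 4, "PRAM": 900, "SSD": 50_000}))  # "PRAM": DRAM is nearly full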

In some embodiments, the processor 102, or a piece of software executed by the processor 102 (e.g., an application, operating system, device driver, etc.), may set the data type dynamically. In another embodiment, the data type may be set statically when the software is compiled or created, or at runtime as instructed by the operating system. In yet another embodiment, one or more data types may be associated with a particular memory address region or regions. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

As described in detail below in conjunction with FIG. 2, in various embodiments, the memory interconnect 104 may provide a unified or common interface or protocol to the processor 102 for accessing the multiple storage media 116, 126, 136, and 146. Further, the memory interconnect 104 may provide respective interfaces to the various storage media 116, 126, 136, and 146 that employ the respective protocols used by those storage media. In such embodiments, the memory interconnect 104 may be configured to translate a data access from the unified access protocol to the storage-medium-specific protocol employed by the storage medium used to store the data, and vice versa for any response to the data access.

In various embodiments, each storage medium (e.g., storage media 116, 126, 136, and 146) may include a media controller (e.g., media controllers 117, 127, 137, and 147, respectively) configured to interface with the memory interconnect 104 via an appropriate protocol. In some embodiments, one or more of the storage media 116, 126, 136, and 146 may employ the same or similar protocols. In various embodiments, each storage medium (e.g., storage media 116, 126, 136, and 146) may also include a respective storage portion (e.g., storage portions 118, 128, 138, and 148, respectively) configured to store the data.

As described in detail below in conjunction with FIG. 4, in various embodiments, the heterogeneous storage system 106 may include multiple layers of a conventional memory hierarchy. For example, heterogeneous storage system 106 may include both a traditional second tier of memory hierarchy (via DRAM storage media 116) and a traditional third tier of memory hierarchy (via SSD storage media 136 and HDD storage media 146). In such embodiments, the processor 102 may not be responsible for determining which level of the conventional memory hierarchy to access. Instead, the memory interconnect 104 may be configured to determine which level of a conventional memory hierarchy to access.

Fig. 2 is a block diagram of an exemplary embodiment of an apparatus 200 according to the disclosed subject matter. In some embodiments, the apparatus 200 may be or may include a memory interconnect (e.g., the memory interconnect 104 of FIG. 1). In various embodiments, the apparatus 200 may be configured to route a data access 290 from the processor to one of a plurality of storage media based at least in part on one or more performance characteristics associated with the respective storage technology of the selected storage medium.

In some embodiments, the apparatus 200 may include a processor I/O interface 202. In such an embodiment, the processor I/O interface 202 may be configured to receive data accesses 290 sent by the processor (the processor is not shown in FIG. 2 but is represented by the off-page double-headed arrow). For example, in various embodiments, the processor I/O interface 202 may be configured to interact with the processor's memory I/O interface (e.g., the memory I/O interface 193 of FIG. 1). The processor I/O interface 202 may also be configured to send the results of the data access 290 (e.g., a write acknowledgement, the requested data 194, etc.) to the processor. In various embodiments, the processor I/O interface 202 may be configured to communicate with the processor via a unified access protocol that allows the processor to access the various storage media regardless of the individual protocols they may use.

In various embodiments, the apparatus 200 may include a plurality of memory interfaces 206 (e.g., memory interfaces 216, 226, 296, etc.). In such an embodiment, each memory interface 206 may be configured to send a data access 290 to a corresponding storage medium (not shown in FIG. 2 but represented by the off-page double-headed arrows). Each memory interface 206 may also be configured to receive the results of the processor's data access 290 (e.g., a write acknowledgement, the requested data 194, etc.). In various embodiments, each memory interface 206 may be configured to communicate with a particular type of storage medium via a storage-medium-specific or storage-media-type-specific protocol. In some embodiments, multiple storage media may use or employ the same memory interface. For example, a system may include a PRAM and a DRAM that utilize the same interface protocol and, therefore, both may be accessed by the universal memory controller 204. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In one embodiment, the apparatus 200 may include a configurable memory controller 204. In such embodiments, the configurable memory controller 204 may be configured to dynamically route the data access 290 between the processor and one of the plurality of storage media. As described above, in various embodiments, the configurable memory controller 204 may make this routing decision based at least in part on one or more performance characteristics associated with each of the respective storage media.

In various embodiments, the apparatus 200 may include a set of performance characteristics 219. In such embodiments, the performance characteristics 219 may indicate one or more performance characteristics associated with each respective memory interface 206 and, by extension, with the storage medium communicatively coupled to that memory interface 206. In such embodiments, the performance characteristics 219 may be obtained by scanning or interrogating the storage media (e.g., at boot-up, at device initialization, in response to a triggering event such as a hot-swap indication, etc.). In another embodiment, the performance characteristics 219 may be loaded into the memory of the apparatus 200 that stores the performance characteristics 219 from an external source (e.g., a program, the internet, a device driver, a user, etc.).

In some embodiments, the performance characteristics 219 may include information or values of relatively coarse-grained accuracy (e.g., large design tolerances, minimum performance guarantees, credits, the number of memory banks in a memory chip, the number of data bus signals to a memory chip, the time required to access a column or row of a memory page, the time of a memory read or write access, etc.). In another embodiment, the performance characteristics 219 may include information or values of fine-grained accuracy (e.g., performance characteristics measured on the actual storage devices, tight design tolerances, etc.). In yet another embodiment, the performance characteristics 219 may include various levels or granularities of accuracy. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In the illustrated embodiment, the memory controller 204 may reference or read the performance characteristics 219 and employ them (in whole or in part) when determining which storage medium should service a data access 290. As described below with reference to other figures, other factors may also be considered when routing the data access 290 (e.g., a cache hit, available storage capacity, an operating mode such as a low-power operating mode, etc.). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

As described above, in various embodiments, the data access 290 may include a data type indicator 294. In some embodiments, the data type indicator 294 may take the form of a first message sent prior to a conventional data access message. In one embodiment, the data type indicator 294 may include a message indicating that all future data accesses (until the next data type message) are to be considered part of a particular data type. In another embodiment, the data type indicator 294 may include a flag or field in the data access message 290. In yet another embodiment, the data type indicator 294 may be implicit in the data access message 290; for example, the data access 290 may target a memory address associated with a particular data type. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In various embodiments, the memory of the apparatus 200 may store one or more storage preferences 239. These storage preferences 239 may affect how and where the data access 290 is routed. Examples of storage preferences 239 may include (but are not limited to): a preference to store data in a low-power storage medium; a preference to maximize data throughput, data stability, and/or reliability for a given storage medium; a preference that wear on a storage medium (e.g., a storage technology with a limited number of write cycles) not exceed a certain level; and the like. These storage preferences 239 (as well as the performance characteristics 219, the data type 294, etc.) may be taken into account when determining how to route the data access 290.

As described above, in various embodiments, the memory controller 204 may be configured to compare the data type 294 to the memory hierarchy parameters 229 and the performance characteristics 219 of the various storage media. The memory controller 204 may then attempt to match the data 194 to the particular storage medium that best satisfies the prevailing storage preferences 239. The data access 290 may then be routed, through the associated memory interface 206, to the selected or target storage medium.

In various embodiments, the storage preferences 239 and/or the performance characteristics 219 may be updated dynamically as the condition of the storage media changes. For example, if a storage medium becomes full or runs out of space available for storing data 194, its performance characteristics 219 may be updated. In another embodiment, the performance characteristics 219 may be updated if the storage medium experiences data errors or, more generally, exceeds a predetermined threshold for a certain characteristic (e.g., operating temperature, number of errors, number of write cycles for a given block, etc.).

In yet another embodiment, the storage preferences 239 may be changed if a triggering event occurs for the apparatus 200 or for a system that includes the apparatus 200 (e.g., a change in power supply, a change in physical location, a change in the network employed by the system, a user-generated instruction, etc.). In some embodiments, there may be multiple sets of storage preferences 239, and the selection of which set to use at a given moment may depend on the system environment or system settings. For example, if the system (and thus the apparatus 200) is operating with a substantially unlimited power supply (e.g., electrical power from a wall outlet, etc.), the storage preferences 239 may favor performance over reliability (e.g., a preference for fast but volatile memory, etc.). Conversely, if the system changes to a limited power supply (e.g., it is unplugged from the wall outlet and runs on a battery, etc.), a second set of storage preferences 239 may be used that favors low power consumption and improved reliability in the event of a power failure (e.g., a preference for low-power, non-volatile memory, etc.). Another example of a triggering event that dynamically changes the active storage preferences 239 may be a storage medium exceeding a threshold (e.g., becoming too hot, etc.), in which case the storage preferences 239 may be changed to avoid the hot storage medium and thus give it an opportunity to cool. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
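
A small sketch, with invented preference names, of keeping several sets of storage preferences 239 and switching the active set when a triggering event such as a power-source change occurs:

# Minimal sketch of switching among preference sets on a triggering event.
# The set names and fields are hypothetical.

STORAGE_PREFERENCE_SETS = {
    "wall_power": {"favor": "performance", "allow_volatile": True},
    "battery":    {"favor": "low_power_nonvolatile", "allow_volatile": False},
}

class PreferenceManager:
    def __init__(self):
        self.active = STORAGE_PREFERENCE_SETS["wall_power"]

    def on_power_event(self, source):
        # Triggering event: the platform reports a new power source.
        self.active = STORAGE_PREFERENCE_SETS[source]

mgr = PreferenceManager()
mgr.on_power_event("battery")      # e.g., the system was unplugged
print(mgr.active)                  # routing now prefers low-power, non-volatile media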

In various embodiments, the apparatus 200 may include one or more co-processors or accelerator processors 208. In such embodiments, the accelerator processors 208 may be dedicated circuits, functional unit blocks (FUBs), and/or combinational logic blocks (CLBs) configured to perform particular tasks for the memory controller 204 as part of the routing operation. In some embodiments, a particular task may include helping to determine to which storage medium the data access 290 should be routed. In another embodiment, a particular task may include converting or translating the data access 290, or a portion thereof (e.g., the data 194), between communication protocols or otherwise as part of the routing operation. In some embodiments, a particular task may be direct memory access (DMA) 260, such that transfers occur directly between any of the storage media 116, 126, 136, 146, etc. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In some embodiments, the apparatus 200 may include a protocol translation circuit 256 configured to translate a data access 290 from a first protocol (e.g., the unified protocol employed by the processor, etc.) to a second protocol (e.g., a storage-medium-specific protocol, etc.), and vice versa. In some embodiments, the protocol translation circuit 256 may be considered a co-processor or accelerator processor 208.

In various embodiments, the apparatus 200 may include an encryption circuit 258 configured to encrypt and/or decrypt at least the data portion 194 of the data access 290. In some embodiments, the data 194 may be encrypted as it passes over a bus that couples a storage medium to the apparatus 200 or couples the processor to the apparatus 200. In various embodiments, a subset of the plurality of storage media may store encrypted data. In some embodiments, the encryption circuit 258 may be considered a co-processor or accelerator processor 208.

As described below with reference to FIG. 3a, in various embodiments, the apparatus 200 may be configured to treat a plurality of the storage media as a cache or cache hierarchy. Conventional cache hierarchies that are tightly integrated with a processor or processor core (e.g., the cache hierarchy 192 of FIG. 1) include mechanisms and structures for detecting whether a piece of data is present at a cache level (e.g., a translation lookaside buffer (TLB), memory address tags, etc.) and protocols for managing the contents of the entire cache hierarchy (e.g., cache hit/miss messages, snoop messages, cache directories, fill requests, etc.). However, conventional storage media such as primary storage (e.g., DRAM, etc.) and secondary storage (e.g., HDD, SSD, etc.) lack these structures and communication protocols. In the illustrated embodiment, the apparatus 200 may include structures to perform the same tasks for multiple storage media that have been organized into a hierarchy and operate as a cache hierarchy (external to the processor).

In the illustrated embodiment, the apparatus 200 may include a cache or tier organization circuit 252. In various embodiments, the cache or tier organization circuit 252 may be configured to organize the plurality of storage media into a virtual cache hierarchy or another organization (e.g., tiers, sets, etc.). By way of example, the cache case is the focus of the description here; the organization into tiers or groups is discussed with reference to FIGS. 3b and 3c.

In such embodiments, the cache organization circuit 252 may be configured to designate each storage medium as a level in a cache hierarchy. In various embodiments, this may be done in accordance with one or more performance characteristics of the storage type. For example, a fast, volatile storage medium (e.g., DRAM, etc.) may be placed at a higher level in the hierarchy, while a slower, non-volatile storage medium (e.g., HDD, etc.) may be placed at a lower level in the hierarchy. In some embodiments, the grouping or allocation of levels in the hierarchy may be determined by a set of memory hierarchy parameters 229 or storage preferences 239.
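
For illustration, assigning media to levels of the virtual cache hierarchy by a single performance characteristic (latency, with invented values) could be sketched as:

# Sketch (hypothetical field names and values): faster media become higher levels
# of the virtual cache hierarchy, as described above. Level 0 is the highest level.

media = [
    {"name": "DRAM 116",  "latency_ns": 100},
    {"name": "PRAM 126",  "latency_ns": 300},
    {"name": "flash 136", "latency_ns": 100_000},
]

hierarchy = {level: m["name"]
             for level, m in enumerate(sorted(media, key=lambda m: m["latency_ns"]))}
print(hierarchy)   # {0: 'DRAM 116', 1: 'PRAM 126', 2: 'flash 136'}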

In various embodiments, the question of where data 194 is currently stored (or is to be stored) may arise as a data access 290 is handled by the memory controller 204. Since the storage media may not have the capability to process cache queries (e.g., cache hit requests, snoops, etc.), the apparatus 200 or another device may be responsible for keeping track of where each piece of data 194 is stored. In various embodiments, the apparatus 200 may include a cache look-up table 254 configured to track where the data 194, or the memory address associated with the data, is currently stored.

For example, if the data access 290 is a read request, the cache look-up table 254 may indicate that the data 194 is stored in the highest level of the virtual cache, and the memory controller 204 may route the data access 290 to the corresponding storage medium (e.g., the storage medium coupled to the memory type 1 interface 216, etc.). In another example, the cache look-up table 254 may indicate that the data 194 is not stored in the highest level of the virtual cache but is stored in the next highest level, and the memory controller 204 may route the data access 290 to that storage medium (e.g., the storage medium coupled to the memory type 2 interface 226, etc.). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In another example, if the data access 290 is a write request, the cache look-up table 254 may indicate that there is space available for the data 194 in the highest level of the virtual cache, and the memory controller 204 may route the data access 290 to the appropriate storage medium (e.g., the storage medium coupled to the memory type 1 interface 216, etc.). In yet another example, the cache look-up table 254 may indicate that there is no space available for the data 194 in the highest level of the virtual cache, but it may nevertheless be highly desirable (e.g., as determined by the data type 294, the storage preferences 239, etc.) to store the data 194 in the highest level of the virtual cache. In such an embodiment, the memory controller 204 may evict a piece of data from the highest level and move it to a lower level (updating the cache look-up table 254 in the process), and then store the new data 194 in the newly available storage location in the highest level of the virtual cache. In such an embodiment, the apparatus 200 may be configured to generate or issue data accesses on its own in order to perform maintenance on the virtual cache. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
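
The read/write handling described in the last two paragraphs can be illustrated with a toy cache look-up table; the capacities, eviction choice, and all names below are invented and far simpler than a real implementation would be.

# Toy sketch of a cache look-up table: it records which virtual-cache level holds
# each address, and a write to a full highest level demotes one entry to the next level.

class CacheLookupTable:
    def __init__(self, capacity_per_level):
        self.capacity = capacity_per_level          # e.g., {0: 2, 1: 8}
        self.level_of = {}                          # address -> level

    def lookup(self, addr):
        return self.level_of.get(addr)              # None means "not cached"

    def write(self, addr):
        level = 0                                   # prefer the highest level
        if self._count(level) >= self.capacity[level]:
            victim = next(a for a, l in self.level_of.items() if l == level)
            self.level_of[victim] = level + 1       # evict/demote the victim, update table
        self.level_of[addr] = level

    def _count(self, level):
        return sum(1 for l in self.level_of.values() if l == level)

table = CacheLookupTable({0: 2, 1: 8})
for addr in (0x10, 0x20, 0x30):
    table.write(addr)
print(table.lookup(0x30), table.lookup(0x10))       # 0 (highest level), 1 (demoted)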

In various embodiments, the memory controller 204 may update or maintain the cache look-up table 254 each time any data access 290 to the virtual cache hierarchy occurs. In one embodiment, the cache/hierarchy organization circuitry 252 and/or the cache look-up table 254 may be considered a co-processor or accelerator processor 208.

It should be appreciated that the above are merely a few illustrative examples of co-processors or accelerator processors 208 and that the disclosed subject matter is not so limited. In various embodiments, other co-processing circuits 250 may be included in the apparatus 200 as co-processors or accelerator processors 208.

Fig. 3a is a block diagram of an exemplary embodiment of a system 300 according to the disclosed subject matter. In various embodiments, system 300 may comprise variations or different versions of system 100 shown in FIG. 1.

In the illustrated embodiment, a multi-processor system is shown. In such embodiments, the system 300 may include a second processor 302. In various embodiments, multiple processors may be present in a system (e.g., 4, 6, 8, 16 processors, etc.), although only two are shown for purposes of illustration. Also, a single processor chip or integrated circuit may include multiple CPU cores.

For example, in one embodiment, a server rack may include multiple multi-processor computing subsystems, blades, sleds, or units. In such an embodiment, any of the multi-processor blades may send data accesses to the heterogeneous storage system 106. In some such embodiments, a memory controller or interconnect 304a may be included as part of an accelerator subsystem, blade, sled, or unit, and the various computing blades may be coupled to the accelerator blade. In such embodiments, the memory interconnect 304a may be configured to aggregate data accesses from multiple computing units (e.g., the processors 102 and 302, etc.) and distribute them to the heterogeneous collection of storage media (e.g., the heterogeneous storage system 106, etc.). In some embodiments, the memory interconnect 304a may also facilitate some local traffic, such as peer-to-peer communication between two of the subsystem's memory types.

In various embodiments, if multiple processors are included in the system, the system may employ a scheme that extends the address-to-memory-type mapping with entries such as processor IDs or similar identifiers. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.
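
A sketch of such an extended mapping, with an invented layout in which each entry pairs a processor ID with an address range and a memory type:

# Hypothetical layout: (processor_id, start, end) -> memory type.
ADDRESS_MAP = [
    (0, 0x0000_0000, 0x3FFF_FFFF, "DRAM"),
    (0, 0x4000_0000, 0x7FFF_FFFF, "NAND"),
    (1, 0x0000_0000, 0x3FFF_FFFF, "PRAM"),
]

def memory_type_for(processor_id, address):
    for pid, start, end, mem_type in ADDRESS_MAP:
        if pid == processor_id and start <= address <= end:
            return mem_type
    return None   # unmapped: not routed to the heterogeneous storage system

print(memory_type_for(0, 0x4100_0000))  # NAND
print(memory_type_for(1, 0x0000_1000))  # PRAM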

In addition, FIG. 3a also illustrates the ability of the memory interconnect 304a to organize the heterogeneous storage system 106 into a cache hierarchy 305. In the illustrated embodiment, the cache hierarchy 305 may include only a subset of the heterogeneous storage system 106, although in another embodiment the entire heterogeneous storage system 106 may be included. Specifically, in the illustrated embodiment, the cache hierarchy 305 may include the first storage medium 116 (e.g., DRAM, etc.) as the highest level of the cache hierarchy 305, the second storage medium 126 (e.g., PRAM, etc.) as an intermediate level, and the third storage medium 136 (e.g., flash memory, etc.) as the lowest level, while the fourth storage medium 146 (e.g., HDD, etc.) may remain outside the cache hierarchy 305. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

As described above, the cache hierarchy 305 may be organized by the memory interconnect 304a, and more specifically by the cache organization circuit 352a included in the memory interconnect 304a. In such embodiments, the cache organization circuit 352a may monitor all data accesses to the cache hierarchy 305 and instruct the memory interconnect 304a where to find or where to store the data.

For example, the processor 102 may request to read data (via a data access 392). The memory interconnect 304a may determine that the data access is destined for the cache hierarchy 305 (e.g., as opposed to the fourth storage medium 146, or as opposed to a particular member of the hierarchy, etc.). In such embodiments, the memory interconnect 304a may query the cache organization circuit 352a (or a look-up table, as described above) as to which storage medium includes the desired data. In the illustrated embodiment, the data may be stored in the first storage medium 116, and the data access 392 may be routed there. In another embodiment, the data may instead be stored in the second storage medium 126 or the third storage medium 136, and the data access 392 would be routed there as appropriate. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In another example, the processor 102 may request to write data (via a data access 392). Again, the memory interconnect 304a may determine that the data access is destined for the cache hierarchy 305 (e.g., as opposed to the fourth storage medium 146, or as opposed to a particular member of the hierarchy, etc.). The memory interconnect 304a may query the cache organization circuit 352a (or a look-up table, as described above) as to which storage medium, if any, includes the desired data. In this example, the cache organization circuit 352a may respond that all three levels of the cache hierarchy 305 include the data. In such embodiments, the memory interconnect 304a may select any of the levels based on various criteria (e.g., the cache hierarchy, the data type, performance characteristics, storage preferences, etc.).

In the illustrated embodiment, the data may be stored in the first storage medium 116, and the data access 392 may be routed there. In such embodiments, the cache organization circuit 352a may mark the copies of the data stored in the third storage medium 136 and the second storage medium 126 as invalid in its internal tables. In such embodiments, the memory interconnect 304a may be configured to perform cache coherency operations on the cache hierarchy 305.

In one embodiment, the data accesses 394 and 396 illustrate that the memory interconnect 304a may be configured to initiate data accesses on its own. In the illustrated embodiment, this may be done for the purpose of maintaining or managing the cache hierarchy 305, although there may be other reasons. Specifically, in one embodiment, after a data write (e.g., data access 392) has updated data or written new data to a higher cache level (e.g., storage medium 116), the copies of that data in the lower cache levels (e.g., storage media 126 and 136) may be considered invalid or stale.

In various embodiments, the memory interconnect 304a may be configured to mirror data stored in higher cache levels in the lower cache levels. In one such embodiment, if the higher level of the cache hierarchy 305 that includes the data is a volatile storage medium, this may include mirroring the data in a non-volatile level of the cache hierarchy 305.

In the illustrated embodiment, after the data is written to a higher cache level (e.g., storage medium 116), the memory interconnect may initiate a data access 394 to write the data to the next cache level (e.g., storage medium 126). Further, when this is done, the data may be copied to the next cache level down (e.g., storage medium 136) via a data access 396. In such an embodiment, the data may be considered valid or fresh once it has been mirrored. Such memory-to-memory transfers may be facilitated by a DMA circuit (e.g., the DMA circuit 260 shown in FIG. 2). In the illustrated embodiment, the data accesses 394 and 396 are shown as reading the data from a higher cache level and writing it to a lower cache level. In some embodiments, the memory interconnect 304a may include buffers or other temporary storage elements that can hold the data; in such an embodiment, the data accesses 394 and 396 may only involve writes from the buffer to the lower cache levels. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In various embodiments, the data accesses initiated by the memory interconnect 304a may include reading data, writing data, moving data, modifying data, and/or deleting data. In such embodiments, the memory interconnect 304a may perform maintenance operations on the heterogeneous storage system 106. In another embodiment, the memory interconnect 304a may move data up or down the cache hierarchy. For example, in one embodiment, the memory interconnect 304a may be configured to move data up the cache hierarchy when the data is accessed more frequently, thereby providing faster access to it. Conversely, in another embodiment, when data is accessed infrequently, the memory interconnect 304a may be configured to move that data down the cache hierarchy, thereby increasing the space available for more frequently accessed data. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
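
The promotion/demotion behavior described above can be illustrated with a toy policy; the access-count thresholds and structure are invented for illustration.

from collections import Counter

# Hypothetical sketch: count accesses per address, promote hot data toward level 0
# (the highest/fastest level) and demote cold data during a periodic sweep.

class TierManager:
    def __init__(self, levels=3, hot=8, cold=1):
        self.level = {}                  # address -> current level (0 = highest/fastest)
        self.hits = Counter()
        self.levels, self.hot, self.cold = levels, hot, cold

    def access(self, addr):
        self.hits[addr] += 1
        lvl = self.level.setdefault(addr, self.levels - 1)
        if self.hits[addr] >= self.hot and lvl > 0:
            self.level[addr] = lvl - 1   # promote frequently accessed data
            self.hits[addr] = 0

    def sweep(self):
        # Periodic maintenance pass: demote data that was rarely accessed.
        for addr, lvl in self.level.items():
            if self.hits[addr] <= self.cold and lvl < self.levels - 1:
                self.level[addr] = lvl + 1
        self.hits.clear()

mgr = TierManager()
for _ in range(8):
    mgr.access(0xA0)
print(mgr.level[0xA0])   # promoted from level 2 to level 1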

Fig. 3b is a block diagram of an exemplary embodiment of a system 301 according to the disclosed subject matter. In the illustrated embodiment, system 301 may include a memory interconnect 304b. Memory interconnect 304b may include hierarchy organization circuitry 352b.

In the illustrated embodiment, system 301 may include a heterogeneous storage system 306. The heterogeneous storage system 306 may be the same as the heterogeneous storage system of figs. 1 and 3a, except for a few differences. For example, the third storage medium 336 may be based on HDD technology instead of the flash or NAND technology of figs. 1 and 3a. In such embodiments, multiple storage media (e.g., storage media 336 and 146) may be based on similar or identical technologies (e.g., magnetic storage, etc.). It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

Further, in the illustrated embodiment, the flash storage medium 136 has been moved from the third storage medium position and is now the second storage medium. The PRAM-type storage medium of figs. 1 and 3a is not present in system 301 at all. In such an embodiment, heterogeneous memory system 306 includes a DRAM-type storage medium (storage medium 116), a flash/NAND-type storage medium (storage medium 136), and two magnetic storage media (storage media 336 and 146). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

As described above, the system 301 may organize these different storage/memory types into different tiers in a hierarchical manner. In some embodiments, as described above with reference to FIG. 3a, the tiers may be organized into cache tiers, with one or more tiers accessed before or in preference to other tiers. In other embodiments, such as the embodiments shown in figs. 3b and 3c, the organization may not be cache based.

In various embodiments, the organization may be performed by hierarchy organization circuitry 352b and may be based at least in part on storage hierarchy parameters, performance characteristics, and/or data type needs. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

In the illustrated embodiment, the hierarchy may be organized by storage or memory technology. In such embodiments, the first storage tier 312 may include the DRAM or first storage medium 116. The second storage tier 322 may include the NAND or second storage medium 136. The third storage tier 332 may include the magnetic storage media 336 and 146.

In such embodiments, upon receiving data access 380 from processor 102, memory interconnect 304b may determine which storage tier (e.g., tier 312, 322, or 332) will complete or provide data access 380. As described above, this determination may be based on factors such as: the data type of the data associated with data access 380; the performance characteristics not only of the individual storage media but also of the tier itself; and/or a set of storage preferences. In various embodiments, data access 380 may be routed as data access 381, 382, or 383, depending on the storage tier selected to receive data access 380.
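A minimal sketch of such tier selection follows, assuming three tiers with per-tier performance characteristics and a set of per-data-type storage preferences. The tier names, preference fields, and the policy of using the slowest tier that still satisfies the preferences (to conserve faster tiers) are assumptions for illustration, not details taken from the disclosure.

```python
# Hedged sketch: choose a storage tier from (a) the data type, (b) per-tier
# performance characteristics, and (c) storage preferences.

TIERS = {
    "tier_312_dram": {"latency_ns": 50,        "volatile": True},
    "tier_322_nand": {"latency_ns": 25_000,    "volatile": False},
    "tier_332_hdd":  {"latency_ns": 5_000_000, "volatile": False},
}

# Hypothetical preferences keyed by data type.
PREFERENCES = {
    "hot_working_set": {"max_latency_ns": 100},
    "persistent_log":  {"max_latency_ns": 100_000, "require_nonvolatile": True},
    "archive":         {},
}

def select_tier(data_type):
    prefs = PREFERENCES.get(data_type, {})
    candidates = []
    for name, perf in TIERS.items():
        if prefs.get("require_nonvolatile") and perf["volatile"]:
            continue
        if perf["latency_ns"] > prefs.get("max_latency_ns", float("inf")):
            continue
        candidates.append((perf["latency_ns"], name))
    # Assumed policy: the slowest tier that still satisfies the preferences.
    return max(candidates)[1] if candidates else "tier_332_hdd"

if __name__ == "__main__":
    for dtype in ("hot_working_set", "persistent_log", "archive"):
        print(dtype, "->", select_tier(dtype))
```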

In various embodiments, a storage tier may include various complex data structures or storage systems. For example, the third storage tier 332 includes two storage media (e.g., storage media 336 and 146) and may include storage virtualization in the form of a Redundant Array of Independent Disks (RAID). Examples of such RAID organizations may include: a mirrored array (RAID-1), a striped array (RAID-0), or another form of virtual storage (e.g., a concatenated or spanned array, a Just a Bunch Of Disks (JBOD) array, etc.). In various embodiments, other forms of arrays (e.g., RAID-5, etc.) may be employed for other numbers of storage media.

In another embodiment, a storage tier may include multiple types (a mixture) of storage media (e.g., both SSDs and HDDs, etc.) and may (or may not) include a hybrid cache architecture that leverages the performance characteristics of the separate storage media. In such embodiments, the tiered or partitioned scheme of the heterogeneous storage system 306 may be combined with the cache hierarchy organization scheme of the heterogeneous storage system 306. For example, in various embodiments, the first tier 312 and the third tier 332 may not include a caching scheme (or may not be provided one by the memory interconnect 304b), while the second tier 322 may include a cache hierarchy as described above with reference to fig. 3a.

In a particular example, a tier that provides a mix of two or more storage media may be based primarily on magnetic-technology storage media (e.g., HDD(s)) but include a smaller flash portion (e.g., a single SSD, etc.) that provides faster access to a small portion of the total data stored by the mixed tier. In such embodiments, the two or more distinct storage media may be included in one tier and may be organized into a number of cache levels. In some embodiments, memory interconnect 304b may manage the caching scheme (e.g., cache hits, cache coherence, etc.), as described above. In other embodiments, a separate memory controller (not shown) may be present to manage such a caching scheme. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
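The sketch below illustrates one possible shape of such a mixed tier, under the assumption of a small flash cache portion fronting a larger HDD portion with a simple least-recently-used fill policy. The class, its capacity, and the eviction policy are hypothetical; the disclosure does not specify how the hybrid tier's cache is managed.

```python
# Assumed structure: a small flash cache in front of a large HDD store.

class HybridTier:
    def __init__(self, flash_capacity):
        self.flash = {}                  # small, fast cache portion
        self.hdd = {}                    # large, slow backing portion
        self.flash_capacity = flash_capacity
        self.lru = []                    # least-recently-used order for flash

    def read(self, addr):
        if addr in self.flash:           # cache hit: serve from flash
            self._touch(addr)
            return self.flash[addr]
        data = self.hdd.get(addr)        # cache miss: serve from HDD
        if data is not None:
            self._fill(addr, data)       # promote into the flash portion
        return data

    def write(self, addr, data):
        self.hdd[addr] = data            # write-through to the HDD portion
        self._fill(addr, data)

    def _fill(self, addr, data):
        if len(self.flash) >= self.flash_capacity and addr not in self.flash:
            evicted = self.lru.pop(0)    # evict the least recently used entry
            del self.flash[evicted]
        self.flash[addr] = data
        self._touch(addr)

    def _touch(self, addr):
        if addr in self.lru:
            self.lru.remove(addr)
        self.lru.append(addr)

if __name__ == "__main__":
    tier = HybridTier(flash_capacity=2)
    tier.write(1, b"a"); tier.write(2, b"b"); tier.write(3, b"c")
    print(sorted(tier.flash))            # only the two most recent stay in flash
```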

In various embodiments, a tier or cache level may include only a portion of a particular storage medium. For example, in one embodiment, one level of the cache hierarchy may include 25% (or some other fraction) of a storage medium (e.g., storage medium 136, etc.), with the remainder of that storage medium reserved for non-cache use. In various embodiments, memory interconnect 304b may be configured to dynamically adjust the amount or portion of a storage medium reserved for a cache or tier. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

Fig. 3b and 3c also illustrate the response of the system 301 to a trigger event 370. As described above, memory interconnect 304b may be configured to organize heterogeneous storage system 306 into a hierarchy of storage medium levels (e.g., levels 312, 322, and 332) based at least in part on one or more performance characteristics associated with each type of storage medium.

In the embodiment shown, memory interconnect 304b organizes the hierarchy according to speed. In various embodiments, the tiers 312, 322, and 332 may be prioritized such that the first tier 312 is the fastest and most preferred, the second tier 322 is next, and the third tier 332 is lowest. However, as shown in FIG. 3b, a triggering event 370 may occur (e.g., the storage medium 136 may suddenly exceed an error threshold or a temperature threshold, etc.). As shown in FIG. 3c, upon receipt of the triggering event 370, the memory interconnect 304b may be configured to reorganize the storage medium tiers of the hierarchy (e.g., tiers 312, 322, and 332). In the illustrated embodiment, the hierarchy is reorganized (relative to fig. 3b) such that the failing storage medium 136 is now the third memory tier 332 and the two HDD storage media 336 and 146 are now the second memory tier 322. In such an embodiment, the failing storage medium 136 may be the least preferred storage medium and may be avoided whenever possible. In one such embodiment, the failing storage medium 136 may be used only to complete read data accesses, and write data accesses may be performed to other tiers (e.g., data may be moved away from the failing storage medium to non-failing storage media slowly and as transparently as possible to the processor, etc.). It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.
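A hedged sketch of this reorganization follows: on a trigger event the failing medium is demoted to the lowest-priority position and used only to complete reads of data still resident on it, while writes go to healthy media. The function and medium names are assumptions; only the demote-and-read-only behavior is taken from the description above.

```python
# Sketch of the Figs. 3b/3c reorganization under stated assumptions.

def reorganize_on_trigger(tier_order, failing_medium):
    """tier_order: list of media, most preferred first."""
    healthy = [m for m in tier_order if m != failing_medium]
    return healthy + [failing_medium]          # failing medium becomes last

def route_access(tier_order, failing_medium, op, addr, resident_in):
    """resident_in: the medium currently holding addr, if any."""
    if op == "read" and resident_in == failing_medium:
        return failing_medium                  # reads may still be completed
    for medium in tier_order:                  # writes avoid the failing medium
        if medium != failing_medium:
            return medium
    return tier_order[-1]

if __name__ == "__main__":
    order = ["DRAM_116", "NAND_136", "HDD_336", "HDD_146"]
    order = reorganize_on_trigger(order, "NAND_136")
    print(order)                                              # NAND now lowest
    print(route_access(order, "NAND_136", "write", 0x10, None))       # DRAM_116
    print(route_access(order, "NAND_136", "read", 0x10, "NAND_136"))  # NAND_136
```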

It should be appreciated that there may be many ways to reorganize the tiers (e.g., tiers 312, 322, and 332), and there may be many other triggering events 370 that may cause the memory interconnect 304b to perform the reorganization. Although fig. 3c illustrates reorganization by tier priority (e.g., moving the storage medium 136 to the third tier 332, etc.), the storage media included in the various tiers may also be reorganized. For example, the second tier 322 may be re-formed by adding storage medium 336 to storage medium 136. In such embodiments, the flash-type storage medium 136 may serve as a cache (e.g., providing both speed and storage capacity, etc.) for the HDD-type storage medium 336. Likewise, given other forms or types of storage media (e.g., PRAM, MRAM, etc.), there may be other forms of hierarchy. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

Fig. 4 is a block diagram of an exemplary embodiment of an apparatus 400 in accordance with the disclosed subject matter. The apparatus 400 may be or may include a memory interconnect (e.g., memory interconnect 104 of fig. 1, etc.) and may be similar to the system 200 of fig. 2. However, whereas system 200 of fig. 2 illustrates an embodiment in which the processor or processors employ a unified access protocol, apparatus 400 illustrates the processor or processors using multiple access protocols.

Traditionally, processors interacted with system or main memory (e.g., DRAM, etc.) and with any secondary memory through a portion of a chipset called the "north bridge". The north bridge separated communication with the system memory from communication with the secondary memory: it communicated directly with the system memory via a first protocol and handed communication destined for the secondary memory to another portion of the chipset called the "south bridge", which in turn communicated with the secondary memory via a second protocol. Eventually, the system memory portion of the north bridge was moved into or integrated within the processor (e.g., as a Memory Chip Controller (MCC), Integrated Memory Controller (IMC), etc.). Typically, the processor now communicates with the system memory directly via the first protocol (through the MCC) and communicates with the secondary memory via the second protocol through the chipset (e.g., through an I/O Controller Hub (ICH), Platform Controller Hub (PCH), etc.).

While the embodiment of FIG. 2 communicates with the memory interconnect using a single unified access protocol, current (and legacy) processors use at least two protocols for data access (a first protocol for system memory and a second protocol for secondary memory). Thus, a single unified access protocol may be used in embodiments where the processor has moved away from the traditional two-protocol convention. In the embodiment shown in fig. 4, by contrast, the apparatus 400 is configured to support the multiple protocols employed by conventional processors.

In one embodiment, the apparatus 400 may include a processor system memory interface 402n configured to receive data accesses sent by a processor (not shown) and destined for system memory (e.g., DRAM, etc.). The processor system memory interface 402n may also be configured to send back to the processor the results (e.g., write acknowledgements, requested data 194, etc.) of data accesses that the processor intended for system memory. In various embodiments, the processor system memory interface 402n may be configured to communicate with the processor via a first access protocol, typically the protocol employed by an Integrated Memory Controller (IMC) or similar circuit.

In one embodiment, the apparatus 400 may include a processor secondary storage interface 402s configured to receive data accesses sent by the processor and destined for secondary storage (e.g., HDD, SSD, etc.). The processor secondary storage interface 402s may also be configured to send back to the processor the results (e.g., write acknowledgements, requested data 194, etc.) of data accesses that the processor intended for secondary storage. In various embodiments, the processor secondary storage interface 402s may be configured to communicate with the processor via a second access protocol, typically the protocol employed by an I/O Controller Hub (ICH) or similar circuit.

In various embodiments, the apparatus 400 may include an integrated connection fabric and memory controller 404 configured to handle data accesses from the processor system memory interface 402n and the processor secondary storage interface 402s. In various embodiments, the memory controller 404 (or the coprocessor circuitry 208) may be configured to translate these processor protocols into storage-medium-based protocols, and vice versa.

Further, in various embodiments, memory controller 404 may be configured to route a data access away from the storage medium anticipated by the processor to another storage medium. For example, if a data access is made through the processor system memory interface 402n, the processor expects the data access to occur to system memory (e.g., memory type 1 interface 216, etc.). However, for various reasons, the memory controller 404 may determine that the data access is to occur to a different storage medium (e.g., PRAM, NAND, etc.) and may route the data access accordingly. In such embodiments, the memory controller 404 may be configured to hide the change of storage medium from the processor, or simply not report it.

In another embodiment, the memory controller 404 may be configured to honor or otherwise comply with the processor's storage medium expectations, such that all data accesses arriving through the processor system memory interface 402n are performed against the system memory (e.g., memory type 1 interface 216, etc.). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.
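The sketch below is an illustrative-only model of this dual front end: one entry point for accesses the processor addresses to system memory (analogous to interface 402n) and one for accesses addressed to secondary storage (analogous to interface 402s), with the controller free either to honor or to transparently redirect the expected destination. The class, the media names, and the odd/even redirect policy are hypothetical.

```python
# Hedged sketch of a controller with two processor-facing interfaces.

class MemoryControllerSketch:
    def __init__(self, honor_expectation=False):
        self.honor_expectation = honor_expectation
        self.media = {"dram": {}, "nand": {}, "hdd": {}}

    def from_system_memory_interface(self, op, addr, data=None):
        # Arrived via the IMC-style protocol; the processor expects DRAM.
        target = "dram" if self.honor_expectation else self._pick(addr)
        return self._access(target, op, addr, data)

    def from_secondary_storage_interface(self, op, addr, data=None):
        # Arrived via the ICH-style protocol; the processor expects a disk.
        return self._access("hdd", op, addr, data)

    def _pick(self, addr):
        # Hypothetical policy: spill odd addresses to NAND; the redirect is
        # not reported back to the processor.
        return "nand" if addr % 2 else "dram"

    def _access(self, medium, op, addr, data):
        if op == "write":
            self.media[medium][addr] = data
            return "ack"
        return self.media[medium].get(addr)

if __name__ == "__main__":
    ctrl = MemoryControllerSketch()
    ctrl.from_system_memory_interface("write", 0x11, b"spilled")
    print("nand holds:", ctrl.media["nand"])   # redirected despite DRAM expectation
```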

In various embodiments, the apparatus 400 may include different interfaces (e.g., interfaces 402n, 402s, etc.) for different processors. In such embodiments, a multiprocessor system may be allowed greater, or even uncongested, access to the apparatus 400. In such embodiments, the various processors may employ different communication protocols. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

Fig. 5 is a flow chart of an exemplary embodiment of a technique in accordance with the disclosed subject matter. In various embodiments, a system, such as the system of fig. 1, 3a, 3b, 3c, or 9, may use or produce the technique 500. In addition, systems such as the systems of fig. 2 or 4 may use or produce portions of the technique 500. It should be understood, however, that the foregoing are merely a few illustrative examples and that the disclosed subject matter is not so limited. It is to be appreciated that the disclosed subject matter is not limited by the order or number of acts shown in technique 500.

In one embodiment, block 502 illustrates that data access to a heterogeneous storage system may be received, as described above. In one embodiment, data accesses may be received by the memory interconnect from the processor. In various embodiments, a heterogeneous storage system may include multiple types of storage media, as described above. In some embodiments, each type of storage medium may be based on a respective storage technology and associated with one or more performance characteristics, as described above. In various embodiments, the heterogeneous storage system may include a volatile primary system memory storage medium and a non-volatile secondary storage medium, as described above.

In various embodiments, the multiple types of storage media may be based on two or more different storage technologies, as described above. In some embodiments, the plurality of types of storage media includes storage media based on three or more different storage technologies selected from the group consisting essentially of: Dynamic Random Access Memory (DRAM), Resistive Random Access Memory (RRAM), Phase-Change Random Access Memory (PRAM), Magnetic Random Access Memory (MRAM), NAND flash memory, and magnetic memory, as described above.

In one embodiment, receiving may include receiving data access in a unified access protocol, as described above. In another embodiment, receiving data access may include receiving data access to a first set of one or more storage media via a first access protocol; and receiving data access to a second set of one or more storage media via a second access protocol, as described above.

In various embodiments, one or more of the acts illustrated by this block may be performed by the apparatus or system of fig. 1,2, 3a, 3b, 3c, 4, or 9, the memory interconnect of fig. 1,2, 3a, 3b, 3c, or 4, or the processor, as described above.

Block 504 illustrates that, in one embodiment, based on various characteristics, a storage medium of a heterogeneous storage system may be determined to be a target storage medium for data access, as described above. In various embodiments, this determination may be made based at least in part on at least one performance characteristic associated with the target storage medium, as described above. In various embodiments, one or more of the acts indicated by this block may be performed by the apparatus or system of fig. 1,2, 3a, 3b, 3c, 4, or 9, the memory interconnect of fig. 1,2, 3a, 3b, 3c, or 4, as described above.

Block 506 shows that in one embodiment, data access may be routed, at least in part, between the processor and the target storage medium, as described above. In one embodiment, routing may include translating data access from a uniform access protocol to a storage medium specific protocol employed by the target storage medium, as described above. In various embodiments, receiving the data access may include receiving an indicator of a data type associated with the data access. In such an embodiment, routing may include preferentially routing data to one of multiple types of storage media based on the type of data, as described above. In some embodiments, the data type associated with the data may be set when compiling a software program that when executed by a processor results in data access, as described above. In various embodiments, one or more of the acts indicated by this block may be performed by the apparatus or system of fig. 1,2, 3a, 3b, 3c, 4, or 9, the memory interconnect of fig. 1,2, 3a, 3b, 3c, or 4, as described above.

Block 501 illustrates that, in one embodiment, at least a portion of the multiple types of storage media may be organized into a hierarchy of storage medium tiers, as described above. In some embodiments, the organization may be based at least in part on one or more performance characteristics associated with each type of storage medium, as described above. In various embodiments, organizing may include hierarchically organizing the tiered storage media into a hierarchical cache storage system, as described above. In such embodiments, hierarchically organizing the tiered storage media into a hierarchical cache storage system may include monitoring the data content of each storage medium in the hierarchical cache storage system, as described above. In such embodiments, the determination may include determining which storage medium, if any, includes a copy of the data associated with the data access, as described above. In such embodiments, routing may include routing the data access to the storage medium included in the highest level of the hierarchical cache storage system that includes a copy of the data associated with the data access, as described above. In various embodiments, if the highest level of the hierarchical cache storage system that includes the associated data is a volatile storage medium, the technique 500 may further include mirroring that data in a non-volatile level of the hierarchical cache storage system, as described above.
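The following minimal sketch, under stated assumptions, illustrates the two behaviors this block describes: routing a read to the highest cache level that holds the data, and mirroring data held only in a volatile level into a non-volatile level. The level table and its field names are hypothetical.

```python
# Hedged sketch: per-level content tracking, highest-level routing, and
# volatile-to-non-volatile mirroring.

LEVELS = [                               # highest (fastest) first
    {"name": "DRAM", "volatile": True,  "data": {0x1: b"x"}},
    {"name": "PRAM", "volatile": False, "data": {}},
    {"name": "HDD",  "volatile": False, "data": {0x2: b"y"}},
]

def route_read(addr):
    for level in LEVELS:                 # first hit is the highest level
        if addr in level["data"]:
            return level["name"], level["data"][addr]
    return None, None

def mirror_if_volatile(addr):
    owner = next((lvl for lvl in LEVELS if addr in lvl["data"]), None)
    if owner is not None and owner["volatile"]:
        target = next(lvl for lvl in LEVELS if not lvl["volatile"])
        target["data"][addr] = owner["data"][addr]   # mirror into non-volatile

if __name__ == "__main__":
    print(route_read(0x1))               # served from DRAM, the highest level
    mirror_if_volatile(0x1)
    print(0x1 in LEVELS[1]["data"])      # now also present in the PRAM level
```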

In some embodiments, the technique 500 may further include dynamically reorganizing the storage medium tiers of the hierarchy in response to a triggering event, as described above. In such embodiments, the triggering event may include an at least partial failure of a damaged storage medium included in the heterogeneous storage system, as described above. In one embodiment, the dynamic reorganization may include reducing use of the damaged storage medium, as described above. In various embodiments, one or more of the acts indicated by this block may be performed by the apparatus or system of fig. 1, 2, 3a, 3b, 3c, 4, or 9, or the memory interconnect of fig. 1, 2, 3a, 3b, 3c, or 4, as described above.

Fig. 6a is a block diagram of an exemplary embodiment of a system 600 according to the disclosed subject matter. In the illustrated embodiment, the system 600 may include one or more virtual machines 602 that use the heterogeneous storage system 106, as described above. In such embodiments, memory usage of virtual machine 602 may be routed in heterogeneous storage system 106 to take advantage of various physical characteristics of its storage media.

As described above, in various embodiments, heterogeneous storage system 106 may include a plurality of different storage media (e.g., storage media 116, 126, 136, 146, etc.). In such embodiments, the heterogeneous storage system 106 may include different types of storage media based on various storage technologies. In some embodiments, these technologies may include, but are not limited to, for example: DRAM, Phase-Change RAM (PRAM), NAND or flash memory (e.g., SSD, etc.), Resistive RAM (RRAM), Magnetoresistive RAM (MRAM), magnetic memory (e.g., HDD, etc.), and the like. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

Also, in the illustrated embodiment, the system 600 may include one or more physical or host processors or Central Processing Units (CPUs) 662 and other hardware and/or software components (e.g., a host Operating System (OS), network controller/interface, chipset, etc.). In such embodiments, virtual machine 602 may be executed using these physical or host hardware components 662.

In the illustrated embodiment, system 600 may include one or more Virtual Machines (VMs) 602. While three virtual machines 602, 602a, and 602b are shown, it should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited. In various embodiments, a VM 602 comprises an emulation of a computing system. In particular embodiments, a VM 602 may include an emulation of substantially an entire system platform or device that supports execution of an entire Operating System (OS) and one or more applications. In the parlance of VMs, the real or physical hardware/software that performs or hosts the emulation is referred to as the "host," while the emulated or virtual hardware/software is referred to as the "guest."

In various embodiments, virtual machine 602 may include a virtual processor 692, a virtual memory I/O interface 694, and other emulated general-purpose virtual hardware devices (e.g., network interfaces, storage media, etc.). Further, in various embodiments, virtual machine 602 can execute a guest Operating System (OS) 696 and one or more applications 698. In various embodiments, the VM 602 may process data 682. As part of the VM's processing, the data 682 may be stored in the physical memory of system 600 (e.g., heterogeneous storage system 106, etc.) and accessed (e.g., read, written, etc.) via data accesses.

In the illustrated embodiment, the system 600 may include a virtual layer or Memory Management Unit (MMU) 604. In some embodiments, the MMU 604 may include the memory interconnect of figs. 1, 2, 3a, 3b, or 4, as described above. In the illustrated embodiment, the MMU 604 may be configured to route data or memory accesses between the VMs 602 and the heterogeneous memory system 106, or more specifically, the storage media 116, 126, 136, and 146 of the heterogeneous memory system 106.

In various embodiments, each VM602 may execute various applications 698, and each of these applications 698 may have different system resource requirements or needs. For example, one application 698 may be a file server or database, and may require fast read/write access to information stored in a substantially non-volatile format. Another application 698 may be a web server and may require fast read access to most data cached in volatile memory. In yet another embodiment, the application 698 may be a batch server or a compilation server (e.g., for executing applets, etc.) and may involve fast read/write access to data stored in virtual memory that is ultimately written to a non-volatile storage medium. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

However, regardless of what the purpose of the VM 602 or application 698 is (e.g., Infrastructure as a Service (IaaS), Software as a Service (SaaS), Platform as a Service (PaaS), etc.) and what the system resource requirements are (e.g., reliable storage, fast reads, fast writes, transactions per second, etc.), the VM 602 may be associated with a particular Quality of Service (QoS). In various embodiments, the QoS may guarantee or set a particular level of performance to be provided by the VM 602 or application 698. In some embodiments, these QoS guarantees may be enforced by Service Level Agreements (SLAs). Thus, whoever executes the application 698 knows that the execution of the application 698 will reach a particular service level.

A guaranteed QoS level may be desirable or important (to various users) because one of the features of the VM 602 is that it can be moved substantially seamlessly from one set of physical hardware to another. The respective hardware sets may differ and, therefore, may have different physical characteristics, but with QoS agreements a minimum level of performance may be ensured.

In the illustrated embodiment, each VM 602 or application 698 may be associated with a QoS flag 670, 670a, or 670b indicating the QoS level desired by that application 698 or VM 602. For example, a QoS flag 670 may indicate that application 698 expects or requires a maximum memory latency of 100 nanoseconds (ns). In such an embodiment, system 600 may be configured to provide application 698 with physical hardware that meets the 100 ns maximum latency requirement.

In such embodiments, the MMU604 may be configured to allocate or route data accesses among the various storage media (e.g., media 116, 126, etc.) or storage technologies in accordance with the QoS requirements of the application 698 (as expressed by the associated QoS flag 670). Further, in various embodiments, the MMU604 can be configured to transfer storage data or memory pages between storage media when QoS guarantees are no longer required, or when QoS guarantees can be relaxed (e.g., as determined by a triggering event, etc.).

In the illustrated embodiment, the MMU 604 (or host CPU 662) may be configured to read the QoS flag 670 when transferring the VM 602 to, or executing the VM 602 on, system 600. In some embodiments, the QoS flag 670 may include a first portion 672 indicating the performance characteristic(s) or criteria guaranteed by the virtual machine. Depending in part on the sensitivity of the VM 602 to a given performance characteristic, the MMU 604 may be configured to allocate or route memory accesses from the VM 602 to a corresponding storage medium (e.g., storage media 116, 126, etc.) that satisfies or addresses that performance characteristic of the VM 602. For example, if the VM 602 or application 698 is latency sensitive (as indicated by its QoS flag), then when it first moves to the host server (e.g., system 600), all new pages or memory allocations may be performed within the space allocated to the available fast storage technologies (e.g., storage medium 116 or 126, etc.).

In the illustrated embodiment, the QoS flag 670 may include a second portion 674 that indicates a threshold value or range required for the performance characteristic guaranteed by the VM 602. The MMU 604 may read this flag portion 674 and then use this value to determine where to route the memory accesses of the VM 602. For example, if the second portion 674 represents a latency requirement or threshold of 100 ns, then a memory access or page of the VM 602 or application 698 may be allocated to the DRAM (storage medium 116), which meets the 100 ns requirement. Similar decisions can be made to maximize overall storage utilization, depending on the types of storage technology available.
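As a hedged sketch of the two-part flag described above, the example below models the first portion as the name of the guaranteed characteristic and the second portion as its threshold, and matches the flag against per-medium characteristics (a stand-in for memory characteristics list 611). The concrete latency numbers and field names are assumptions for illustration.

```python
# Sketch under stated assumptions: a two-part QoS flag matched against a
# per-medium characteristics table.

from dataclasses import dataclass

@dataclass
class QoSFlag:
    characteristic: str      # first portion 672, e.g. "latency_ns"
    threshold: float         # second portion 674, e.g. 100 (ns)

MEDIA_CHARACTERISTICS = {    # stand-in for memory characteristics list 611
    "DRAM_116": {"latency_ns": 50,     "nonvolatile": 0},
    "PRAM_126": {"latency_ns": 150,    "nonvolatile": 1},
    "NAND_136": {"latency_ns": 25_000, "nonvolatile": 1},
}

def media_satisfying(flag: QoSFlag):
    """Return the media that satisfy the flag, fastest first."""
    ok = [name for name, traits in MEDIA_CHARACTERISTICS.items()
          if traits.get(flag.characteristic, float("inf")) <= flag.threshold]
    return sorted(ok, key=lambda n: MEDIA_CHARACTERISTICS[n]["latency_ns"])

if __name__ == "__main__":
    flag_670 = QoSFlag(characteristic="latency_ns", threshold=100)
    print(media_satisfying(flag_670))    # only DRAM_116 meets 100 ns here
```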

In one embodiment, the MMU 604 may include a memory characteristics list or database 611 that may associate each actual storage medium or storage technology (e.g., storage media 116, 126, etc.) with the particular physical characteristics it provides (e.g., latency of less than 100 ns, non-volatility, maximum number of writes, write speed, storage capacity, etc.). In some embodiments, the list 611 may be provided when the system 600 is set up. In another embodiment, the list may be updated periodically or upon a triggering event (e.g., the addition of a storage medium to the storage system 106, etc.).

In some embodiments, values or data from the QoS flags 670 (e.g., QoS flag 670a, QoS flag 670b, etc.) may be added to the memory characteristics list 611. In such embodiments, the QoS requirements of the VM 602 or application 698 may be mapped to, or associated with, the various respective storage media. In some embodiments, the mapping or association may be from one QoS flag 670 to multiple storage media. In one such embodiment, the mapping may indicate a hierarchy or priority level among those storage media. For example, if QoS flag 670 indicates a latency requirement of 100 ns, the memory characteristics list 611 may associate memory accesses from application 698 (which in turn is associated with QoS flag 670) primarily with the DRAM (storage medium 116), but secondarily with the PRAM (storage medium 126). In such embodiments, the MMU 604 may route memory accesses from application 698 to the PRAM 126 if access to the DRAM 116 is for some reason not possible or not desirable. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In the illustrated embodiment, the MMU604 may include a memory router 610, the memory router 610 configured to select a target storage medium of the heterogeneous storage system 106 for data access based at least in part on at least one performance characteristic associated with the target storage medium and a QoS flag 670 associated with the virtual machine 602 or the application 698 and indicating one or more performance characteristics guaranteed by the virtual machine 602 or the application 698. In one embodiment, when VM602 makes a memory access (e.g., read, write, etc.) to heterogeneous storage system 106, MMU604 may be responsible for translating the virtual address space employed by VM602 to the real address space employed by system 600. In some embodiments, the real address space may comprise a flat memory space.

In various embodiments, memory router 610 may receive a memory access (e.g., a write of data 682, etc.) and note that the memory access is associated with a particular VM 602 or application 698, and thus with a particular QoS flag 670. In such an embodiment, memory router 610 may then compare the physical characteristic requirement(s) of the QoS flag 670 (e.g., latency below 100 ns) to the physical characteristics of the available storage media. In one embodiment, the physical characteristic information may be stored in the memory characteristics list 611.

In such embodiments, memory router 610 may match the memory access to the appropriate storage medium (e.g., storage medium 126, etc.) and route the data access to that target storage medium (as indicated by the thick line in fig. 6a). As described above, in some embodiments, the associations between the VMs 602 and/or applications 698 and the storage media may be stored in the memory characteristics list 611.

As described above, in various embodiments, the selected target storage medium may not be the storage medium that is most favorable for satisfying the QoS guarantees. In such embodiments, memory router 610 may take additional factors into account in selecting the target storage medium, such as, for example, the amount of free storage space, the bandwidth to the storage medium, congestion at the storage medium, the reliability of the storage medium, and so forth. In one such embodiment, the MMU 604 may include a free pages list 612 configured to maintain a count of the number of allocable storage spaces associated with each storage medium. In such embodiments, if the preferred target storage medium does not have sufficient free space for the data access (e.g., because it lacks capacity for the data access, because of a quota system, because of other thresholds, etc.), memory router 610 may select a less preferred (e.g., second or third choice, etc.) storage medium as the target storage medium. For example, if the DRAM 116 is the preferred target storage medium but is too full (e.g., as determined based on a threshold or the like), memory router 610 may select the PRAM 126 as the target storage medium. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.
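The following minimal sketch illustrates the fallback just described: the router walks a preference-ordered list of candidate media (for example, one derived from QoS flag 670) and skips any medium whose allocable space, per the free pages list, has dropped below a watermark. The 5% watermark and the page counts are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of free-page-list fallback selection.

FREE_PAGES = {"DRAM_116": 120, "PRAM_126": 9_000}      # free pages list 612
TOTAL_PAGES = {"DRAM_116": 4_096, "PRAM_126": 65_536}
LOW_WATERMARK = 0.05                                    # 5% free, assumed

def pick_target(candidates_by_preference):
    for medium in candidates_by_preference:
        free_fraction = FREE_PAGES[medium] / TOTAL_PAGES[medium]
        if free_fraction > LOW_WATERMARK:
            return medium
    return candidates_by_preference[-1]   # least preferred, but has to do

def allocate(medium, pages=1):
    FREE_PAGES[medium] -= pages           # keep the count current

if __name__ == "__main__":
    preference = ["DRAM_116", "PRAM_126"]   # e.g. derived from QoS flag 670
    target = pick_target(preference)
    allocate(target)
    print(target, FREE_PAGES)               # DRAM too full here -> PRAM chosen
```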

Fig. 6b is a block diagram of an exemplary embodiment of a system 600 according to the disclosed subject matter. In the illustrated embodiment, the MMU 604 can be configured to distribute data associated with the virtual machine 602 or the application 698 between two or more storage media (e.g., storage media 116 and 126, storage media 126 and 146, etc.). In some embodiments, the two or more storage media may share the same physical address space. In some embodiments, this may be because the heterogeneous storage system 106 includes a flat storage space. In another embodiment, this may be because the two or more storage media are part of the same portion of a non-flat storage space of the heterogeneous storage system 106. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

Further, in some embodiments, the data access may include a read access in which the data 682 is already stored in a storage medium (e.g., storage medium 116, etc.). In such embodiments, memory router 610 may be configured to select the storage medium that already includes the requested data 682 as the target storage medium, regardless of the physical characteristics of that storage medium and/or the QoS flag 670. In various embodiments, if the data 682 is then edited or modified and rewritten to heterogeneous storage system 106, the MMU 604 may determine, based on the physical characteristics of the storage media and the QoS flag 670, whether the data 682 should be moved or whether the data access should be routed to the storage medium that previously stored the data 682. For example, if the data 682 is rewritten in whole or in large part, the data 682 may be moved from a less preferred storage medium (e.g., PRAM 126, etc.) to a more preferred storage medium (e.g., DRAM 116, etc.). Conversely, if the data 682 is an unmodified part of a very large file or data set, the MMU 604 may elect to keep the data 682 with the larger file or data set on the less preferred storage medium (e.g., PRAM 126, etc.). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited. In some embodiments, the MMU 604 may choose to actively or dynamically move the data 682 between storage media, as described in connection with FIG. 7.

Fig. 7 is a block diagram of an exemplary embodiment of a system 700 in accordance with the disclosed subject matter. In the illustrated embodiment, the system 700 may include a heterogeneous storage system 106 and one or more Virtual Machines (VMs) 602, as described above. In the illustrated embodiment, system 700 may also include a virtual layer or Memory Management Unit (MMU) 704. In various embodiments, the MMU704 of fig. 7 can include (in whole or in part) the MMU604 of fig. 6a and 6b, and vice versa.

In the illustrated embodiment, the data 682 may already be stored in the heterogeneous storage system 106 (e.g., on the storage medium 116, etc.). As described above, in various embodiments, the QoS flag 670 may include a second or requirement portion 674 that indicates a range of values or thresholds for the performance characteristics guaranteed by the VM 602 or application 698. In the illustrated embodiment, the second portion 674 may indicate that the QoS guarantee may be relaxed or reduced. In some embodiments, the second portion 674 may include a list of times or events that may result in relaxing or strengthening the QoS guarantees. In this context, these times or events may be referred to as "triggering events". In various embodiments, the second portion 674 may include a list of new or alternative ranges or threshold(s) associated with each relaxed level of the QoS guarantees.

For example, in one embodiment, the triggering event may be the non-use of a memory page or block. In such embodiments, the QoS flag 670 may indicate that if a page or portion of memory is not accessed (e.g., read from, written to, etc.) for a particular period (e.g., 10 minutes, 50 memory accesses, etc.), the QoS guarantee associated with that page or portion of memory may be relaxed. In such embodiments, the MMU 704 may be configured to move or transfer the data from a first storage medium to a second storage medium.

In such embodiments, the MMU 704 may include an event detector 711 configured to detect that a triggering event has occurred (e.g., a page has not been accessed within a trigger threshold, etc.). In such embodiments, the MMU 704 may actively move data between storage media after the event occurs.
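A hedged sketch of this idle-page trigger follows: pages that have not been touched within their window are flagged by a detector and migrated from a faster medium to a slower one. The ten-minute window, the class names, and the dictionaries standing in for the two media are assumptions for illustration; the migration path stays on the storage side, mirroring the description below of a transfer that avoids the CPU interface.

```python
# Sketch under stated assumptions: detect idle pages and migrate them.

import time

IDLE_WINDOW_S = 600                       # e.g. 10 minutes, per the example

class EventDetectorSketch:
    def __init__(self):
        self.last_access = {}             # addr -> timestamp of last access

    def touch(self, addr):
        self.last_access[addr] = time.monotonic()

    def cold_pages(self, now=None):
        now = time.monotonic() if now is None else now
        return [a for a, t in self.last_access.items()
                if now - t > IDLE_WINDOW_S]

def migrate(cold, fast_medium, slow_medium):
    # Memory-to-memory transfer via the storage side only; nothing here
    # passes back through the CPU-facing interface.
    for addr in cold:
        if addr in fast_medium:
            slow_medium[addr] = fast_medium.pop(addr)

if __name__ == "__main__":
    dram, pram = {0xA: b"idle"}, {}
    detector = EventDetectorSketch()
    detector.touch(0xA)
    # Pretend the window has elapsed by checking against a later timestamp.
    cold = detector.cold_pages(now=time.monotonic() + IDLE_WINDOW_S + 1)
    migrate(cold, dram, pram)
    print(dram, pram)                     # page moved from DRAM to PRAM
```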

In some embodiments, the MMU 704 may include a CPU interface 716 configured to receive/send memory accesses between the MMU 704 and either the host CPU 662 or the virtual machine 602; this may be how normal memory accesses are made through the MMU 704. The MMU 704 may also include a storage system interface 718 configured to receive/send memory accesses between the MMU 704 and the heterogeneous storage system 106. In the illustrated embodiment, the MMU 704 may be configured to transfer cold data, or data whose QoS guarantees have been relaxed, in a manner that does not use the CPU interface 716, that hides the transfer from the CPU 662 or the VM 602, or that proceeds without the aid of the CPU 662 or the VM 602. In such embodiments, the migration need not burden the CPU 662 or the VM 602. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In the illustrated embodiment, if the triggering event is that the data is not accessed (either in whole or in part) within a certain period of time, the data stored on the DRAM 116 may be transferred or moved to the slower storage medium 126 (as indicated by the bold arrow). In some embodiments, the new storage medium may be selected based on a set of relaxed QoS guarantees. In another embodiment, the new storage medium may not meet the QoS guarantee, but the failure to meet it may be considered acceptable. In such embodiments, by moving unused or cold data off the desirable DRAM storage 116, more space may be made available for storing hot or frequently used data. Thus, because the QoS guarantees can be satisfied for more frequently used data (as opposed to the situation where frequently used data must be stored in the less desirable PRAM 126 because the DRAM 116 has no space), the overall performance of system 700 may be improved. It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In various embodiments of the above example, if the data now stored on the PRAM 126 is not accessed for a second threshold period, a second triggering event may occur and the QoS guarantee for that page or data may be relaxed further. In such embodiments, the data may be transferred again to a third or subsequent storage medium (e.g., storage medium 136 or 146, etc.). It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In another example, a triggering event may cause the QoS guarantee to increase. For example, if cold data stored in the PRAM 126 begins to be accessed frequently, a new triggering event may occur and the now-hot data may be moved from the PRAM 126 to the DRAM 116 (as indicated by the bold arrow). It should be appreciated that the above is merely an illustrative example and that the disclosed subject matter is not so limited.

In various embodiments, examples of triggering events may include, for example: the time of day (e.g., QoS guarantees may be relaxed in the evening, etc.), the activity level of the VM 602 or application 698, the amount of free space in a storage medium or in system 700, the number of VMs 602 executed by system 700, the user of the application 698 (e.g., a particular user may pay for a higher QoS, etc.), and so forth. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

Fig. 8 is a flow chart of an exemplary embodiment of a technique in accordance with the disclosed subject matter. In various embodiments, the technique 800 may be used or produced, in whole or in part, by the systems shown in fig. 1, 2, 3a, 3b, 3c, 4, 6a, 6b, 7, or 9. It should be understood, however, that the foregoing are merely a few illustrative examples and that the disclosed subject matter is not so limited. It is to be appreciated that the disclosed subject matter is not limited by the order or number of acts illustrated by the technique 800.

Block 802 shows that in one embodiment, data access to a heterogeneous storage system may be received, as described above. In various embodiments, the data access may be received from a virtual machine executed by the processor, as described above. In such embodiments, the heterogeneous storage system may include multiple types of storage media, each based on a respective storage technology and associated with one or more performance characteristics, as described above.

Block 804 illustrates that, in one embodiment, a target storage medium of the heterogeneous storage system may be determined for the data access, as described above. In some embodiments, the determining may be performed by a memory management unit, as described above. In various embodiments, the determination may be based at least in part on at least one performance characteristic associated with the target storage medium and a quality of service flag associated with the virtual machine and indicating one or more performance characteristics guaranteed by the virtual machine, as described above.

In various embodiments, the quality of service flag may comprise at least two parts, as described above. In some embodiments, a first portion of the quality of service flag may indicate a performance characteristic guaranteed by the virtual machine, and a second portion of the quality of service flag may indicate a range of values for the performance characteristic guaranteed by the virtual machine, as described above.

In various embodiments, determining the target storage medium may include maintaining a count of the number of allocable storage spaces associated with each storage medium, as described above. In such embodiments, determining may include selecting the target storage medium based at least in part on the amount of allocable storage space associated with each respective storage medium and the quality of service flag, as described above.

In some embodiments, a virtual machine may be configured to execute multiple applications, as described above. In such embodiments, each application may be associated with a quality of service flag indicating one or more performance characteristics guaranteed by the virtual machine, as described above. In one such embodiment, determining the target storage medium may include determining which executing application is associated with the data access, as described above.

Block 806 shows that, in one embodiment, the data access may be routed between the processor and the target storage medium, as described above. In some embodiments, this may be performed by the memory management unit, as described above.

Block 808 illustrates that, in one embodiment, data associated with the virtual machine may be moved from the first storage medium to the second storage medium in response to a triggering event, as described above. In various embodiments, this may be implemented by a memory management unit, as described above. In one embodiment, the triggering event may include that data has not been accessed for a predetermined period of time, as described above. In another embodiment, the triggering event may include relaxing one or more performance characteristics guaranteed by the virtual machine, as described above.

Fig. 9 is a schematic block diagram of an information handling system 900 that may include a semiconductor device formed in accordance with the principles of the disclosed subject matter.

Referring to FIG. 9, information handling system 900 may include one or more devices constructed in accordance with the principles of the disclosed subject matter. In another embodiment, information handling system 900 may employ or perform one or more techniques in accordance with the principles of the disclosed subject matter.

In various embodiments, information handling system 900 may include computing devices, such as, for example, laptops, desktops, workstations, servers, blade servers, personal digital assistants, smart phones, tablets, and other suitable computers, or virtual machines or virtual computing devices thereof. In various embodiments, information handling system 900 may be used by a user (not shown).

The information processing system 900 according to the disclosed subject matter may further include a Central Processing Unit (CPU), logic, or processor 910. In some embodiments, processor 910 may include one or more Functional Unit Blocks (FUBs) or Combinational Logic Blocks (CLBs) 915. In such embodiments, a combinational logic block may comprise: various Boolean logic operations (e.g., NAND, NOR, NOT, XOR, etc.), stabilizing logic devices (e.g., flip-flops, latches, etc.), other logic devices, or combinations thereof. These combinational logic operations may be configured in a simple or complex manner to process input signals to achieve a desired result. It should be appreciated that while several illustrative examples of synchronous combinational logic operations are described, the disclosed subject matter is not so limited and may include asynchronous operations or a mixture thereof. In one embodiment, the combinational logic operations may comprise a plurality of Complementary Metal Oxide Semiconductor (CMOS) transistors. In various embodiments, these CMOS transistors may be arranged into gates that perform the logical operations, although it should be understood that other technologies may be employed and are within the scope of the disclosed subject matter.

Information handling system 900 according to the disclosed subject matter may also include volatile memory 920 (e.g., Random Access Memory (RAM), etc.). The information processing system 900 according to the disclosed subject matter may also include a non-volatile memory 930 (e.g., a hard disk, optical memory, NAND or flash memory, etc.). In some embodiments, volatile memory 920, non-volatile memory 930, or combinations or portions thereof, may be referred to as "storage media". In various embodiments, volatile memory 920 and/or non-volatile memory 930 can be configured to store data in a semi-permanent or substantially permanent manner.

In various embodiments, the information handling system 900 may include one or more network interfaces 940, the network interfaces 940 being configured such that the information handling system 900 is part of and communicates over a communication network. Examples of Wi-Fi protocols can include, but are not limited to: Institute of Electrical and Electronics Engineers (IEEE) 802.11g, IEEE 802.11n, etc. Examples of cellular protocols may include, but are not limited to: IEEE 802.16m (also known as Wireless-MAN (Metropolitan Area Network) Advanced), Long Term Evolution (LTE) Advanced, Enhanced Data rates for GSM Evolution (EDGE), Evolved High-Speed Packet Access (HSPA+), and the like. Examples of wired protocols may include, but are not limited to: IEEE 802.3 (also known as Ethernet), Fibre Channel, power line communication (e.g., HomePlug, IEEE 1901, etc.), etc. It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

The information processing system 900 according to the disclosed subject matter can also include a user interface unit 950 (e.g., a display adapter, a haptic interface, a human interface device, etc.). In various embodiments, the user interface unit 950 may be configured to receive input from a user and to otherwise provide output to the user. Other types of devices may also be utilized to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; input from the user can be received in any manner, including: a sound input, a voice input, or a tactile input.

In various embodiments, the information handling system 900 may include one or more other devices or hardware components 960 (e.g., a display or monitor, a keyboard, a mouse, a camera, a fingerprint reader, a video processor, etc.). It should be appreciated that the above are merely a few illustrative examples and that the disclosed subject matter is not so limited.

Information handling system 900 according to the disclosed subject matter may also include one or more system buses 905. In such embodiments, the system bus 905 may be configured to communicatively couple the processor 910, the volatile memory 920, the non-volatile memory 930, the network interface 940, the user interface unit 950, and the one or more hardware components 960. Data processed by the processor 910, or data input from outside of the non-volatile memory 930, may be stored in the non-volatile memory 930 or the volatile memory 920.

In various embodiments, information handling system 900 may include or execute one or more software components 970. In some embodiments, the software components 970 may include an Operating System (OS) and/or applications. In some embodiments, the OS may be configured to provide one or more services to an application and to manage or otherwise mediate between the application and the various hardware components of information handling system 900 (e.g., processor 910, network interface 940, etc.). In such embodiments, information handling system 900 may include one or more native applications, which may be installed locally (e.g., in non-volatile memory 930, etc.) and configured to be executed directly by processor 910 and to interact directly with the OS. In such embodiments, the native applications may comprise pre-compiled machine-executable code. In some embodiments, the native applications may include a script interpreter (e.g., C shell (csh), AppleScript, AutoHotkey, etc.) or a virtual execution machine (VM) (e.g., the Java Virtual Machine, the Microsoft Common Language Runtime, etc.) configured to translate source or object code into executable code that is then executed by processor 910.

The semiconductor devices described above may be encapsulated using various packaging techniques. For example, a semiconductor device constructed in accordance with the principles of the disclosed subject matter may be sealed using any of the following: Package On Package (PoP) technology, Ball Grid Array (BGA) technology, Chip Scale Package (CSP) technology, Plastic Leaded Chip Carrier (PLCC) technology, Plastic Dual In-line Package (PDIP) technology, die in waffle pack technology, Chip On Board (COB) technology, Ceramic Dual In-line Package (CERDIP) technology, Plastic Metric Quad Flat Package (PMQFP) technology, Plastic Quad Flat Package (PQFP) technology, Small Outline Integrated Circuit (SOIC) technology, Shrink Small Outline Package (SSOP) technology, Thin Small Outline Package (TSOP) technology, Thin Quad Flat Package (TQFP) technology, System In Package (SIP) technology, Multi-Chip Package (MCP) technology, Wafer-level Fabricated Package (WFP) technology, Wafer-level processed Stack Package (WSP) technology, or other technologies known to those skilled in the art.

One or more programmable processors executing a computer program may perform method steps to perform functions by operating on input data and generating output. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

In various embodiments, a computer-readable medium may include instructions that, when executed, cause an apparatus to perform at least a portion of the method steps. In some embodiments, the computer readable medium may be embodied in magnetic media, optical media, other media, or a combination thereof (e.g., CD-ROM, hard drive, read-only memory, flash drive, etc.). In such embodiments, the computer-readable medium may be a tangible and non-transitory article of manufacture.

While the principles of the disclosed subject matter have been described with reference to exemplary embodiments, it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the disclosed principles. Accordingly, it should be understood that the above-described embodiments are not limiting, but merely illustrative. Thus, the scope of the disclosed principles is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing description. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the embodiments.
