Memory management system, memory management method, and information processing apparatus

Document No.: 958479 | Publication date: 2020-10-30

Note: This technology, "Memory management system, memory management method, and information processing apparatus," was created by Mamun Kazi on 2018-11-21. Abstract: A memory management system is provided which efficiently protects data in a cache memory based on a virtual address caching scheme. The memory management system includes: a cache memory temporarily storing data to which the processor core requests a memory access; a state storage unit storing a security state transmitted from the processor core simultaneously with the memory access request; and a memory management unit that manages access to the main memory. If the security state has changed when the processor core requests a memory access, a cache flush is performed on the hit cache line.

1. A memory management system, comprising:

a cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transmitted concurrently with the memory access request from the processor core; and

a memory management unit that manages access to a main memory.

2. The memory management system of claim 1,

the state storage unit stores a security state in units of cache lines of the cache memory.

3. The memory management system of claim 1,

the state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register installed outside a cache line body, and the state storage unit stores a security state for each line of the cache memory.

4. The memory management system of claim 1,

the memory management unit stores, in each page table entry held in the translation lookaside buffer, permission information indicating whether access is allowed for each security state, and

the memory management unit determines whether to allow access for the security state transmitted concurrently with the memory access request from the processor core, based on the permission information stored in the entry hit by the request.

5. The memory management system of claim 1,

in response to a request for the memory access from the processor core, the state storage unit writes data read out from the main memory after protection checking by the memory management unit to the cache memory, and stores the security state in association with a corresponding cache line.

6. The memory management system of claim 1,

in the event that the security state at the time the processor core requests a memory access does not match the security state stored in the state storage unit, a cache flush is performed on the cache line that hits the request.

7. The memory management system of claim 1,

in the case where the security state at the time the processor core requests a memory access does not match the security state stored in the state storage unit, a protection check is performed by the memory management unit, and in the case where the memory access request is allowed, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

8. The memory management system of claim 1,

in the case where the security state at the time the processor core requests a memory access does not match the security state stored in the state storage unit but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

9. The memory management system of claim 1,

in the event that the security state at the time the processor core requests a memory access is higher in authority than the security state stored in the state storage unit, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

10. The memory management system of claim 1,

the cache memory adopts a virtual address cache method.

11. A memory management method, comprising:

a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;

a state storing step of storing a security state transmitted simultaneously with the memory access request from the processor core; and

a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time when the processor core requests a memory access with a security state stored in a state storage unit.

12. An information processing apparatus comprising:

a processor core;

a main memory;

a cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transferred concurrently with a request for a memory access from the processor core; and

a memory management unit that manages access to the main memory.

Technical Field

The technology disclosed in this specification relates to a memory management system, a memory management method, and an information processing apparatus that employ a virtual address caching method.

Background

A Memory Management Unit (MMU) is disposed between a processor and a physical memory in a typical memory system. The MMU performs address translation between virtual and physical addresses over the entire virtual address space, thereby providing a separate virtual address space for each process and also providing virtual memory equal to or greater than the real memory capacity.

In addition, in order to overcome the gap between processor and memory speeds, the memory is organized hierarchically. Specifically, a high-speed, small-capacity memory built into the same chip as the processor serves as the primary cache. An expensive, high-speed static random access memory (SRAM) is then provided as a secondary cache in the vicinity of the processor. Finally, a main memory is provided, consisting of relatively low-speed, inexpensive dynamic RAM (DRAM).

Here, two methods by which the processor refers to the cache memory can be cited: a physical address cache method, in which the search is performed using the translated physical address, and a virtual address cache method, in which the search is performed using the virtual address. In a memory system employing the physical address cache method, the MMU is provided between the processor and the cache memory, and address translation is performed each time the processor accesses the cache. In a memory system adopting the virtual address cache method, on the other hand, the MMU is disposed between the cache memory and the main memory, and the processor refers to the cache memory by using the virtual address. Only in the case of a cache miss does the MMU perform address translation and access the main memory.
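The two lookup orders can be sketched as follows. This is a minimal model in Python with hypothetical names; a direct-mapped cache and single-level TLB are assumed, and the page and line sizes are illustrative, not taken from the specification.

```python
PAGE = 16 * 1024   # illustrative page size (unit of address translation)
LINE = 16          # illustrative cache line size in bytes

def translate(va, tlb):
    """Translate a virtual address using a TLB modeled as {vpn: ppn}."""
    vpn, offset = divmod(va, PAGE)
    return tlb[vpn] * PAGE + offset

def physical_cache_access(va, tlb, cache, main_memory):
    """Physical address cache: the MMU translates on EVERY access."""
    pa = translate(va, tlb)                  # address translation comes first
    line_no = pa // LINE
    if line_no not in cache:                 # miss: fill from main memory
        cache[line_no] = main_memory[line_no]
    return cache[line_no]

def virtual_cache_access(va, tlb, cache, main_memory):
    """Virtual address cache: look up by VA; the MMU runs only on a miss."""
    line_no = va // LINE                     # cache indexed by virtual address
    if line_no in cache:
        return cache[line_no]                # hit: no address translation at all
    pa = translate(va, tlb)                  # miss: translate and fetch
    cache[line_no] = main_memory[pa // LINE]
    return cache[line_no]
```

Note that in `virtual_cache_access` the TLB is not consulted on a hit, which is exactly the property that saves power but also prevents the MMU from performing a permission check on the hit path.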

The physical address caching method is mainly used in a memory system having a cache memory. However, the physical address cache method has a problem in that the address translation process is performed in the MMU each time the processor accesses the cache memory, resulting in a reduction in power efficiency and circuit speed.

On the other hand, in the virtual address caching method, address translation in the MMU is performed only in the event of a cache miss. Power consumption is therefore reduced, and the virtual address caching approach is considered promising for ultra-low-power Internet of Things (IoT) devices that require many hours of battery operation and for wearable devices that require low power consumption.

Disclosure of Invention

Problems to be solved by the invention

An object of the technology disclosed in the present specification is to provide a memory management system, a memory management method, and an information processing apparatus that effectively protect data in a cache memory employing a virtual address caching method.

Solution to the problem

A first aspect of the technology disclosed in this specification is a memory management system comprising:

a cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transmitted concurrently with a request for the memory access from the processor core; and

a memory management unit that manages access to the main memory. The cache memory adopts a virtual address caching method.

Note that the term "system" used herein refers to a logical set of a plurality of devices (or functional modules that realize specific functions), and it does not matter whether each device or functional module is in a single housing.

The state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register installed outside a cache line body, and the state storage unit stores a security state for each line of the cache memory.

The memory management system according to the first aspect is configured such that in the event that the security state of a processor core requesting a memory access does not match the security state stored in the state storage unit, a cache flush is performed on the cache line that hit the request.

Alternatively, the memory management system according to the first aspect is configured such that, in the event that the security state of the processor core requesting the memory access does not match the security state stored in the state storage unit, a protection check is performed by the memory management unit and, in the event that the memory access request is granted, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

Alternatively, the memory management system according to the first aspect is configured such that, in the event that the security state of the processor core requesting the memory access does not match the security state stored in the state storage unit but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

Further, a second aspect of the technology disclosed in the present specification is a memory management method comprising:

a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;

a state storing step of storing a security state transmitted simultaneously with the memory access request from the processor core; and

a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time the processor core requests the memory access with the security state stored in the state storage unit.

Further, a third aspect of the technology disclosed in the present specification is an information processing apparatus comprising:

a processor core;

a main memory;

a cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transmitted simultaneously with a memory access request from the processor core; and

a memory management unit that manages access to the main memory.

Effects of the invention

According to the technology disclosed in the present specification, a memory management system, a memory management method, and an information processing apparatus can be provided that protect data in a cache memory employing a virtual address caching method with only a small amount of information stored in the cache memory.

Note that the effects described in this specification are merely examples, and the effects of the present invention are not limited thereto. Further, the present invention can exhibit additional effects in addition to the above-described effects.

Other objects, features, and advantages of the technology disclosed in the present specification will be apparent from the embodiments described later and the more detailed description based on the accompanying drawings.

Drawings

Fig. 1 is a diagram schematically showing a configuration example of a system including an embedded device.

Fig. 2 is a diagram showing an example of a hardware configuration of the sensing apparatus 100.

Fig. 3 is a diagram schematically showing a configuration example of the memory management system 1 employing the virtual address cache method.

Fig. 4 is a diagram showing how the security state communicated simultaneously with the memory access request from the processor core 10 is stored in the cache memory 20.

Fig. 5 is a flowchart (first half) showing a procedure for controlling memory access in the memory management system 1.

Fig. 6 is a flowchart (latter half) showing a procedure for controlling memory access in the memory management system 1.

Fig. 7 is a diagram showing an implementation example of the cache memory 20, the cache memory 20 being configured to store the security state of each cache line in tag bits.

Fig. 8 is a flowchart (first half) showing a modification of a procedure for controlling memory access in the memory management system 1.

Fig. 9 is a flowchart (latter half) showing a modification of a procedure for controlling memory access in the memory management system 1.

Fig. 10 is a flowchart (first half) showing another modification of a procedure for controlling memory access in the memory management system 1.

Fig. 11 is a flowchart (latter half) showing another modification of the procedure for controlling memory access in the memory management system 1.

FIG. 12 is a diagram showing how permission information is copied from the MMU 30 to the cache 20.

Detailed Description

Hereinafter, embodiments of the technology disclosed in the present specification will be described in detail with reference to the accompanying drawings.

There are cases where permission information is set for each piece of data to be processed by the processor. The permission information includes, for example, a security state or protection attributes, such as which users are allowed access or which operations (read, write, execute, etc.) are allowed. In this case, a permission check must be performed to protect the memory each time the processor accesses it.

For example, the MMU includes a Translation Lookaside Buffer (TLB) that stores information for translating virtual addresses into physical addresses in units of pages, and it may hold permission information about the corresponding page as a page attribute of each entry in the TLB. Thus, in the physical address caching method, the permission check can easily be performed each time the processor core accesses the cache memory.

On the other hand, in the virtual address cache method, the MMU is provided at the subsequent stage of the cache memory (as described above). For this reason, the MMU cannot perform permission checking when the processor core accesses the cache memory. Therefore, another approach (i.e., an MMU independent approach) is needed to protect the memory.

For example, a processing system has been proposed which stores a page attribute of line data for each cache line when data is cached in a cache memory via an MMU as a result of a cache miss (for example, see patent document 1). According to the processing system, permission check can be performed at the time of cache hit based on information in the cache memory, so that memory protection can be easily achieved in the virtual address cache method.

However, in this processing system, capacity for holding a copy of the permission information for each line must be provided separately in the cache memory. In general, the line size of a cache is smaller than the page size, which is the unit of address translation, so the copied permission information is redundant. For example, assume that a cache line is 16 bytes, the page size is 16 kilobytes, and the total capacity of the cache memory is 16 kilobytes. With such a memory configuration, even if all the data in the cache memory corresponds to the same page, 1024 copies of the same permission information must be held in the cache memory, resulting in redundancy.
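The redundancy in the figures just given can be checked directly; the variable names below are illustrative:

```python
# Figures from the example above: 16-byte lines, 16 KB pages, 16 KB cache.
line_size = 16
page_size = 16 * 1024
cache_capacity = 16 * 1024

num_lines = cache_capacity // line_size   # number of cache lines
lines_per_page = page_size // line_size   # lines that map to a single page

# Worst case: the whole cache holds data from one page, yet each line
# still carries its own identical copy of that page's permission info.
copies_held = num_lines
```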

Furthermore, such a processing system requires a sideband signal and control circuitry for sending the permission information from the MMU to the cache memory. Typically, there is only unidirectional communication from the cache memory to the MMU, so providing such a sideband signal and control circuit increases the circuit cost of communicating information from the MMU to the cache memory.

In addition, a memory management system has been proposed in which, when an access right is changed, the contents of the cache memory are flushed, so that the next access causes a cache miss and the data must be obtained from the physical memory (for example, see patent document 2). With this memory management system, after the access right is changed, a permission check can be appropriately performed via the MMU so that data is obtained from the main memory. However, in such a memory management system, all the data in the cache memory must be flushed at once by an external signal in order to synchronize the data in the cache memory with the data in the main memory.

Therefore, proposed in the present specification below is a memory management system capable of realizing protection of data in a cache memory that employs a virtual address caching method with a small amount of information stored in the cache memory.

Fig. 1 schematically shows a configuration example of a system including an embedded device to which the memory management system disclosed in this specification can be applied. The system shown includes a sensing device 100, a base station 200, and a server 202. The sensing device 100 corresponds to an embedded device. Server 202 is installed on cloud 201. The sensing device 100 may be wirelessly connected to the base station 200 and access the server 202 via the cloud 201.

The sensing device 100 includes a Central Processing Unit (CPU) 101, an MMU 102, memories such as a static random access memory (SRAM) 103 and a flash memory 104, a sensor 105, and a communication module 106. The sensing device 100 is driven by a battery 107. Note that a cache memory (L1 cache or L2 cache) adopting the virtual address cache method is provided between the CPU 101 and the MMU 102 but is omitted from fig. 1 to simplify the drawing. The battery 107 may be a rechargeable lithium-ion battery or a non-rechargeable battery.

For example, the sensing device 100 is used by being worn by a wearer. The CPU 101 analyzes the behavior (stop, walk, run, etc.) of the wearer based on the detection signal of the sensor 105. Then, the analysis result is wirelessly transmitted from the communication module 106 to the base station 200, and recorded on the server 202 via the cloud 201. The server 202 uses the data received from the sensing device 100 to view the wearer, etc.

Fig. 2 shows a hardware configuration example of the sensing apparatus 100 as an example of an embedded apparatus.

The CPU 101 is connected to the system bus 110 via the MMU 102. In addition, devices such as the SRAM 103, the flash memory 104, the sensor 105, and the communication module 106 are connected to the system bus 110.

The flash memory 104 stores an application program for estimating the behavior of the wearer based on, for example, signals from the sensor 105, libraries to be used when executing the application program, and data such as a behavior estimation dictionary for estimating the behavior of the wearer. In addition, the sensor 105 includes one or more sensor devices, such as an acceleration sensor, a barometric pressure sensor, a gyroscope, a global positioning system (GPS) receiver, a time-of-flight (TOF) image distance sensor, and a light detection and ranging (LIDAR) sensor.

These devices connected to the system bus 110 are placed in the physical address space on which address translation is performed by the MMU 102. The SRAM 103 is disposed in the physical address space. Further, the flash memory 104 is disposed in the physical address space such that the contents of the flash memory 104 are directly visible from the CPU 101 or MMU 102. Further, the communication module 106 and I/O ports of various sensor devices included in the sensor 105 are disposed in a physical address space.

The sensing device 100 must operate for many hours on the battery 107, so its power consumption must be reduced. To improve power efficiency, a virtual address caching method is therefore applied, in which address translation is performed in the MMU only in the event of a cache miss.

Fig. 3 schematically shows a configuration example of the memory management system 1 employing the virtual address caching method, as applied to the sensing device 100. The memory management system 1 shown includes a processor core 10, a cache memory 20, a memory management unit (MMU) 30, and a main memory 40 serving as a physical memory. First, a memory access operation in the memory management system 1 will be briefly described.

The cache memory 20 employs a virtual address caching method, such that the processor core 10 accesses the cache memory 20 by using a virtual address. Note that the cache memory 20 may be an L1 cache or an L2 cache.

In the event of a cache hit occurring for a virtual address requested by the processor core 10, the cache memory 20 performs a read or write operation on the corresponding cache line. Further, in the event of a cache miss occurring for a virtual address requested by the processor core 10, the cache memory 20 issues a memory request to the MMU 30 using the virtual address. Note that details of the flow of memory access control in relation to the cache memory 20 and the MMU 30 will be described later.

The MMU 30 includes a Translation Lookaside Buffer (TLB) 31 and a page walk mechanism 32. The TLB 31 holds information for translating virtual addresses into physical addresses in units of pages. The page walk mechanism 32 has the function of referring to the page table 41 located in the main memory 40. The page table 41 describes the correspondence between virtual addresses and physical addresses in units of pages.

In the case where an entry corresponding to the virtual address requested by the processor core 10 is found in the TLB 31 after a cache miss (i.e., in the case of a TLB hit), the MMU 30 translates the virtual address into a physical address using the information in that entry and accesses the main memory 40 by using the translated physical address.

On the other hand, in the event that no entry corresponding to the virtual address requested by the processor core 10 is found in the TLB 31 (i.e., in the event of a TLB miss), the page walk mechanism 32 searches the page table 41 for the physical address of the page corresponding to the requested virtual address and creates a new entry in the TLB 31 mapping the requested virtual address to that physical address. Thereafter, the MMU 30 can perform the address translation process again to translate the requested virtual address into a physical address.
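The TLB-hit and TLB-miss paths described above can be sketched as follows. This is a simplified single-level model with illustrative names; a real page walk may traverse multiple table levels.

```python
PAGE = 16 * 1024   # illustrative page size

def translate_with_page_walk(va, tlb, page_table):
    """Return the physical address for va, walking the page table on a TLB miss.

    tlb and page_table are both modeled as {virtual page no.: physical page no.}.
    """
    vpn, offset = divmod(va, PAGE)
    if vpn not in tlb:
        # TLB miss: the page walk mechanism searches the page table and
        # installs a new entry mapping the virtual page to its physical page.
        tlb[vpn] = page_table[vpn]
    # TLB hit (possibly right after the walk): translate using the entry.
    return tlb[vpn] * PAGE + offset
```

After the miss has installed the entry, subsequent translations of addresses in the same page hit the TLB and never touch the page table.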

Next, data protection in the memory management system 1 adopting the virtual address cache method will be considered.

In a cache memory adopting the physical address cache method, permission information about the corresponding page is held as a page attribute of each entry in the TLB. This enables the MMU to perform a permission check when the processor core accesses the cache memory, so that address translation is performed only if the access is allowed. As a result, data is protected at address translation time. In the virtual address caching method, on the other hand, the processor core can directly access the cache memory without involving the MMU, so the MMU cannot perform a permission check when the processor core accesses the cache memory. The memory must therefore be protected by an MMU-independent method, as described above.

For example, the following method may be cited. When a cache miss occurs and data is cached in the cache memory 20 via the MMU 30, the permission information in the corresponding TLB entry, stored in the TLB 31 as a page attribute, is copied to each line in the cache memory 20, as shown in fig. 12.

In fig. 12, when the processor core 10 requests a memory access, the security state of the process is transferred to the cache memory 20 at the same time as the requested virtual address. In the example shown, eight combinations of several security-related parameters are represented as 3-bit information. Examples of the parameters include the type of the process (whether it is a "developer" or "proprietary" process), the mode of the process (whether it is executed in "user" or "privileged" mode), and the requested operation ("read", "write", etc.).

On the other hand, the MMU 30 maintains permission information about the corresponding page in each entry of the TLB 31. Specifically, the permission information indicates the memory access right (i.e., whether to allow access or protect the memory) for each of the above-described eight security states with 1 bit each (i.e., 8 bits in total), using eight flags A to H. Each entry in the TLB 31 holds permission information corresponding to the security states of the corresponding physical page, in the form of the eight flags A to H indicating whether memory access is permitted, together with the information (T) for translating the corresponding virtual address into a physical address. For example, flag A indicates with 1 or 0 whether memory access to the corresponding page is allowed in the security state (developer, user, read). Similarly, flag B indicates whether memory access to the corresponding page is allowed in the security state (developer, user, write), and flag C indicates whether memory access is allowed in the security state (developer, privileged, read). In the example shown in fig. 12, the 8-bit permission information (flags A to H) about the data in the corresponding page is copied to each cache line of the cache memory 20.

When the processor core 10 executing a process requests a memory access, the cache memory 20 can determine whether to allow the memory access or protect the memory by referring, within the 8-bit permission information copied from the MMU 30, to the flag corresponding to the security state transmitted from the processor core 10 at the same time as the memory access request. For example, when a process executing in security state "A" requests a memory access, the flag "A" in the permission information stored in the TLB entry corresponding to the requested virtual address may be referenced to determine whether to allow the access.
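The flag lookup described above can be sketched as follows. The packing of the three parameters into the 3-bit state and the bit order of flags A to H are assumptions made for illustration; the specification only fixes that there are eight states and one flag per state.

```python
def security_state(proc_type, mode, op):
    """Pack the three parameters into the 3-bit security state (0..7).

    proc_type: 0 = developer, 1 = proprietary
    mode:      0 = user,      1 = privileged
    op:        0 = read,      1 = write
    """
    return (proc_type << 2) | (mode << 1) | op

def access_allowed(permission_bits, state):
    """Check the flag for `state` in the 8-bit permission info.

    permission_bits holds flags A..H, with flag A assumed to be bit 0.
    """
    return bool((permission_bits >> state) & 1)
```

With this encoding, state 0 corresponds to flag A (developer, user, read) and state 2 to flag C (developer, privileged, read), matching the flag examples in the text.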

In the example shown in fig. 12, each entry in the TLB 31 of the MMU 30 holds, as page attributes, the 8-bit permission information (flags A to H) about the data in the corresponding page and the information (T) for translating a virtual address into a physical address in units of pages. For example, assume that a cache line is 16 bytes, the page size is 16 kilobytes, and the total capacity of the cache memory 20 is 16 kilobytes. With this memory configuration, even if all the data in the cache memory 20 corresponds to the same page, the cache memory 20 needs a memory capacity of 1024 × 8 bits to copy the same permission information, resulting in redundancy.

In short, as shown in fig. 12, the method of protecting the memory by copying the permission information in the MMU 30 to the cache memory 20 has the problem that capacity for holding a copy of the permission information for each line must be provided separately in the cache memory 20. Further, the line size of the cache memory 20 is generally smaller than the page size, which is the unit of address translation, so the copied information is redundant. Furthermore, a (reverse) sideband signal and control circuitry for sending the permission information from the MMU 30 to the cache memory 20 must be provided.

Therefore, instead of copying permission information defined in units of pages to the cache memory 20 (see fig. 12), the present embodiment achieves data protection in a cache memory employing the virtual address cache method by storing, in units of cache lines in the cache memory 20, the security state transferred simultaneously with the memory access request from the processor core 10.

Fig. 4 shows how the security state transferred simultaneously with the memory access request from the processor core 10 is stored in the cache memory 20 in units of cache lines.

When the processor core 10 requests a memory access, the security state of the process is sent to the cache memory 20 at the same time as the requested virtual address. In the example shown, eight combinations of several security-related parameters are represented as 3-bit information. Examples of the parameters include the type of the process (whether it is a "developer" or "proprietary" process), the mode of the process (whether it is executed in "user" or "privileged" mode), and the requested operation ("read", "write", etc.). The cache memory 20 stores the 3 bits of security state information in association with the cache line for which access has been requested. Note that, in addition to the address bus for issuing memory access requests, the processor core 10 and the cache memory 20 are connected by a 3-bit sideband signal for transferring the security state.

On the other hand, the MMU 30 maintains permission information about the corresponding page in each entry of the TLB 31. In the example shown in fig. 4, the permission information indicates the memory access right (i.e., whether to allow access or protect the memory) for each of the above-described eight security states with 1 bit each (i.e., 8 bits in total), using the eight flags A to H. Each entry in the TLB 31 holds permission information corresponding to the security states of the corresponding physical page, in the form of the eight flags A to H indicating whether access is permitted, together with the information (T) for translating the corresponding virtual address into a physical address.

For example, flag A indicates with 1 or 0 whether memory access to the corresponding page is allowed in the security state (developer, user, read). Similarly, flag B indicates whether memory access to the corresponding page is allowed in the security state (developer, user, write), and flag C indicates whether memory access is allowed in the security state (developer, privileged, read).

When the processor core 10 executing a process requests a memory access, the processor core 10 first accesses the cache memory 20. In the case where the data referenced by the requested virtual address is cached in a cache line (a cache hit), the security state stored in association with the hit cache line is compared with the security state of the process transferred from the processor core 10 at the same time as the memory access request. The processor core 10 can thus directly access the cached data in the cache memory 20 without involving the MMU 30. The permission reference function of the TLB 31 is used via the MMU 30 only in the event of a cache miss.
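The hit-path check can be sketched as follows. The structure is hypothetical; the mismatch policy shown is the simple cache-flush variant (claim 6), and the MMU fallback is abstracted away as returning None.

```python
LINE = 16  # illustrative cache line size

def cached_access(va, state, cache):
    """cache maps line number -> (stored 3-bit security state, data).

    Returns the data on a hit with a matching security state; returns None
    when the MMU must be involved (a miss, or a line flushed on a mismatch).
    """
    line_no = va // LINE
    if line_no not in cache:
        return None                    # cache miss: go through the MMU
    stored_state, data = cache[line_no]
    if stored_state != state:
        del cache[line_no]             # state changed: flush the hit line
        return None                    # next access refetches via the MMU check
    return data                        # hit with matching state: no MMU needed
```

The other variants in the claims differ only in this mismatch branch: instead of flushing, they may ask the MMU for a protection check or apply a rule on the difference between the two states, updating the stored state on success.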

As described above, the permission information for each page is represented by 8 bits, whereas the security state is represented by only 3 bits. It can thus be appreciated that saving the security state of each cache line, rather than its permission information, significantly reduces the required memory capacity. In addition, the security state is sent to the cache memory 20 together with ordinary memory access requests from the processor core 10, so no additional sideband signal or control circuit is needed for saving the security state in the cache memory 20.
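The saving claimed above can be illustrated with a simple calculation, using the 64-line cache of the example in Fig. 7; the line count is only illustrative.

```python
# Illustrative arithmetic for storing a 3-bit security state per cache
# line instead of the 8-bit permission information (flags A..H).
lines = 64                      # cache lines, as in the Fig. 7 example
bits_state = 3 * lines          # 3-bit security state per line
bits_perm = 8 * lines           # 8-bit permission flags per line
print(bits_perm - bits_state)   # 320 bits saved for a 64-line cache
```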

In the above example, the permission information for each security state is represented by 1 bit: for example, "1" indicates that the security state is allowed, and "0" indicates that it is denied. As a modification, the permission information for each security state may be represented by 2 bits or more. By allocating more bits, detailed system operation can be defined according to the level of unauthorized access. For example, by using 2 bits for the permission information on the security state "A", the detailed system operation shown in table 1 below may be defined.

[ Table 1]

Note that methods for storing the security state of each cache line in the cache memory 20 include, for example: extending the tag region associated with each cache line; installing a register or memory separate from the tag; and installing, outside the cache memory 20, a register or memory that holds the security state of each cache line.

Fig. 5 and 6 each show, in the form of a flowchart, a procedure for controlling memory access in the memory management system 1 employing the virtual address caching method. Note that the cache memory 20 is configured such that the security state is stored in the cache memory 20 in units of cache lines. Furthermore, in the flowcharts shown in fig. 5 and 6, the processing steps to be performed by the MMU 30 are shown in gray, while the other processing steps are performed in the cache memory 20.

The process begins in response to a memory access request issued by the processor core 10.

First, the cache memory 20 is searched to check whether a cache line corresponding to the virtual address requested by the processor core 10 exists (i.e., whether a cache hit occurs) (step S501).

In the case where a cache hit occurs (yes in step S501), it is further checked whether the security state transmitted simultaneously with the memory access request is the same as the security state stored in the cache line that hit the request (step S502).

Then, if the security state is not changed (yes in step S502), a read process or a write process is performed on the cache line in accordance with the memory access request (step S503), and the process ends.

Thus, as long as the security state of the processing of the processor core 10 has not changed, access to the data stored in the cache memory 20 can continue without permission checking by the MMU 30.

On the other hand, when there is a change in the security state (no in step S502), the processing proceeds as follows. When the cache line on which the cache hit occurred is "dirty" (i.e., when the data of the cache line has been updated) (yes in step S504), the data is written back to the main memory 40 under the security state stored in the cache line (step S505). In other words, when the security state under which the processor core 10 requests a memory access has changed, the hit cache line is flushed regardless of whether its data has been updated; the write-back itself is performed only if the line is dirty.

Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S501), it is subsequently checked whether the cache memory 20 is full and needs replacement (step S506). In the case where replacement is required (yes in step S506), the data to be discarded (i.e., the victim cache line) is determined according to a predetermined replacement algorithm. Then, when the victim line is "dirty" (i.e., when the data has been updated) (yes in step S507), the data of the victim line is written back to the main memory 40 in the secure state stored by the victim line (step S508).

Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S501), or in the case where the security state of the processing of the processor core 10 has changed (no in step S502), the MMU 30 converts the virtual address into a physical address, and in addition, checks whether or not the memory access request from the processor core 10 is permitted with reference to the permission information on the corresponding entry in the TLB 31 (step S509).

The technique disclosed in this specification is similar to conventional techniques with respect to the mechanism by which the MMU 30 references the TLB 31 to perform address translation, and the mechanism by which, when a TLB miss occurs, the page walk mechanism 32 searches the page table 41 in the main memory 40 for the physical address of the page corresponding to the requested virtual address and creates a new TLB entry. A detailed description of these mechanisms is therefore omitted here.

Further, in processing step S509, the MMU 30 can determine whether to permit the memory access or protect the memory by referring to the flag, among the 8-bit permission information stored in the TLB entry corresponding to the requested virtual address, that corresponds to the security state transmitted from the processor core 10 together with the memory access request (as described above).

Here, in the case where the MMU 30 allows the memory access request from the processor core 10 (yes in step S509), the MMU 30 reads data from the corresponding physical address in the main memory 40 (step S510). Then, the read data is written to a free line or victim line in the cache memory 20 (step S511). Further, the tag information of the cache line is updated, and further, the security state of the data written in the cache line is stored in the tag (step S512), and the process ends.

Further, in the case where the MMU 30 does not allow the memory access request from the processor core 10 (no in step S509), the MMU 30 returns a protection error to the processor core 10 that is the access request source (step S513), and the process ends.
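The flow of steps S501 to S513 can be condensed into the following runnable Python sketch. The classes, field names, and the flat address model are illustrative assumptions; address translation and victim-line replacement (steps S506 to S508) are elided for brevity.

```python
# Condensed, hypothetical model of the flow of Figs. 5 and 6.

class CacheLine:
    def __init__(self, tag, data, state):
        self.tag, self.data, self.state = tag, data, state
        self.dirty = False

class Sketch:
    def __init__(self, page_perms, memory):
        self.lines = {}               # vaddr -> CacheLine
        self.page_perms = page_perms  # vaddr -> 8-bit permission flags (TLB stand-in)
        self.memory = memory          # vaddr -> data (translation elided)
        self.write_backs = 0

    def access(self, vaddr, state, write=False, new_data=None):
        line = self.lines.get(vaddr)                   # S501: cache hit?
        if line and line.state == state:               # S502: same security state?
            if write:                                  # S503: read/write on the line
                line.data, line.dirty = new_data, True
            return line.data
        if line and line.dirty:                        # S504/S505: state changed,
            self.memory[vaddr] = line.data             # write dirty line back
            self.write_backs += 1
        # (victim-line replacement, S506-S508, elided)
        if not (self.page_perms[vaddr] >> state) & 1:  # S509: permission check
            raise PermissionError("protection error")  # S513
        data = self.memory[vaddr]                      # S510: read main memory
        line = CacheLine(vaddr, data, state)           # S511/S512: fill line,
        self.lines[vaddr] = line                       # store security state
        if write:
            line.data, line.dirty = new_data, True
        return line.data

c = Sketch({0x100: 0b11111111}, {0x100: 42})
assert c.access(0x100, 0b010) == 42        # miss, allowed, fill
c.access(0x100, 0b010, write=True, new_data=99)
assert c.access(0x100, 0b011) == 99        # state changed: flush, then refill
assert c.write_backs == 1
```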

According to the memory access process shown in fig. 5 and 6, data in the cache memory 20 employing the virtual address cache method can be protected by storing a small amount of information (i.e., storing a security state) in the cache memory 20.

Further, according to the memory access procedures shown in fig. 5 and 6, as long as the security state of the process executed in the processor core 10 does not change, the data stored in the cache memory 20 upon a cache miss can continue to be used without a permission check by the MMU 30. As a result, power efficiency and circuit speed are improved in the memory management system 1. When the security state of the process executed in the processor core 10 does change, the cache memory 20 detects the change and flushes the corresponding data; the MMU 30 then performs the same processing as on a cache miss, including the permission check.

A method of expanding the tag area associated with each cache line may be cited as a method for storing the security state of each cache line in the cache memory 20.

Fig. 7 shows an implementation example of the cache memory 20, the cache memory 20 being configured to store the security state of each cache line in the tag bits. Note that although a one-way (direct-mapped) configuration is shown for simplicity of the drawing, a two-way or other multi-way set-associative configuration may be employed similarly.

The cache memory 20 is shown to include a data array 71 and a tag array 72. Data array 71 includes a set of cache lines. The tag array 72 includes tag memories corresponding to respective cache lines.

In the example shown, the data array 71 includes a data RAM in which 64 lines, numbered 0 to 63, constitute a single memory bank. Further, four words constitute a single line, and a single word corresponds to 32 bits; a single line therefore corresponds to 128 bits.

The tag array 72 includes a tag RAM including a total of 64 tag memories 0 to 63 corresponding to respective rows of the data array 71. A single tag includes tag bits having a length of 22 bits.

Each row of the data array 71 is assigned a data RAM address. Also, each tag of the tag array 72 is assigned a tag RAM address. There is a correspondence between the data RAM address and the tag RAM address.

Each tag includes a valid bit and a dirty bit. The valid bit indicates whether the corresponding cache line is valid or invalid, and the dirty bit indicates whether the data on the cache line has been updated. In this embodiment, 3 bits of each tag are additionally allocated as security bits indicating the security state.
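One possible packing of the 22-bit tag word of Fig. 7 (17 address-tag bits plus the valid bit, the dirty bit, and the 3 security bits) can be sketched as follows. The exact bit positions are an assumption for illustration, not a layout specified by the document.

```python
# Hypothetical 22-bit tag layout:
# bit 21 = valid, bit 20 = dirty, bits 19..17 = security, bits 16..0 = address tag.
VALID, DIRTY = 1 << 21, 1 << 20
SEC_SHIFT, SEC_MASK = 17, 0b111

def make_tag(addr_tag: int, valid: bool, dirty: bool, sec: int) -> int:
    word = addr_tag & 0x1FFFF            # keep the low 17 address-tag bits
    if valid:
        word |= VALID
    if dirty:
        word |= DIRTY
    return word | ((sec & SEC_MASK) << SEC_SHIFT)

def security_state(tag_word: int) -> int:
    """Extract the 3 security bits from a packed tag word."""
    return (tag_word >> SEC_SHIFT) & SEC_MASK

t = make_tag(0x0ABC, valid=True, dirty=False, sec=0b101)
assert security_state(t) == 0b101
```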

By properly defining the security bits and the permission information, the security of the data in the cache memory can be achieved with the necessary granularity (see, e.g., FIG. 4). Furthermore, even in a processor core having only a simple security function, an advanced security model can be realized by combining an operating system and software.

Further, other methods for storing the security state of each cache line in the cache memory 20 include, for example, a method in which a register or memory is installed separately from the tag, and a method in which a register or memory that holds the security state of each cache line is installed outside the cache line body (neither shown above).

Note that bit compression may be performed when storing the security state of each cache line. In the above example, 3 bits are allocated for the secure state. However, in the case where only four types of values are used in actual operation, the values may be compressed to 2 bits and stored. Such compression/decompression processing may be implemented by using one or both of hardware and software.

In the above-described processes shown in fig. 5 and 6, in the case where the security state of the data for which the processor core 10 requests a memory access does not match the security state stored in the cache memory 20, the corresponding cache line is flushed.

Instead, a modification may be applied in which, even in the case where the security state of the memory access request from the processor core 10 does not match the security state stored in the cache memory 20, the corresponding cache line is not immediately flushed; instead, the MMU 30 is requested to perform only a protection check. According to this modification, if access is allowed as a result of the protection check by the MMU 30, only the security state stored in the cache memory 20 needs to be updated, and the cache flush can be omitted.

Fig. 8 and 9 each show, in the form of a flowchart, a modification of the procedure for controlling memory access in the memory management system 1. In the illustrated procedure, even in the event of a security-state mismatch, if access is allowed as a result of the protection check by the MMU 30, only the security state stored in the cache memory 20 needs to be updated, and the cache flush can be omitted.

Note that the cache memory 20 is configured such that the security state is stored in the cache memory 20 in units of cache lines. Furthermore, in the flowcharts shown in fig. 8 and 9, the processing steps to be performed by the MMU 30 are shown in gray, and the other processing steps are performed in the cache memory 20.

The process begins in response to a memory access request issued by the processor core 10.

First, the cache memory 20 is searched to check whether a cache line corresponding to the virtual address requested by the processor core 10 exists (i.e., whether a cache hit occurs) (step S801). Then, in the case where a cache hit occurs (yes in step S801), it is further checked whether or not the security state transmitted simultaneously with the memory access request is the same as the security state stored in the cache line of the hit request (step S802). Then, if the security state is not changed (yes in step S802), a read process or a write process is performed on the cache line in accordance with the memory access request (step S803), and the process ends.

On the other hand, when there is a change in the security state (no in step S802), the MMU 30 converts the virtual address into a physical address, and in addition checks whether the memory access request from the processor core 10 is permitted with reference to the permission information in the corresponding entry in the TLB 31 (step S814).

In the case where the MMU 30 allows the memory access request from the processor core 10 (yes in step S814), the read processing or the write processing is performed on the cache line (step S815). After that, the security state of the data written to the cache line is stored in the tag (step S816), and the process ends. In other words, when there is a change in the security state of a memory access requested by the processor core 10, if the access is allowed as a result of the protection check by the MMU 30, the security state stored in the tag is simply updated, and the cache flush is omitted.

Further, in the case where the MMU 30 does not allow the memory access request from the processor core 10 (no in step S814), the processing proceeds as follows. When the cache line on which the cache hit occurred is "dirty" (i.e., when the data of the cache line has been updated) (yes in step S804), the data is written back to the main memory 40 under the security state stored in the cache line (step S805).

Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S801), it is subsequently checked whether the cache memory 20 is full and needs replacement (step S806). In the case where replacement is required (yes in step S806), the data to be discarded (i.e., the victim cache line) is determined according to a predetermined replacement algorithm. Then, when the victim line is "dirty" (i.e., when the data has been updated) (yes in step S807), the data of the victim line is written back to the main memory 40 in the secure state stored by the victim line (step S808).

Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S801), or in the case where the security state of the processing of the processor core 10 has changed (no in step S802), the MMU 30 converts the virtual address into a physical address, and in addition, checks whether or not the memory access request from the processor core 10 is permitted with reference to the permission information on the corresponding entry in the TLB 31 (step S809).

In process step S809, the MMU 30 may determine whether to allow the memory access or protect the memory with reference to a flag corresponding to the security status transmitted from the processor core 10 concurrently with the memory access request among the 8-bit permission information stored in the TLB entry corresponding to the requested virtual address (as described above).

Here, in the case where the MMU 30 allows the memory access request from the processor core 10 (yes in step S809), the MMU 30 reads data from the corresponding physical address in the main memory 40 (step S810). Then, the read data is written to a free line or victim line in the cache memory 20 (step S811). Further, the tag information of the cache line is updated (step S812), the security state of the data written to the cache line is stored in the tag (step S816), and the process ends.

Further, in the case where the MMU 30 does not permit the memory access request from the processor core 10 (no in step S809), the MMU 30 returns a protection error to the processor core 10 that is the access request source (step S813), and the process ends.

According to the processes shown in fig. 8 and 9, even in the case where the security state of the memory access request from the processor core 10 does not match the security state stored in the cache memory 20, if access is permitted as a result of the protection check of the MMU 30, only the security state stored in the cache memory 20 needs to be updated, and write-back of data to the main memory 40 can be omitted.
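The modified handling of a security-state mismatch in Figs. 8 and 9 (steps S814 to S816) can be sketched as follows; the dictionary-based cache line and the permission-bit encoding are illustrative assumptions.

```python
# Hypothetical handling of a security-state mismatch per Figs. 8 and 9:
# ask the MMU first (S814) and, if access is allowed, update the stored
# state in place (S815/S816) instead of flushing the line.

def on_state_mismatch(line: dict, perm_bits: int, new_state: int) -> bool:
    """Return True if the hit line can be kept with its state updated."""
    if (perm_bits >> new_state) & 1:   # S814: MMU protection check
        line["state"] = new_state      # S815/S816: update tag, no flush
        return True
    return False                       # fall through to the write-back path (S804...)

line = {"state": 0b010, "dirty": True}
assert on_state_mismatch(line, 0b11111111, 0b011)
assert line["state"] == 0b011          # state updated, write-back avoided
```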

Further, in the case where a predetermined rule regarding permission exists, a modification may also be applied in which, when the security state of the memory access request from the processor core 10 does not match the security state stored in the cache memory 20, the determination as to permission is made in the cache memory 20 in accordance with the predetermined rule. According to this modification, the corresponding cache line is not flushed immediately, and furthermore the MMU 30 need not be requested to perform even a protection check.

In other words, when a predetermined rule regarding permission exists, memory access control can be performed in which the determination as to permission is made in the cache memory 20 according to that rule. Even in the case where the security state of the memory access request from the processor core 10 does not match the security state stored in the cache memory 20, the determination as to permission is made in the cache memory 20 according to the predetermined rule; the MMU 30 therefore need not perform a protection check, and of course the corresponding cache line is not immediately flushed.

For example, assume that the following predetermined rule exists in the cache memory 20: if "write" processing in a given security state has been allowed in the permission check performed by the MMU 30, then the security state obtained by replacing "write" with "read" is also allowed.

Specifically, assume that there is a predetermined rule that, if the security state (developer, user, write) has been allowed by the MMU 30, the security state (developer, user, read) is also allowed. In this case, the security state (developer, user, write) is transferred from the processor core 10 together with the memory access request, while the security state (developer, user, read) is stored in the cache line that hits the memory access request, so the security states do not match. Nevertheless, the cache line that hits the memory access request is accessed and the security state stored in the tag is simply updated; the MMU 30 need not perform a permission check, and the corresponding cache line is not immediately flushed.
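Under the assumption that bit 0 of the 3-bit state encodes the access kind (0 for "read", 1 for "write"), the write-implies-read rule described above might be checked as follows; the encoding is a hypothetical choice, not specified in the document.

```python
# Hypothetical check for the rule "if a state with 'write' has been
# allowed, the same state with 'read' is also allowed".
# Assumed encoding: bits 2..1 = type/mode, bit 0 = access (0 read, 1 write).

def implied_allowed(candidate: int, allowed: int) -> bool:
    """True if `candidate` is allowed by implication from an already
    allowed state `allowed`: same type/mode bits, `allowed` is a 'write'
    and `candidate` is the corresponding 'read'."""
    return ((candidate >> 1) == (allowed >> 1)
            and (allowed & 1) == 1
            and (candidate & 1) == 0)

# (developer, user, read) is implied by an allowed (developer, user, write).
assert implied_allowed(0b000, 0b001)
```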

Alternatively, the predetermined rule may be a rule that allows a memory access request issued in a security state with higher authority. For example, "proprietary" is a security state with higher authority than "developer", and "privileged" is a security state with higher authority than "user". Therefore, even in the case where the security state transferred from the processor core 10 together with the memory access request does not match the security state stored in the cache memory 20, if the authority of the transferred security state is higher, the cache line that hits the memory access request is accessed and the security state stored in the tag is simply updated. The MMU 30 need not perform a protection check, and of course the corresponding cache line is not immediately flushed.
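The higher-authority rule might be sketched as follows, with the security state represented as a (type, mode, access) tuple; the rank values and the tuple form are illustrative assumptions.

```python
# Hypothetical authority ranking: "proprietary" outranks "developer",
# "privileged" outranks "user"; the access kind is ignored here.
RANK_TYPE = {"developer": 0, "proprietary": 1}
RANK_MODE = {"user": 0, "privileged": 1}

def outranks(requested, stored) -> bool:
    """True if the requesting state's authority is at least the stored
    state's authority in both the type and mode dimensions."""
    (rt, rm, _), (st, sm, _) = requested, stored
    return RANK_TYPE[rt] >= RANK_TYPE[st] and RANK_MODE[rm] >= RANK_MODE[sm]

assert outranks(("proprietary", "privileged", "read"),
                ("developer", "user", "read"))
```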

Note that the function of controlling access to the cache memory 20 based on such predetermined rules may be realized by hardware or software, or by a combination of hardware and software.

Fig. 10 and 11 each show, in the form of a flowchart, a modification of the procedure for controlling memory access in the memory management system 1. In the illustrated procedure, in the case where the security states do not match, access to the cache memory 20 is controlled by applying a predetermined rule, existing in the cache memory 20, to the changed security state. In the case where access to the cache memory 20 is permitted, only the security state stored in the cache memory 20 needs to be updated, and the cache flush can be omitted.

Note that the cache memory 20 is configured such that the security state is stored in the cache memory 20 in units of cache lines. Furthermore, in the flowcharts shown in fig. 10 and 11, the processing steps to be performed by the MMU 30 are shown in gray, while the other processing steps are performed in the cache memory 20.

The process begins in response to a memory access request issued by the processor core 10.

First, the cache memory 20 is searched to check whether a cache line corresponding to the virtual address requested by the processor core 10 exists (i.e., whether a cache hit occurs) (step S1001). In the case where a cache hit occurs (yes in step S1001), it is further checked whether the security state transmitted simultaneously with the memory access request is the same as the security state stored in the cache line of the hit request (step S1002). Then, if the security status has not changed (yes in step S1002), a read process or a write process is performed on the cache line in accordance with the memory access request (step S1003), and the process ends.

On the other hand, when there is a change in the security status (no in step S1002), it is checked whether the changed security status satisfies a predetermined rule existing in the cache memory 20 (step S1014). For example, it is checked whether the security state of a memory access requested by the processor core 10 is higher in authority than the security state stored in the cache line of the hit request in the cache memory 20.

In a case where the change of the security state satisfies the predetermined rule (yes in step S1014), the read processing or the write processing is performed on the cache line (step S1015). Thereafter, the security status of the data written to the cache line is stored in the tag (step S1016), and the process ends. In other words, when there is a change in the security state of a memory access requested by the processor core 10, if it is determined that the access is allowed according to a predetermined rule present in the cache memory 20, the security state stored in the tag is simply updated, and the cache flush is omitted.

Further, when the change in the security state does not satisfy the predetermined rule (no in step S1014), the processing proceeds as follows. When the cache line on which the cache hit occurred is "dirty" (i.e., when the data of the cache line has been updated) (yes in step S1004), the data is written back to the main memory 40 under the security state stored in the cache line (step S1005).

Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S1001), it is subsequently checked whether the cache memory 20 is full and needs replacement (step S1006). In the case where replacement is required (yes in step S1006), data to be discarded (i.e., a victim cache line) is determined according to a predetermined replacement algorithm. Then, when the victim line is "dirty" (i.e., when the data has been updated) ("yes" in step S1007), the data of the victim line is written back to the main memory 40 in the secure state stored by the victim line (step S1008).

Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S1001), or in the case where the security state of the processing of the processor core 10 has changed (no in step S1002), the MMU 30 converts the virtual address into a physical address, and in addition, checks whether or not the memory access request from the processor core 10 is permitted with reference to the permission information of the corresponding entry in the TLB 31 (step S1009).

In processing step S1009, the MMU 30 may determine whether to allow the memory access or protect the memory with reference to a flag corresponding to the security status transmitted from the processor core 10 concurrently with the memory access request among the 8-bit permission information stored in the TLB entry corresponding to the requested virtual address (as described above).

Here, in the case where the MMU 30 allows the memory access request from the processor core 10 (yes in step S1009), the MMU 30 reads data from the corresponding physical address in the main memory 40 (step S1010). Then, the read data is written to a free line or victim line in the cache memory 20 (step S1011). Further, the tag information of the cache line is updated (step S1012), the security state of the data written to the cache line is stored in the tag (step S1016), and the process ends.

Further, in the case where the MMU 30 does not allow the memory access request from the processor core 10 (no in step S1009), the MMU 30 returns a protection error to the processor core 10 that is the access request source (step S1013), and the process ends.

According to the processes shown in fig. 10 and 11, even in the case where the security state of the memory access request from the processor core 10 does not match the security state stored in the cache memory 20, if the changed security state is allowed according to the predetermined rule existing in the cache memory 20, only the security state stored in the cache memory 20 needs to be updated, and the write-back of data to the main memory 40 can be omitted.

Note that, in addition to the above-described procedures (fig. 5 and 6, fig. 8 and 9, and fig. 10 and 11), the memory management system 1 may be configured such that, when the security state of the processor core 10 changes, the change is automatically detected in the cache memory 20 employing the virtual address method, so that a cache flush performed by software can be omitted.

The memory management system 1 according to the present embodiment can reduce the amount of information to be stored in the cache memory 20, thereby protecting data in the cache memory 20. Thus, expensive memory resources (flip-flops or SRAMs) for the tag memory can be reduced.

The techniques disclosed in this specification can be readily implemented simply by changing the design of the cache lines. Therefore, there is no need to add a sideband signal (for copying the license information) to the bus connecting the cache memory 20 and the MMU 30, and there is no need to change the design of the MMU 30.

Therefore, the technique disclosed in the present specification has the effect of reducing memory resources and control circuits for protecting data in a cache memory employing the virtual address caching method and improving power efficiency. Accordingly, the techniques disclosed in this specification may be suitably applied to ultra-low power IoT and wearable devices.

INDUSTRIAL APPLICABILITY

The technology disclosed in the present specification has been described above in detail with reference to specific embodiments. However, it is apparent that those skilled in the art can make modifications and substitutions to the embodiments without departing from the gist of the technology disclosed in the present specification.

The memory management technique disclosed in the present specification can be applied to, for example, an embedded device including only a small capacity memory, so that data in a cache memory employing a virtual address cache method can be protected with a small amount of information stored in the cache memory. Of course, the memory management technique disclosed in the present specification can be applied to various types of information processing apparatuses equipped with a general or large-capacity memory and employing a virtual address cache method.

In short, the technology disclosed in the present specification has been described by way of example, and the content described in the present specification should not be construed restrictively. In order to determine the gist of the technology disclosed in the present specification, the claims should be considered.

Note that the technique disclosed in this specification can also adopt the following configuration.

(1) A memory management system, comprising:

A cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transmitted concurrently with a request for the memory access from the processor core; and

and a memory management unit that manages access to the main memory.

(2) The memory management system according to the above (1), wherein,

the state storage unit stores a security state in units of cache lines of the cache memory.

(3) The memory management system according to the above (1) or (2), wherein,

the state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register installed outside a cache line body, and the state storage unit stores a security state for each line of the cache memory.

(4) The memory management system according to any one of the above (1) to (3), wherein,

the memory management unit stores permission information in each entry of the page table in the translation lookaside buffer, the permission information indicating whether access is allowed for each security state, and

The memory management unit determines whether to allow access to the secure state transmitted concurrently with the memory access request from the processor core based on the permission information stored in an entry of a hit request.

(5) The memory management system according to any one of the above (1) to (4), wherein,

in response to a request for the memory access from the processor core, the state storage unit writes data read out from the main memory after protection checking by the memory management unit to the cache memory, and stores the security state in association with a corresponding cache line.

(6) The memory management system according to any one of the above (1) to (5),

in the event that the security state when the processor core requests a memory access does not match the security state stored in the state storage unit, performing a cache flush on the cache line that hit the request.

(7) The memory management system according to any one of the above (1) to (5),

in the case where the security state when the processor core requests a memory access does not match a security state stored in the state storage unit, performing a protection check by the memory management unit, and in the case where the request for the memory access is allowed, accessing a cache line that hits the request and updating the security state stored in the state storage unit.

(8) The memory management system according to any one of the above (1) to (5),

in the case where, although the security state when the processor core requests a memory access does not match the security state stored in the state storage unit, the difference between the security states satisfies a predetermined rule within the cache memory, a cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

(9) The memory management system according to any one of the above (1) to (5),

in the event that the secure state when the processor core requests a memory access is higher in authority than the secure state stored in the state storage unit, accessing a cache line that hits the request and updating the secure state stored in the state storage unit.

(10) The memory management system according to any one of the above (1) to (9), wherein,

the cache memory adopts a virtual address cache method.

(11) A memory management method, comprising:

a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;

a state storing step of storing a security state transmitted simultaneously with the request for the memory access from the processor core; and

a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time when the processor core requests a memory access with a security state stored in a state storage unit.
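The three steps of the method in (11) can be sketched end to end in one hypothetical access function; the single-line cache, the byte-addressed backing array, and all identifiers are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

enum { LINE_SIZE = 4 };

typedef struct {
    bool     valid;
    uint32_t vtag;
    uint32_t sec_state;               /* per-line state storage unit */
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static uint8_t main_memory[256];      /* stand-in for the main memory */

/* Control step: a hit requires both a tag match and a matching
 * security state. Otherwise the reading and state storing steps run:
 * the line is filled from main memory and the requesting security
 * state is recorded alongside it. */
uint8_t mem_read(cache_line_t *line, uint32_t addr, uint32_t sec_state)
{
    uint32_t vtag = addr / LINE_SIZE;
    bool hit = line->valid && line->vtag == vtag &&
               line->sec_state == sec_state;
    if (!hit) {
        for (int i = 0; i < LINE_SIZE; i++)
            line->data[i] = main_memory[vtag * LINE_SIZE + i];
        line->valid = true;
        line->vtag = vtag;
        line->sec_state = sec_state;  /* state storing step */
    }
    return line->data[addr % LINE_SIZE];
}
```

A mismatch in security state thus behaves like a miss: the stale line is replaced by freshly read data tagged with the new state.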

(11-1) the memory management method according to the above (11), wherein,

in the case where the security state of the memory access requested by the processor core does not match the security state stored in the state storage unit, a cache flush is performed in the control step on the cache line that hits the request.

(11-2) the memory management method according to the above (11), wherein,

in the case where the security state when the processor core requests the memory access does not match the security state stored in the state storage unit, the memory management unit performs a protection check, and in the case where the memory access request is permitted, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated in the control step.

(11-3) the memory management method according to the above (11), wherein,

in the event that the security state when the processor core requests the memory access does not match the security state stored in the state storage unit, but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated.

(12) An information processing apparatus comprising:

a processor core;

a main memory;

a cache memory temporarily storing data to which the processor core requests a memory access;

a state storage unit storing a security state transmitted simultaneously with a memory access request from the processor core; and

a memory management unit that manages access to the main memory.

List of reference signs

1 memory management system

10 processor core

20 cache memory

30 MMU

31 TLB

32 page walker

40 Main memory

41 Page Table

100 sensing device

101 CPU

102 MMU

103 SRAM

104 flash memory

105 sensor

106 communication module

107 battery

110 bus

200 base station

201 cloud

202 server.
