Memory management system, memory management method, and information processing apparatus
Reading note: This technology, "Memory management system, memory management method, and information processing apparatus", was devised by Mamun Kazi on 2018-11-21. Its main content is as follows: A memory management system is provided which efficiently protects data in a cache memory based on a virtual address caching scheme. The memory management system includes: a cache memory temporarily storing data to which the processor core requests a memory access; a state storage unit storing a security state transmitted from the processor core simultaneously with the memory access request; and a memory management unit that manages access to the main memory. If the security state has changed when the processor core requests a memory access, a cache flush is performed on the hit cache line.
1. A memory management system, comprising:
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transmitted concurrently with a request for the memory access from the processor core; and
a memory management unit that manages access to the main memory.
2. The memory management system of claim 1,
wherein the state storage unit stores the security state in units of cache lines of the cache memory.
3. The memory management system of claim 1,
wherein the state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register provided outside the cache line body, and the state storage unit stores a security state for each line of the cache memory.
4. The memory management system of claim 1,
wherein the memory management unit stores permission information in each entry of the page table held in a translation lookaside buffer, the permission information indicating whether access is allowed for each security state, and
the memory management unit determines, based on the permission information stored in the entry hit by the request, whether to allow access for the security state transmitted concurrently with the memory access request from the processor core.
5. The memory management system of claim 1,
wherein, in response to a request for the memory access from the processor core, data read out from the main memory after a protection check by the memory management unit is written to the cache memory, and the state storage unit stores the security state in association with the corresponding cache line.
6. The memory management system of claim 1,
wherein, in the event that the security state at the time when the processor core requests a memory access does not match the security state stored in the state storage unit, a cache flush is performed on the cache line that hit the request.
7. The memory management system of claim 1,
wherein, in the case where the security state at the time when the processor core requests a memory access does not match the security state stored in the state storage unit, a protection check is performed by the memory management unit, and in the case where the request for the memory access is allowed, the cache line that hit the request is accessed and the security state stored in the state storage unit is updated.
8. The memory management system of claim 1,
wherein, in the case where the security state at the time when the processor core requests a memory access does not match the security state stored in the state storage unit but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hit the request is accessed and the security state stored in the state storage unit is updated.
9. The memory management system of claim 1,
wherein, in the event that the security state at the time when the processor core requests a memory access is higher in authority than the security state stored in the state storage unit, the cache line that hit the request is accessed and the security state stored in the state storage unit is updated.
10. The memory management system of claim 1,
wherein the cache memory adopts a virtual address cache method.
11. A memory management method, comprising:
a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;
a state storing step of storing a security state transmitted simultaneously with a request for the memory access from the processor core; and
a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time when the processor core requests a memory access with a security state stored in a state storage unit.
12. An information processing apparatus comprising:
a processor core;
a main memory;
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transferred concurrently with a request for a memory access from the processor core; and
a memory management unit that manages access to the main memory.
Technical Field
The technology disclosed in this specification relates to a memory management system, a memory management method, and an information processing apparatus that employ a virtual address caching method.
Background
A Memory Management Unit (MMU) is disposed between a processor and physical memory in a typical memory system. The MMU performs address translation over the entire virtual address space to provide a separate virtual address space for each process, and also provides virtual memory equal to or greater than the real memory capacity.
In addition, in order to overcome the speed gap between the processor and memory, memory is organized hierarchically. Specifically, a high-speed, small-capacity memory built into the same chip as the processor serves as the primary cache. An expensive, high-speed Static Random Access Memory (SRAM) placed near the processor serves as the secondary cache. Finally, a main memory is built from relatively low-speed, inexpensive Dynamic RAM (DRAM).
Here, two methods by which a processor may refer to a cache memory can be cited: a physical address cache method, which searches the cache using the translated physical address, and a virtual address cache method, which searches using the virtual address. In a memory system employing the physical address cache method, the MMU is provided between the processor and the cache memory, and address translation is performed each time the processor accesses the cache. In a memory system adopting the virtual address cache method, by contrast, the MMU is disposed between the cache memory and the main memory, and the processor refers to the cache memory using the virtual address. Only in the case of a cache miss does the MMU perform address translation and access the main memory.
The physical address cache method is the one mainly used in memory systems having a cache memory. However, it has the drawback that address translation is performed in the MMU each time the processor accesses the cache memory, reducing power efficiency and circuit speed.
In the virtual address cache method, on the other hand, address translation in the MMU and access to the main memory occur only in the event of a cache miss, so power consumption is reduced. The virtual address cache method is therefore considered promising for ultra-low-power Internet of Things (IoT) devices that must run on a battery for many hours and for wearable devices that require low power consumption.
Disclosure of Invention
Problems to be solved by the invention
An object of the technology disclosed in the present specification is to provide a memory management system that effectively protects data in a cache memory that employs a virtual address caching method, a memory management method, and an information processing apparatus.
Solution to the problem
A first aspect of the technology disclosed in this specification is a memory management system comprising:
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transmitted concurrently with a request for the memory access from the processor core; and
a memory management unit that manages access to the main memory. The cache memory adopts a virtual address cache method.
Note that the term "system" used herein refers to a logical set of a plurality of devices (or functional modules that realize specific functions), and it does not matter whether each device or functional module is in a single housing.
The state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register provided outside the cache line body. The state storage unit stores a security state for each line of the cache memory.
The memory management system according to the first aspect is configured such that, in the event that the security state at the time when the processor core requests a memory access does not match the security state stored in the state storage unit, a cache flush is performed on the cache line that hit the request.
Alternatively, the memory management system according to the first aspect is configured such that, in the event that the security state at the time when the processor core requests the memory access does not match the security state stored in the state storage unit, a protection check is performed by the memory management unit, and, in the event that the memory access request is allowed, the cache line that hit the request is accessed and the security state stored in the state storage unit is updated.
Alternatively, the memory management system according to the first aspect is configured such that, in the event that the security state at the time when the processor core requests the memory access does not match the security state stored in the state storage unit but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hit the request is accessed and the security state stored in the state storage unit is updated.
Further, a second aspect of the technology disclosed in the present specification is a memory management method comprising:
a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;
a state storing step of storing a security state transmitted simultaneously with a memory access request from the processor core; and
a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time when the processor core requests the memory access with the security state stored in the state storage unit.
Further, a third aspect of the technology disclosed in the present specification is an information processing apparatus comprising:
a processor core;
a main memory;
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transmitted simultaneously with a memory access request from the processor core; and
a memory management unit that manages access to the main memory.
Effects of the invention
According to the technology disclosed in the present specification, a memory management system, a memory management method, and an information processing apparatus can be provided. The memory management system may protect data in a cache memory that employs a virtual address caching method with a small amount of information stored in the cache memory.
Note that the effects described in this specification are merely examples, and the effects of the present invention are not limited thereto. Further, the present invention can exhibit additional effects in addition to the above-described effects.
Other objects, features, and advantages of the technology disclosed in the present specification will be apparent from the embodiments described later and the more detailed description based on the accompanying drawings.
Drawings
Fig. 1 is a diagram schematically showing a configuration example of a system including an embedded device.
Fig. 2 is a diagram showing an example of a hardware configuration of the sensing apparatus 100.
Fig. 3 is a diagram schematically showing a configuration example of the memory management system 1 employing the virtual address cache method.
Fig. 4 is a diagram showing how the security state communicated simultaneously with the memory access request from the processor core is stored.
Fig. 5 is a flowchart (first half) showing a procedure for controlling memory access in the memory management system 1.
Fig. 6 is a flowchart (latter half) showing a procedure for controlling memory access in the memory management system 1.
Fig. 7 is a diagram showing an implementation example of the cache memory 20.
Fig. 8 is a flowchart (first half) showing a modification of a procedure for controlling memory access in the memory management system 1.
Fig. 9 is a flowchart (latter half) showing a modification of a procedure for controlling memory access in the memory management system 1.
Fig. 10 is a flowchart (first half) showing another modification of a procedure for controlling memory access in the memory management system 1.
Fig. 11 is a flowchart (latter half) showing another modification of the procedure for controlling memory access in the memory management system 1.
FIG. 12 is a diagram showing how permission information is copied from the TLB to the cache memory.
Detailed Description
Hereinafter, embodiments of the technology disclosed in the present specification will be described in detail with reference to the accompanying drawings.
There are cases where permission information is set for each piece of data to be processed by the processor. The permission information includes, for example, a security state or protection attributes, such as the users allowed to access the data or the processing allowed on it (read, write, execute, etc.). In such cases, a permission check must be performed to protect the memory each time the processor accesses it.
For example, the MMU includes a Translation Lookaside Buffer (TLB) that stores information for translating virtual addresses into physical addresses in units of pages, and may hold, as a page attribute, permission information about a corresponding page of each entry in the TLB. Thus, in the physical address caching method, the permission check can be easily performed each time the processor core accesses the cache memory.
On the other hand, in the virtual address cache method, the MMU is provided at the subsequent stage of the cache memory (as described above). For this reason, the MMU cannot perform permission checking when the processor core accesses the cache memory. Therefore, another approach (i.e., an MMU independent approach) is needed to protect the memory.
For example, a processing system has been proposed which stores a page attribute of line data for each cache line when data is cached in a cache memory via an MMU as a result of a cache miss (for example, see patent document 1). According to the processing system, permission check can be performed at the time of cache hit based on information in the cache memory, so that memory protection can be easily achieved in the virtual address cache method.
However, in this processing system, capacity for holding a copy of the permission information for each line must be provided separately in the cache memory. Generally, a cache line is smaller than a page, the unit of address translation, so the copied permission information is redundant. For example, assume that the cache line is 16 bytes, the page size is 16 kilobytes, and the total capacity of the cache memory is 16 kilobytes. With such a memory configuration, even if all the data in the cache memory belongs to the same page, 1024 copies of the same permission information must be held in the cache memory, resulting in redundancy.
Furthermore, such a processing system requires sideband signals and control circuitry for sending the permission information from the MMU to the cache memory. Typically, only unidirectional communication from the cache to the MMU exists, so providing such sideband and control circuits increases the circuit cost of communicating information from the MMU to the cache memory.
In addition, a memory management system has been proposed which is configured such that when the access right is changed, the contents of the cache memory are flushed, a cache miss occurs at the next access, and the data must be fetched again from physical memory (for example, see patent document 2). According to this memory management system, after the access right is changed, a permission check can be performed appropriately via the MMU so that data is obtained from the main memory. However, in such a memory management system, the entire contents of the cache memory must first be flushed by an external signal to synchronize the data in the cache memory with the data in the main memory.
Therefore, proposed in the present specification below is a memory management system capable of realizing protection of data in a cache memory that employs a virtual address caching method with a small amount of information stored in the cache memory.
Fig. 1 schematically shows a configuration example of a system including an embedded device to which the memory management system disclosed in this specification can be applied. The system shown includes a sensing device 100, a base station 200, and a server 202. The sensing device 100 corresponds to an embedded device. Server 202 is installed on cloud 201. The sensing device 100 may be wirelessly connected to the base station 200 and access the server 202 via the cloud 201.
The sensing device 100 includes a Central Processing Unit (CPU) 101, an MMU, a main memory, a flash memory 104, a sensor 105, and a communication module 106.
For example, the sensing device 100 is used by being worn by a wearer. The CPU 101 analyzes the behavior (stop, walk, run, etc.) of the wearer based on the detection signal of the sensor 105. Then, the analysis result is wirelessly transmitted from the communication module 106 to the base station 200, and recorded on the server 202 via the cloud 201. The server 202 uses the data received from the sensing device 100 to view the wearer, etc.
Fig. 2 shows a hardware configuration example of the sensing apparatus 100 as an example of an embedded apparatus.
The CPU 101 is connected to the system bus 110 via the
The flash memory 104 stores an application program for estimating the behavior of the wearer based on, for example, signals of the sensors 105, a library to be used when executing the application program, and data such as a behavior estimation dictionary for estimating the behavior of the wearer. In addition, the sensors 105 include one or more sensor devices such as acceleration sensors, barometric pressure sensors, gyroscopes, Global Positioning Systems (GPS), time-of-flight (TOF) image distance sensors, and light detection and ranging (LIDAR) sensors.
These devices connected to the system bus 110 are placed in the physical address space on which address translation is performed by the
The sensing device 100 requires many hours of operation with the
Fig. 3 schematically shows a configuration example of the memory management system 1 employing the virtual address caching method, which is applied to the sensing device 100. The memory management system 1 shown includes a processor core 10, a cache memory 20, an MMU 30, and a main memory.
The
In the event of a cache hit occurring for a virtual address requested by the
The
In the case where an entry corresponding to the virtual address requested by the
On the other hand, in the event that an entry corresponding to the virtual address requested by the
Next, data protection in the memory management system 1 adopting the virtual address cache method will be considered.
In a cache memory adopting the physical address cache method, permission information on the corresponding page is held as a page attribute of each entry in the TLB. This enables the MMU to perform a permission check when the processor core accesses the cache memory, so that address translation is performed only if the access is allowed. As a result, data can be protected at the time of address translation. In the virtual address cache method, on the other hand, the processor core can directly access the cache memory without involving the MMU, so the MMU cannot perform a permission check when the processor core accesses the cache memory. Therefore, the memory needs to be protected by an MMU-independent method (as described above).
For example, the following method may be cited as such method. When a cache miss occurs and data is cached in the
In fig. 12, when the
On the other hand, the
When the
In the example shown in fig. 12, each entry in the
In short, as shown in fig. 12, the method of protecting the memory by copying the permission information in the
Therefore, the present embodiment achieves data protection in a cache memory employing the virtual address cache method by storing a security state transferred simultaneously with a memory access request from the
Fig. 4 shows how the security state transferred simultaneously with the memory access request from the
When a memory access is requested by the
On the other hand, the
For example, flag A indicates with 1 or 0 whether memory access to the corresponding page is allowed for the security state (developer, user, read). Similarly, flag B indicates whether memory access to the corresponding page is allowed for the security state (developer, user, write), and flag C indicates whether memory access to the corresponding page is allowed for the security state (developer, privilege, read).
When a
Although the permission information of each page is represented by 8 bits (as described above), the security state is represented by only 3 bits. Thus, saving the security state of each cache line, rather than the permission information, significantly reduces the required memory capacity. In addition, the security state is sent to the
In the above example, the permission information for each security state is represented by 1 bit: "1" indicates that the security state is allowed, and "0" that it is denied. As a modification, the permission information for each security state may be represented by 2 bits or more. Allocating more bits makes it possible to define detailed system operation according to the level of unauthorized access. For example, using 2 bits for the permission information on security state "A", the detailed system operation shown in Table 1 below may be defined.
[ Table 1]
Note that as a method for storing the security state of each cache line in the
Fig. 5 and 6 each show, in the form of a flow chart, a procedure for controlling memory access in the memory management system 1 employing the virtual address caching method. Note that the
The process begins in response to a memory access request issued by the
First, the
In the case where a cache hit occurs (yes in step S501), it is further checked whether the security state transmitted simultaneously with the memory access request is the same as the security state stored in the cache line that hit the request (step S502).
Then, if the security state is not changed (yes in step S502), a read process or a write process is performed on the cache line in accordance with the memory access request (step S503), and the process ends.
Thus, as long as the security state of the processing of the
On the other hand, when there is a change in the security state (no in step S502), the processing proceeds as follows. When the cache line on which the cache hit has occurred is dirty (i.e., when the data of the cache line has been updated) (yes in step S504), the data is written back to the main memory.
Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S501), it is subsequently checked whether the
Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S501), or in the case where the security state of the processing of the
The techniques disclosed in this specification are similar to conventional techniques in that the mechanism in which the
Further, in the processing step S509, the
Here, in the case where the
Further, in the case where the
According to the memory access process shown in fig. 5 and 6, data in the
Further, according to the memory access procedures shown in fig. 5 and 6, as long as the security state of the processing performed in the
A method of expanding the tag area associated with each cache line may be cited as a method for storing the security state of each cache line in the
Fig. 7 shows an implementation example of the
The
In the example shown, the data array 71 comprises a data RAM in which 64 rows, numbered 0 to 63, constitute a single memory bank. Four words constitute a single row, and a single word corresponds to 32 bits; thus, a single row corresponds to 128 bits.
The tag array 72 includes a tag RAM including a total of 64
Each row of the data array 71 is assigned a data RAM address. Also, each tag of the tag array 72 is assigned a tag RAM address. There is a correspondence between the data RAM address and the tag RAM address.
Each tag includes a valid bit and a dirty bit. The valid bit indicates whether the corresponding cache line is valid or invalid, and the dirty bit indicates whether the data on the cache line has been updated. In this embodiment, the tag additionally allocates 3 bits as security bits to indicate the security state.
By properly defining the security bits and the permission information, the security of the data in the cache memory can be achieved with the necessary granularity (see, e.g., FIG. 4). Furthermore, even in a processor core having only a simple security function, an advanced security model can be realized by combining an operating system and software.
Further, as other methods for storing the security state of each cache line in the
Note that bit compression may be performed when storing the security state of each cache line. In the above example, 3 bits are allocated for the secure state. However, in the case where only four types of values are used in actual operation, the values may be compressed to 2 bits and stored. Such compression/decompression processing may be implemented by using one or both of hardware and software.
In the above-described processes shown in fig. 5 and 6, in the case where the security state of the data for which the
Instead, a modification may be applied in which, even in the case where the security state of the memory access request from the
Figs. 8 and 9 each show, in the form of a flowchart, a modification of the procedure for controlling memory access in the memory management system 1. In the illustrated process, even in the event of a security state mismatch, if access is allowed as a result of a protection check by the MMU 30, only the security state stored in
Note that the
The process begins in response to a memory access request issued by the
First, the
On the other hand, when there is a change in the security state (no in step S802), the MMU 30 translates the virtual address into a physical address and, in addition, checks whether the memory access request from the
In the case where the MMU 30 allows the memory access request from the processor core 10 (yes in step S814), the read processing or the write processing is performed on the cache line (step S815). Thereafter, the security state of the data written to the cache line is stored in the tag (step S816), and the process ends. In other words, when there is a change in the security state of a memory access requested by the
Further, in the case where the MMU 30 does not allow the memory access request from the processor core 10 (no in step S814), the processing proceeds as follows. When the cache line on which the cache hit has occurred is "dirty" (i.e., when the data of the cache line has been updated) (yes in step S804), the data is written back to the main memory.
Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S801), it is subsequently checked whether the
Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S801), or in the case where the security state of the processing of the
In process step S809, the
Here, in the case where the
Further, in the case where the
According to the processes shown in fig. 8 and 9, even in the case where the security state of the memory access request from the
Further, in the case where a predetermined rule regarding permission exists, a modification in which determination regarding permission is made in the
Further, in the case where a predetermined rule regarding the permission exists, it is also possible to perform memory access control in which determination regarding the permission is made in the
For example, assume that the following predetermined rule exists in the cache memory 20: if write processing in a given security state has been allowed in the permission check performed by the MMU 30, then read processing is also allowed for the same security state with "write" replaced by "read".
In particular, it is assumed that there are predefined rules that if the security state (developer, user, write) has been allowed by the
Alternatively, the predetermined rule may be a rule that allows a memory access request issued in a security state with higher authority. For example, "exclusive" is a security state higher in authority than "developer", and "privileged" is a security state higher in authority than "user". Therefore, even in the case where the security state transferred from the
Note that the function of controlling access to the
Fig. 10 and 11 each show a modification of a procedure for controlling memory access in the memory management system 1 in the form of a flowchart. In the illustrated procedure, in the case where the security states do not match, access to the
Note that the
The process begins in response to a memory access request issued by the
First, the
On the other hand, when there is a change in the security status (no in step S1002), it is checked whether the changed security status satisfies a predetermined rule existing in the cache memory 20 (step S1014). For example, it is checked whether the security state of a memory access requested by the
In a case where the change of the security state satisfies the predetermined rule (yes in step S1014), the read processing or the write processing is performed on the cache line (step S1015). Thereafter, the security status of the data written to the cache line is stored in the tag (step S1016), and the process ends. In other words, when there is a change in the security state of a memory access requested by the
Further, when the
Further, in the case where a cache miss occurs on the virtual address requested by the processor core 10 (no in step S1001), it is subsequently checked whether the
Then, when a cache miss occurs on the virtual address requested by the processor core 10 (no in step S1001), or in the case where the security state of the processing of the
In processing step S1009, the
Here, in the case where the
Further, in the case where the
According to the processes shown in figs. 10 and 11, even in the case where the security state of the memory access request from the
Note that, in addition to the above-described procedures (fig. 5 and 6, fig. 8 and 9, and fig. 10 and 11), the memory management system 1 may be configured such that when the security state of the
The memory management system 1 according to the present embodiment can reduce the amount of information to be stored in the cache memory 20 for protecting data, because only the security state, rather than a copy of the permission information in the page table, needs to be stored for each cache line.
The techniques disclosed in this specification can be readily implemented simply by changing the design of the cache lines. Therefore, there is no need to add a sideband signal (for copying the permission information) to the bus connecting the cache memory 20 and the MMU 30.
Therefore, the technique disclosed in the present specification has the effect of reducing the memory resources and control circuits required to protect data in a cache memory employing the virtual address cache method, and of improving power efficiency. Accordingly, the techniques disclosed in this specification can be suitably applied to ultra-low-power IoT and wearable devices.
INDUSTRIAL APPLICABILITY
The technology disclosed in the present specification has been described above in detail with reference to specific embodiments. However, it is apparent that those skilled in the art can make modifications and substitutions to the embodiments without departing from the gist of the technology disclosed in the present specification.
The memory management technique disclosed in the present specification can be applied to, for example, an embedded device including only a small capacity memory, so that data in a cache memory employing a virtual address cache method can be protected with a small amount of information stored in the cache memory. Of course, the memory management technique disclosed in the present specification can be applied to various types of information processing apparatuses equipped with a general or large-capacity memory and employing a virtual address cache method.
In short, the technology disclosed in the present specification has been described by way of example, and the content described in the present specification should not be construed restrictively. In order to determine the gist of the technology disclosed in the present specification, the claims should be considered.
Note that the technique disclosed in this specification can also adopt the following configuration.
(1) A memory management system, comprising:
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transmitted concurrently with a request for the memory access from the processor core; and
a memory management unit that manages access to the main memory.
(2) The memory management system according to the above (1), wherein,
the state storage unit stores a security state in units of cache lines of the cache memory.
(3) The memory management system according to the above (1) or (2), wherein,
the state storage unit includes any one of: a tag memory in the cache memory, a register in the cache memory provided separately from the tag memory, and a memory or register installed outside a cache line body, and the state storage unit stores a security state for each line of the cache memory.
(4) The memory management system according to any one of the above (1) to (3), wherein,
the memory management unit stores permission information in each entry of the page table cached in the translation lookaside buffer, the permission information indicating whether access is allowed for each security state, and
the memory management unit determines whether to allow access for the security state transmitted concurrently with the memory access request from the processor core, on the basis of the permission information stored in the entry that the request hits.
(5) The memory management system according to any one of the above (1) to (4), wherein,
in response to a request for the memory access from the processor core, data read out from the main memory after a protection check by the memory management unit is written to the cache memory, and the state storage unit stores the security state in association with the corresponding cache line.
(6) The memory management system according to any one of the above (1) to (5),
in the event that the security state when the processor core requests a memory access does not match the security state stored in the state storage unit, performing a cache flush on the cache line that hit the request.
(7) The memory management system according to any one of the above (1) to (5),
in the case where the security state when the processor core requests a memory access does not match a security state stored in the state storage unit, a protection check is performed by the memory management unit, and in the case where the request for the memory access is allowed, a cache line that hits the request is accessed and the security state stored in the state storage unit is updated.
(8) The memory management system according to any one of the above (1) to (5),
in the case where, although the security state when the processor core requests a memory access does not match the security state stored in the state storage unit, the difference between the security states satisfies a predetermined rule within the cache memory, a cache line that hits the request is accessed and the security state stored in the state storage unit is updated.
(9) The memory management system according to any one of the above (1) to (5),
in the event that the security state when the processor core requests a memory access is higher in authority than the security state stored in the state storage unit, accessing a cache line that hits the request and updating the security state stored in the state storage unit.
(10) The memory management system according to any one of the above (1) to (9), wherein,
the cache memory adopts a virtual address cache method.
(11) A memory management method, comprising:
a step of reading data, to which a memory access is requested by a processor core, from a main memory and temporarily storing the data in a cache memory;
a state storing step of storing a security state transmitted simultaneously with a request for memory access from the processor core; and
a control step of controlling access to the cache memory and the main memory based on a result of comparing the security state at the time when the processor core requests a memory access with a security state stored in a state storage unit.
(11-1) the memory management method according to the above (11), wherein,
in the case where the security state of the memory access requested by the processor core does not match the security state stored in the state storage unit, a cache flush is performed, in the control step, on the cache line that hits the request.
(11-2) the memory management method according to the above (11), wherein,
in the case where the security state of the memory access requested by the processor core does not match the security state stored in the state storage unit, the memory management unit performs a protection check, and in the case where the memory access request is permitted, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated in the control step.
(11-3) the memory management method according to the above (11), wherein,
in the event that the security state of the memory access requested by the processor core does not match the security state stored in the state storage unit, but the difference between the security states satisfies a predetermined rule within the cache memory, the cache line that hits the request is accessed and the security state stored in the state storage unit is updated in the control step.
(12) An information processing apparatus comprising:
a processor core;
a main memory;
a cache memory temporarily storing data to which the processor core requests a memory access;
a state storage unit storing a security state transmitted simultaneously with a memory access request from the processor core; and
a memory management unit that manages access to the main memory.
List of reference signs
1 memory management system
10 processor core
20 cache memory
30 MMU
31 TLB
32 page walk mechanism
40 Main memory
41 Page Table
100 sensing device
101 CPU
102 MMU
103 SRAM
104 flash memory
105 sensor
106 communication module
107 battery
110 bus
200 base station
201 cloud
202 server.