Cache management method, device and equipment

Document No.: 762507    Publication date: 2021-04-06

Reading note: This technology, "Cache management method, device and equipment", was created by Cui Zehan and Zhang Kesong on 2020-12-17. Abstract: The embodiments of the present application provide a cache management method, apparatus and device, wherein the cache management method includes: sampling from a plurality of instruction fetch requests sent to an upper-level cache; determining a sampling instruction fetch request according to the result of the sampling, wherein the sampling instruction fetch request includes the instruction fetch address of the instruction fetch request obtained by the sampling; and sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, so as to update the kicked-out priority of the content stored in the lower-level cache; wherein the upper-level cache is read in preference to the lower-level cache. The technical solution in the embodiments of the present application helps improve the accuracy of cache management.

1. A method for cache management, comprising:

sampling from a plurality of instruction fetch requests sent to an upper level cache;

determining a sampling instruction fetching request according to the sampling result, wherein the sampling instruction fetching request comprises an instruction fetching address of the instruction fetching request obtained by sampling;

sending the sampling instruction-fetching request to a replacement algorithm for managing a lower-level cache so as to update the kicked-out priority of the content stored in the lower-level cache;

wherein the upper level cache is read in preference to the lower level cache.

2. The cache management method according to claim 1, wherein the sampling from the plurality of instruction fetch requests sent to the upper-level cache comprises at least one of the following sampling modes:

sampling once every first preset number of instruction fetch requests;

sampling once every second preset number of clock cycles;

and recording a third preset number of historical instruction fetch requests, and sampling a new instruction fetch request if it is the same as a recorded historical instruction fetch request.

3. The cache management method according to claim 1, wherein the determining a sample fetch request according to the result of the sampling comprises:

judging the result of the sampling, and determining the sampling instruction fetch request based on the result of the sampling meeting a preset condition, wherein the preset condition indicates how frequently the sampled instruction fetch request is accessed.

4. The cache management method according to claim 1, wherein the determining a sample fetch request according to the result of the sampling comprises: judging the sampling result, and determining the sampling instruction-fetching request based on the sampling result meeting the preset condition; the preset condition comprises at least one of the following conditions:

the instruction fetch address in the result of the sampling is different from the instruction fetch addresses in the previous fourth preset number of sampling results;

the instruction fetch address in the result of the sampling hits the instruction cache;

the instruction decoded from the sampled instruction fetch request hits a micro instruction cache;

the sampled instruction fetch request is in a micro instruction cache fetch mode.

5. The cache management method according to claim 1, wherein the sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache comprises: sending the sampling instruction fetch request to the replacement algorithm managing the lower-level cache through a dedicated interface of the replacement algorithm in the lower-level cache.

6. The cache management method according to claim 1, wherein the sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache comprises: multiplexing a request interface between the upper-level cache and the lower-level cache, and sending the sampling instruction fetch request to the replacement algorithm managing the lower-level cache when the request interface is idle.

7. The cache management method according to claim 1, wherein before the sampling from the plurality of instruction fetch requests sent to the upper-level cache, the method further comprises: determining that the upper-level cache and the lower-level cache are in an inclusive relationship, wherein the inclusive relationship means that all content stored in the upper-level cache is also contained in the lower-level cache.

8. The cache management method according to claim 1, wherein after sending the sample fetch request to a replacement algorithm for managing a lower-level cache, the method further comprises:

when the sampling instruction-fetching request hits the lower-level cache, returning the hit cache block to the upper-level cache;

when the sampling instruction fetch request misses in the lower-level cache, continuing to request the content pointed to by the sampling instruction fetch request, storing the content hit during the continued request into the lower-level cache, and returning the content hit by the continued request to the upper-level cache.

9. The cache management method according to claim 8, wherein the determining a sampling instruction fetch request according to the result of the sampling comprises: determining a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; the returning the hit cache block to the upper-level cache comprises: returning the cache block together with the source identifier to the upper-level cache; the returning the content hit by the continued request comprises: returning the hit content together with the source identifier to the upper-level cache; and the cache management method further comprises: discarding, according to the source identifier, the cache block or content received by the upper-level cache.

10. The cache management method according to claim 1, wherein the determining a sampling instruction fetch request according to the result of the sampling comprises: determining a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; and after sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, determining, according to the source identifier, whether to continue to execute at least one of the following:

when the sampling instruction fetch request misses, continuing to request the content pointed to by the sampling instruction fetch request;

and when the sampling instruction fetch request hits, or hits during the continued request, returning the hit content to the upper-level cache.

11. A cache management apparatus, comprising:

a sampling unit, adapted to sample from a plurality of instruction fetch requests sent to an upper-level cache;

a sampling instruction fetch request determining unit, adapted to determine a sampling instruction fetch request according to the result of the sampling, wherein the sampling instruction fetch request comprises the instruction fetch address of the instruction fetch request obtained by the sampling;

a cache management updating unit, adapted to send the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, so as to update the kicked-out priority of the content stored in the lower-level cache;

wherein the upper level cache is read in preference to the lower level cache.

12. The cache management device of claim 11, wherein the sampling unit comprises at least one of the following sampling subunits:

a first sampling subunit, adapted to sample once every first preset number of instruction fetch requests;

a second sampling subunit, adapted to sample once every second preset number of clock cycles;

and a third sampling subunit, adapted to record a third preset number of historical instruction fetch requests and to sample a new instruction fetch request if it is the same as a recorded historical instruction fetch request.

13. The cache management device according to claim 11, wherein the sampling instruction fetch request determining unit is adapted to judge the result of the sampling, and determine the sampling instruction fetch request based on the result of the sampling meeting a preset condition, wherein the preset condition indicates how frequently the sampled instruction fetch request is accessed.

14. The cache management device according to claim 11, wherein the sampling instruction fetch request determining unit is adapted to judge the result of the sampling, and determine the sampling instruction fetch request based on the result of the sampling meeting a preset condition, wherein the preset condition includes at least one of:

the instruction fetch address in the result of the sampling is different from the instruction fetch addresses in the previous fourth preset number of sampling results;

the instruction fetch address in the result of the sampling hits the instruction cache;

the instruction decoded from the sampled instruction fetch request hits a micro instruction cache;

the sampled instruction fetch request is in a micro instruction cache fetch mode.

15. The cache management apparatus according to claim 11, wherein the cache management updating unit is adapted to send the sample fetch request to a replacement algorithm managing a lower level cache through a dedicated interface of the replacement algorithm in the lower level cache.

16. The apparatus according to claim 11, wherein the cache management updating unit is adapted to multiplex a request interface between the upper-level cache and the lower-level cache, and send the sampling instruction fetch request to the replacement algorithm managing the lower-level cache when the request interface is idle.

17. The cache management device according to claim 11, further comprising: an inclusive relationship determining unit, adapted to determine, before the sampling from the plurality of instruction fetch requests sent to the upper-level cache, that the upper-level cache and the lower-level cache are in an inclusive relationship, wherein the inclusive relationship means that all content stored in the upper-level cache is also contained in the lower-level cache.

18. The cache management device according to claim 11, further comprising: a miss processing unit, adapted to, when the sampling instruction fetch request misses in the lower-level cache, continue to request the content pointed to by the sampling instruction fetch request and store the hit content into the lower-level cache;

and a returning unit, adapted to return the hit cache block to the upper-level cache when the sampling instruction fetch request hits in the lower-level cache, or to return the content hit by the continued request to the upper-level cache when the request misses in the lower-level cache.

19. The cache management device according to claim 18, wherein the cache management updating unit comprises a source identifier unit, adapted to determine a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; the returning unit is adapted to return the cache block or the hit content together with the source identifier to the upper-level cache; and the cache management device further comprises: a discarding unit, adapted to discard, according to the source identifier, the cache block or content received by the upper-level cache.

20. The cache management device according to claim 11, wherein the cache management updating unit comprises a source identifier unit, adapted to determine a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; and the cache management device further comprises a continued execution judging unit, adapted to determine, according to the source identifier, whether to continue to execute at least one of the following: when the sampling instruction fetch request misses, continuing to request the content pointed to by the sampling instruction fetch request; and when the sampling instruction fetch request hits, or hits during the continued request, returning the hit content to the upper-level cache.

21. A processor comprising a cache management apparatus as claimed in any one of claims 11 to 20.

22. A computing device comprising the processor of claim 21.

23. A computing device comprising sampling logic adapted to: sample from a plurality of instruction fetch requests sent to an upper-level cache; determine a sampling instruction fetch request according to the result of the sampling, wherein the sampling instruction fetch request comprises the instruction fetch address of the instruction fetch request obtained by the sampling; and send the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, so as to update the kicked-out priority of the content stored in the lower-level cache; wherein the upper-level cache and the lower-level cache are caches of the computing device, and the upper-level cache is read in preference to the lower-level cache.

24. The computing device of claim 23, wherein the upper level cache comprises an instruction cache, and wherein the sampling logic is located in the instruction cache.

25. The computing device of claim 23, wherein the computing device comprises a branch prediction unit, and wherein the sampling logic is located in the branch prediction unit.

26. A computer device comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, wherein the processor, when executing the computer program, performs the cache management method of any of claims 1 to 10.

27. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed, performs the cache management method according to any one of claims 1 to 10.

Technical Field

The embodiment of the application relates to the field of integrated circuits, in particular to a cache management method, a cache management device and cache management equipment.

Background

In a computing device, using a cache to store data and addresses that need to be accessed frequently is an effective way to improve the running speed of the computing device. Managing the data in the cache so that the cached content is the most frequently used is an important goal of cache management.

There are a number of caching algorithms that can be used to manage the cache; they are commonly referred to as cache replacement algorithms or cache replacement policies, or simply replacement algorithms.

However, it is difficult to accurately manage the cache only by means of the existing caching algorithm.

Summary of the Application

In view of this, an embodiment of the present application provides a cache management method, including:

sampling from a plurality of instruction fetch requests sent to an upper level cache;

determining a sampling instruction fetching request according to the sampling result, wherein the sampling instruction fetching request comprises an instruction fetching address of the instruction fetching request obtained by sampling;

sending the sampling instruction-fetching request to a replacement algorithm for managing a lower-level cache so as to update the kicked-out priority of the content stored in the lower-level cache;

wherein the upper level cache is read in preference to the lower level cache.

Optionally, the sampling from the multiple instruction fetching requests sent to the upper-level cache includes at least one of the following sampling modes:

sampling once every first preset number of instruction fetch requests;

sampling once every second preset number of clock cycles;

and recording a third preset number of historical instruction fetch requests, and sampling a new instruction fetch request if it is the same as a recorded historical instruction fetch request.

Optionally, the determining a sampling instruction fetch request according to the sampling result includes:

judging the result of the sampling, and determining the sampling instruction fetch request based on the result of the sampling meeting a preset condition, wherein the preset condition indicates how frequently the sampled instruction fetch request is accessed.

Optionally, the determining a sampling instruction fetch request according to the sampling result includes: judging the sampling result, and determining the sampling instruction-fetching request based on the sampling result meeting the preset condition; the preset condition comprises at least one of the following conditions:

the instruction fetch address in the result of the sampling is different from the instruction fetch addresses in the previous fourth preset number of sampling results;

the instruction fetch address in the result of the sampling hits the instruction cache;

the instruction decoded from the sampled instruction fetch request hits a micro instruction cache;

the sampled instruction fetch request is in a micro instruction cache fetch mode.

Optionally, the sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache includes: sending the sampling instruction fetch request to the replacement algorithm managing the lower-level cache through a dedicated interface of the replacement algorithm in the lower-level cache.

Optionally, the sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache includes: multiplexing a request interface between the upper-level cache and the lower-level cache, and sending the sampling instruction fetch request to the replacement algorithm managing the lower-level cache when the request interface is idle.

Optionally, before the sampling from the multiple instruction fetch requests sent to the upper-level cache, the method further includes: determining that the upper-level cache and the lower-level cache are in an inclusive relationship, wherein the inclusive relationship means that all content stored in the upper-level cache is also contained in the lower-level cache.

Optionally, the cache management method further includes: after sending the sample fetch request to a replacement algorithm managing a lower level cache:

when the sampling instruction-fetching request hits the lower-level cache, returning the hit cache block to the upper-level cache;

when the sampling instruction fetch request misses in the lower-level cache, continuing to request the content pointed to by the sampling instruction fetch request, storing the content hit during the continued request into the lower-level cache, and returning the cache block hit by the continued request to the upper-level cache.

Optionally, the determining a sampling instruction fetch request according to the result of the sampling includes: determining a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; the returning the hit cache block to the upper-level cache includes: returning the cache block together with the source identifier to the upper-level cache; the returning the content hit by the continued request includes: returning the hit content together with the source identifier to the upper-level cache; and the method further includes: discarding, according to the source identifier, the hit cache block or content received by the upper-level cache.

Optionally, the determining a sampling instruction fetch request according to the result of the sampling includes: determining a source identifier in the sampling instruction fetch request, wherein the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; after sending the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, whether to continue to execute at least one of the following is determined according to the source identifier:

when the sampling instruction fetch request misses, continuing to request the content pointed to by the sampling instruction fetch request;

and when the sampling instruction fetch request hits, or hits during the continued request, returning the hit content to the upper-level cache.

An embodiment of the present application further provides a cache management apparatus, including:

a sampling unit, adapted to sample from a plurality of instruction fetch requests sent to an upper-level cache;

a sampling instruction fetch request determining unit, adapted to determine a sampling instruction fetch request according to the result of the sampling, wherein the sampling instruction fetch request includes the instruction fetch address of the instruction fetch request obtained by the sampling;

a cache management updating unit, adapted to send the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, so as to update the kicked-out priority of the content stored in the lower-level cache;

wherein the upper level cache is read in preference to the lower level cache.

Optionally, the sampling unit includes at least one of the following sampling sub-units:

a first sampling subunit, adapted to sample once every first preset number of instruction fetch requests;

a second sampling subunit, adapted to sample once every second preset number of clock cycles;

and a third sampling subunit, adapted to record a third preset number of historical instruction fetch requests and to sample a new instruction fetch request if it is the same as a recorded historical instruction fetch request.

Optionally, the sampling instruction fetch request determining unit is adapted to judge the result of the sampling, and determine the sampling instruction fetch request based on the result of the sampling meeting a preset condition, where the preset condition indicates how frequently the sampled instruction fetch request is accessed.

Optionally, the sampling instruction fetch request determining unit is adapted to judge the result of the sampling, and determine the sampling instruction fetch request based on the result of the sampling meeting a preset condition, where the preset condition includes at least one of:

the instruction fetch address in the result of the sampling is different from the instruction fetch addresses in the previous fourth preset number of sampling results;

the instruction fetch address in the result of the sampling hits the instruction cache;

the instruction decoded from the sampled instruction fetch request hits a micro instruction cache;

the sampled instruction fetch request is in a micro instruction cache fetch mode.

Optionally, the cache management updating unit is adapted to send the sample fetch request to a replacement algorithm managing a lower-level cache through a dedicated interface of the replacement algorithm in the lower-level cache.

Optionally, the cache management updating unit is adapted to multiplex a request interface between the upper-level cache and the lower-level cache, and send the sampling instruction fetch request to the replacement algorithm managing the lower-level cache when the request interface is idle.

Optionally, the cache management apparatus further includes: an inclusive relationship determining unit, adapted to determine, before the sampling from the plurality of instruction fetch requests sent to the upper-level cache, that the upper-level cache and the lower-level cache are in an inclusive relationship, wherein the inclusive relationship means that all content stored in the upper-level cache is also contained in the lower-level cache.

Optionally, the cache management apparatus further includes:

a miss processing unit, adapted to, when the sampling instruction fetch request misses in the lower-level cache, continue to request the content pointed to by the sampling instruction fetch request and store the hit content into the lower-level cache;

and a returning unit, adapted to return the hit cache block to the upper-level cache when the sampling instruction fetch request hits in the lower-level cache, or to return the content hit by the continued request to the upper-level cache when the request misses in the lower-level cache.

Optionally, the cache management updating unit includes a source identifier unit adapted to determine a source identifier in the sampling instruction fetch request, where the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; the returning unit is adapted to return the cache block together with the source identifier to the upper-level cache; and the cache management device further includes: a discarding unit, adapted to discard, according to the source identifier, the cache block received by the upper-level cache.

Optionally, the cache management updating unit includes a source identifier unit adapted to determine a source identifier in the sampling instruction fetch request, where the source identifier indicates that the sampling instruction fetch request is obtained by the sampling; and the cache management device further includes a continued execution judging unit, adapted to determine, according to the source identifier, whether to continue to execute at least one of the following: when the sampling instruction fetch request misses, continuing to request the content pointed to by the sampling instruction fetch request; and when the sampling instruction fetch request hits, or hits during the continued request, returning the hit content to the upper-level cache.

An embodiment of the present application further provides a processor, where the processor includes the foregoing cache management device.

The embodiment of the application also provides a computing device, and the computing device comprises the processor.

The embodiments of the present application further provide another computing device, which includes sampling logic adapted to: sample from a plurality of instruction fetch requests sent to an upper-level cache; determine a sampling instruction fetch request according to the result of the sampling, wherein the sampling instruction fetch request includes the instruction fetch address of the instruction fetch request obtained by the sampling; and send the sampling instruction fetch request to a replacement algorithm managing a lower-level cache, so as to update the kicked-out priority of the content stored in the lower-level cache; wherein the upper-level cache and the lower-level cache are caches of the computing device, and the upper-level cache is read in preference to the lower-level cache.

Optionally, the upper-level cache includes an instruction cache, and the sampling logic is located in the instruction cache.

Optionally, the computing device comprises a branch prediction unit, the sampling logic being located in the branch prediction unit.

An embodiment of the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor performs the foregoing cache management method when executing the computer program.

The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when run, performs the foregoing cache management method.

According to the technical solution in the embodiments of the present application, sampling instruction fetch requests are determined by sampling the instruction fetch requests sent to the upper-level cache, and the sampling instruction fetch requests are sent to the lower-level cache. The replacement algorithm managing the lower-level cache therefore takes instruction fetch requests that hit in the upper-level cache into account when updating the kicked-out priority of the content stored in the lower-level cache. In the case where the content stored in the upper-level cache is associated with the content stored in the lower-level cache, this avoids the situation in which content is kicked out of the lower-level cache because instructions hitting the upper-level cache were repeatedly ignored, and the upper-level cache is affected as a result.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a storage structure of a computing device;

FIG. 2 is a schematic block diagram of a portion of the hardware abstraction of a computing device in which memory resides;

FIG. 3 is a flowchart of a cache management method according to an embodiment of the present application;

FIG. 4 is a simplified block diagram of a portion of a processor;

FIG. 5 is a simplified block diagram of a portion of a processor in an embodiment of the present application;

FIG. 6 is a flow chart of another cache management method according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of a cache management apparatus according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of a sample fetch request determining unit according to an embodiment of the present application;

FIG. 9 is a schematic structural diagram of another cache management apparatus according to an embodiment of the present application;

FIG. 10 is a schematic structural diagram of another cache management apparatus according to an embodiment of the present application;

FIG. 11 is a schematic structural diagram of another cache management apparatus according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of another cache management apparatus according to an embodiment of the present application;

FIG. 13 is a block diagram of a computer system architecture.

Detailed Description

The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.

As described in the background, in a computing device, using a cache to store data and addresses that need to be accessed frequently is an effective way to increase the operating speed of the computing device.

The computing devices herein are not limited to computer systems, but may be other devices such as handheld devices and devices with embedded applications; some examples of handheld devices include cellular phones, internet protocol devices, digital cameras, Personal Digital Assistants (PDAs), or handheld PCs (personal computers). Other devices with embedded applications may include network computers (Net PCs), set-top boxes, servers, Wide Area Network (WAN) switches, or any other system that can execute one or more instructions of at least one of the presently disclosed embodiments.

FIG. 1 is a schematic diagram of a storage structure of a computing device. In the computing device, the register 11 is a register in a Central Processing Unit (CPU) and has the fastest read-write speed; the cache 12 has the second-fastest read-write speed; and the main memory 13 has the slowest. In one current implementation, the read-write speed of the register 11 is hundreds of times faster than that of the main memory 13. If data were read directly from the main memory 13, a long wait would be required, resulting in wasted resources.

Since the main memory 13 has a large capacity, building it entirely from higher-speed memory would be expensive. One solution is to provide, between the register 11 and the main memory 13, a cache 12 whose read-write speed is significantly higher than that of the main memory 13. When data needs to be read (load) from the main memory, the cache 12 is first searched to see whether the data corresponding to the address is already present.

In particular implementations, the cache may be multi-level, and different CPUs or processing cores (cores) in the CPUs may share the cache, or a private cache may be provided for each CPU or processing Core in the CPU.

FIG. 2 is a schematic block diagram of a portion of the hardware abstraction of a computing device in which memory resides. Two first-level caches (L1 caches) 21 are shown in FIG. 2, located in two CPUs 24 respectively. The two CPUs 24 share one second-level cache (L2 cache) 22 and are located in the same CPU cluster 25. Although not shown in the figure, a plurality of CPU clusters 25 may share one third-level cache (L3 cache) 23 and interact with the main memory 27 through the bus 26.

It should be understood that the number of cache levels (first-level, second-level, third-level) and the locations of the different cache levels described above are only examples, and do not limit the embodiments of the present application.

In one prior art implementation, the read-write speed of the registers within the CPU may be less than 1 ns; the read-write speed of the first-level cache in the cache can be 1ns, the read-write speed of the second-level cache can be 3ns, and the read-write speed of the third-level cache can be 12 ns; the read and write speed of the main memory may be 65 ns.

The above read and write speeds are examples only. They illustrate that the cache may include different levels, that the read-write speed of an upper-level cache is greater than that of a lower-level cache, and that "upper-level" and "lower-level" are relative concepts. For example, in the previous example the first-level cache is the upper-level cache of the second-level cache, and the second-level cache is the lower-level cache of the first-level cache; likewise, the second-level cache is the upper-level cache of the third-level cache, and the third-level cache is the lower-level cache of the second-level cache. Because the upper-level cache is faster, its data is read preferentially, and the lower-level cache is read only on a miss; this improves resource utilization and processing efficiency.
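
For illustration only, this read order can be captured in a few lines of C++; CacheLevel and lookup are hypothetical names, not structures from this application:

    #include <cstdint>
    #include <optional>

    struct CacheLevel {
        virtual std::optional<uint64_t> lookup(uint64_t addr) = 0;  // data on hit
        virtual ~CacheLevel() = default;
    };

    // Read the upper level first; only on a miss fall through to the lower level.
    std::optional<uint64_t> read(CacheLevel& upper, CacheLevel& lower, uint64_t addr) {
        if (auto hit = upper.lookup(addr)) return hit;  // upper-level hit: done
        return lower.lookup(addr);                      // miss: read the lower level
    }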

As described above, the capacity of the cache is limited, and how to manage the data or instructions stored in the cache and improve the hit rate of the data or instructions in the cache is an important issue in the art.

One algorithm that can be used to manage the cache is the replacement algorithm described in the background, namely the cache replacement policy. The replacement algorithm manages the priority of data in the cache: the cache block with the highest priority is kicked out of the cache first, and the data in the cache is thereby updated. Various replacement algorithms known to those skilled in the art may serve as specific implementations of the replacement algorithm herein, such as the Least Recently Used (LRU) algorithm, the Most Recently Used (MRU) algorithm, the LRU Insertion Policy (LIP), and the Re-Reference Interval Prediction (RRIP) algorithm.

The following description takes the least recently used algorithm as an example. The core idea of the least recently used algorithm is: if content has been accessed recently, the probability that it will be accessed in the future is higher; the algorithm optimizes along the time dimension. One implementation is to kick out stored content according to access time: the kicked-out priority of newly added content is lower than that of old content, and the kicked-out priority of accessed content is lower than that of content that has not been accessed. When the storage space under LRU management reaches its upper limit, the content with the highest kicked-out priority is kicked out.
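
As a concrete reference point, the following is a minimal C++ sketch of the LRU idea just described, assuming a single cache set tracked as a recency list; all names are illustrative:

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>

    // Minimal LRU sketch for one cache set: the block at the back of the list is
    // the least recently used, i.e. has the highest kicked-out priority.
    class LruSet {
        std::size_t capacity_;
        std::list<uint64_t> order_;   // front = most recently used
        std::unordered_map<uint64_t, std::list<uint64_t>::iterator> pos_;
    public:
        explicit LruSet(std::size_t capacity) : capacity_(capacity) {}
        void touch(uint64_t block) {              // an access lowers kicked-out priority
            auto it = pos_.find(block);
            if (it != pos_.end()) {
                order_.erase(it->second);         // already resident: refresh recency
            } else if (order_.size() == capacity_) {
                pos_.erase(order_.back());        // full: kick out the LRU block
                order_.pop_back();
            }
            order_.push_front(block);
            pos_[block] = order_.begin();
        }
    };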

In a specific implementation, the lower-level cache may be managed using an LRU algorithm; for example, the LRU algorithm may manage the second-level cache. The stored content may be data or instructions. The minimum granularity for managing the cache may be a cache block (CacheBlock).

It is to be understood that the LRU algorithm for managing the lower-level cache refers to any algorithm conforming to the LRU idea described above; a specific implementation may be an algorithm optimized on the basis of this idea. The following description continues with the LRU algorithm as an example.

As described above, the upper-level cache is accessed with higher priority; if a read hits in the upper-level cache, the read completes and the lower-level cache is not accessed. When the lower-level cache is managed by the LRU algorithm, reads that complete with an upper-level hit are not reflected in it. However, in some application scenarios the content stored in the lower-level cache is associated with the content stored in the upper-level cache, and content kicked out of the lower-level cache because hits in the upper-level cache were repeatedly ignored may affect the upper-level cache and thereby cause errors. Such errors are more significant when the stored content is instructions: while the overhead of a data cache miss can be partly masked by an out-of-order processor, an instruction cache miss necessarily blocks the pipeline, and costs even more if the pipeline needs to be flushed.

Take the upper-level cache as the first-level cache and the lower-level cache as the second-level cache as an example. The LRU update of the second-level cache is not based on the real accesses of the program, but only on the partial accesses caused by first-level cache misses, which exacerbates the problem above. For example, suppose a section of instructions is executed in a loop and accessed very frequently, but hits completely in the first-level cache. The second-level cache then sees no accesses (no hits) for these instructions, so in its LRU algorithm the CacheBlocks corresponding to them never get the chance to have their kicked-out priority lowered. As other CacheBlocks in the same set hit, the replacement priority of the CacheBlocks corresponding to these instructions rises gradually until it becomes the highest and they are finally evicted from the second-level cache.

As previously mentioned, such errors are more significant when the stored content is instructions. One way to mitigate the problem is to adopt, in the LRU algorithm managing the lower-level cache, a policy biased toward cache blocks that store instructions (instruction CacheBlocks). For example, each time an instruction CacheBlock hits in the second-level cache, its kicked-out priority is lowered by a larger amount; or, when an instruction CacheBlock and a data CacheBlock have the same kicked-out priority, the data CacheBlock is selected to be kicked out. However, improving the replacement algorithm alone cannot solve the fundamental problem: the replacement-algorithm updates of the second-level cache are still based only on the partial accesses caused by first-level cache misses. In the extreme case where the first-level cache hits completely, the second-level cache will eventually kick out the instruction CacheBlock, and the aforementioned errors may still occur.
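
To make the instruction-biased policy above concrete, here is a hedged C++ sketch; the priority step sizes and field names are assumptions for illustration, not values from this application:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct BlockState { uint64_t tag; int kickPriority; bool isInstruction; };

    // On a hit, lower an instruction block's kicked-out priority by a larger step
    // (the step sizes 2 and 1 are arbitrary illustrative values).
    void onHit(BlockState& b) { b.kickPriority -= b.isInstruction ? 2 : 1; }

    // Pick the victim with the highest kicked-out priority; on a tie, prefer
    // kicking out a data block rather than an instruction block.
    std::size_t chooseVictim(const std::vector<BlockState>& set) {  // set must be non-empty
        std::size_t victim = 0;
        for (std::size_t i = 1; i < set.size(); ++i) {
            bool higher = set[i].kickPriority > set[victim].kickPriority;
            bool tiePreferData = set[i].kickPriority == set[victim].kickPriority &&
                                 !set[i].isInstruction && set[victim].isInstruction;
            if (higher || tiePreferData) victim = i;
        }
        return victim;
    }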

Although the LRU algorithm is described as an example, it is understood that similar problems occur when other replacement algorithms are used for cache management.

An embodiment of the present application provides a cache management method, which, with reference to fig. 3, may specifically include the following steps:

step S31, sampling from a plurality of instruction fetch requests sent to the upper level cache;

step S32, determining a sampling instruction fetching request according to the sampling result, wherein the sampling instruction fetching request comprises the instruction fetching address of the instruction fetching request obtained by sampling;

step S33, sending the sampling instruction-fetching request to a replacement algorithm for managing a lower-level cache so as to update the kicked-out priority of the content stored in the lower-level cache;

wherein the upper level cache is read in preference to the lower level cache. Since content is kicked out in units of cache blocks, the kick-out of stored content is also referred to in this application as the kick-out of a cache block.
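
The three steps can be pictured with the following C++ sketch, assuming placeholder Sampler and ReplacementPolicy types whose details are developed in the embodiments below; it is an illustration of the flow, not a definitive implementation:

    #include <cstdint>
    #include <optional>

    struct FetchRequest { uint64_t fetchAddr = 0; };

    struct Sampler {                       // S31: decides which requests to sample
        std::optional<FetchRequest> sample(const FetchRequest& req);
    };

    struct ReplacementPolicy {             // manages the lower-level cache
        void updateKickPriority(uint64_t fetchAddr);  // lowers the block's kicked-out priority
    };

    // One pass over a fetch request that is being sent to the upper-level cache.
    void onFetchRequest(Sampler& s, ReplacementPolicy& lower, const FetchRequest& req) {
        if (auto sampled = s.sample(req)) {                 // S31: sample
            FetchRequest sampleReq{sampled->fetchAddr};     // S32: build the sampling fetch request
            lower.updateKickPriority(sampleReq.fetchAddr);  // S33: notify the replacement algorithm
        }
    }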

In the embodiment of the application, the plurality of instruction fetch requests sent to the upper-level cache are sampled, sampling instruction fetch requests are determined, and the sampling instruction fetch requests are sent to the lower-level cache. The replacement algorithm managing the lower-level cache can therefore take instruction fetch requests that hit in the upper-level cache into consideration when updating the kicked-out priority of the content stored in the lower-level cache. In the case where the content stored in the upper-level cache is associated with the content stored in the lower-level cache, this avoids kicking content out of the lower-level cache because hits in the upper-level cache were repeatedly ignored, and thereby avoids the resulting impact on the upper-level cache.

Embodiments of the application may be used in processor cores that use pipelining. Modern processors typically employ pipelining to process instructions in parallel and speed up instruction processing. To avoid waiting for the result of a branch instruction's execution before determining the branch direction, most modern processors employ Branch Prediction techniques.

FIG. 4 is a simplified block diagram of a portion of a processor; the processor shown in FIG. 4 includes a processor core that uses pipelining. In the example shown in FIG. 4, the first-level cache is divided into an instruction cache 42 and a data cache 45, and the second-level cache 44 is shared by instructions and data. The branch prediction unit 41 generates fetch requests to fetch instructions from the instruction cache 42 for the subsequent decode, issue, execute, memory-access and commit units 43; a memory-access instruction also needs to access the data cache 45. The second-level cache 44 is connected to the instruction cache 42 and the data cache 45. A bidirectional request interface exists between each of the instruction cache 42 and the data cache 45 and the second-level cache 44: when an access misses in either or both of the instruction cache 42 and the data cache 45, a request is sent to the second-level cache 44, and the second-level cache 44 may also send requests to either or both of them if cache coherency needs to be handled. There is a unidirectional data path between the instruction cache 42 and the second-level cache 44, because the instruction cache 42 is read-only and never needs to write data back; there is a bidirectional data path between the data cache 45 and the second-level cache 44, because the data cache 45 is readable and writable, and overwritten data in the data cache 45 may be written back to the second-level cache 44.

The decode, issue, execute, memory-access and commit units 43 are a simplified description, not a limitation of their structure; they specifically include a decoding unit, an issue unit, an execution unit, a memory-access unit, a commit unit, and so on. It should also be appreciated that FIG. 4 is a simplified schematic block diagram of only a portion of a processor and is not a limitation on hardware implementations.

In a specific implementation, the relationship between the upper-level cache and the lower-level cache may take various forms, for example an Inclusive relationship, an Exclusive relationship, or a Non-Inclusive Non-Exclusive relationship. Inclusive means that a CacheBlock in the upper-level instruction cache or upper-level data cache is necessarily also in the lower-level cache; Exclusive means that a CacheBlock in the upper-level instruction cache or upper-level data cache is necessarily not in the lower-level cache; Non-Inclusive Non-Exclusive is a relationship between the two.

The inclusive relationship can generally be maintained as follows: when a CacheBlock is fetched into the first-level instruction or data cache, it is also written into the second-level cache; when a CacheBlock is kicked out of the second-level cache, it is also kicked out of the first-level instruction or data cache.
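
A minimal sketch of this maintenance rule, with the two levels modeled simply as sets of resident block tags (an assumption made purely for illustration):

    #include <cstdint>
    #include <unordered_set>

    struct InclusiveHierarchy {
        std::unordered_set<uint64_t> l1, l2;  // resident block tags per level
        void fillIntoL1(uint64_t block) {     // block fetched into the first-level cache
            l1.insert(block);
            l2.insert(block);                 // also written into the second level
        }
        void evictFromL2(uint64_t block) {    // block kicked out of the second level
            l2.erase(block);
            l1.erase(block);                  // must also leave the first level
        }
    };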

Among the three relationships, the Inclusive relationship associates the data stored in the upper-level cache with the data stored in the lower-level cache most closely. If the algorithm managing the lower-level cache ignores the instruction fetch requests of the upper-level cache and data is kicked out of the lower-level cache as a result, the impact on the upper-level cache is therefore the greatest.

For example, to maintain the Inclusive relationship, when a CacheBlock is kicked out of the second-level cache, the corresponding CacheBlock must also be kicked out of the first-level cache. Furthermore, in order to support self-modifying code (Self-modifying-code) in hardware, if the kicked-out CacheBlock has already been fetched into the pipeline, the pipeline needs to be flushed and instruction fetching restarted, which wastes considerable resources.

When the upper-level cache and the lower-level cache are in one of the other two relationships, especially an Exclusive relationship, kicking a CacheBlock out of the lower-level cache has little influence on the upper-level cache. In a specific implementation, therefore, it may first be determined that the upper-level cache and the lower-level cache are in an inclusive relationship before the steps shown in FIG. 3 are performed. This helps save resources.

In a specific implementation, the upper-level cache may be the instruction cache 42 in FIG. 4, and sampling from the plurality of instruction fetch requests sent to the upper-level cache may be sampling the instruction fetch requests output by the branch prediction unit 41. Specifically, if an instruction queue exists at the interface through which the branch prediction unit 41 sends instruction fetch requests to the instruction cache 42, sampling may be performed at the entry or exit of the instruction queue.

It is to be understood that sampling from multiple instruction fetch requests sent to the upper-level cache may also mean sampling instruction fetch requests in other processor structures, for example, sampling instruction fetch requests from a processor that does not employ branch prediction, or sampling instruction fetch requests received by a first-level cache in which the instruction cache and the data cache are integrated; other sampling manners are not limited herein.

Refer now to FIG. 5, which is a simplified block diagram of a portion of a processor in an embodiment of the present application. In particular implementations, sampling from multiple instruction fetch requests may be implemented by the sampling logic 51. In terms of hardware, the sampling logic 51 may be implemented in the branch prediction unit 41, or alternatively in the instruction cache 42.

In particular, the time to sample from multiple fetch requests sent to the upper level cache may be determined in a variety of ways, as described below.

In an embodiment of the present application, the sampling may be performed once every first preset number of instruction fetch requests. The specific value of the first preset number may be determined by evaluating processor performance: if the first preset number is too large, the replacement algorithm in the lower-level cache cannot update the kicked-out priority of the stored content in time; if it is too small, the lower-level cache is sampled excessively, wasting its bandwidth and power. For example, the first preset number may take the value 64 or 128.
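
A counter-based sampler of this kind might look as follows in C++; the interval handling is one possible convention, not mandated by this application:

    #include <cstdint>

    // Sample one fetch request after skipping the first preset number of them.
    class CountSampler {
        uint32_t interval_;
        uint32_t count_ = 0;
    public:
        explicit CountSampler(uint32_t firstPresetNumber) : interval_(firstPresetNumber) {}
        bool shouldSample() {                // call once per fetch request
            if (count_ < interval_) { ++count_; return false; }
            count_ = 0;
            return true;
        }
    };

    // Usage: CountSampler sampler(64); per request, if (sampler.shouldSample()) ...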

In yet another embodiment of the present application, the sampling may be performed once every second preset number of clock cycles. Similar to the first preset number, the specific value of the second preset number can be determined by evaluating processor performance, and the influence of setting it too large or too small is also similar. The second preset number may likewise take the value 64 or 128.

In another embodiment of the present application, a third preset number of historical instruction fetch requests may be recorded, and if a new instruction fetch request is the same as a recorded instruction fetch request, the sampling is performed. The more historical instruction fetch requests are recorded, the greater the hardware overhead; if the third preset number is too small, too few requests match, and the sampling frequency over the plurality of instruction fetch requests sent to the upper-level cache may be too low. The third preset number may be determined by trading off implementation effect against hardware overhead; for example, it may take the value 16 or 32.

Taking a third preset number of 16 as an example, the 16 most recently occurring instruction fetch requests may be recorded. Since instruction fetch requests that have already been issued are historical instruction fetch requests, recording the 16 most recent instruction fetch requests means recording the 16 most recent historical instruction fetch requests. When a new instruction fetch request is the same as any of the 16 stored historical instruction fetch requests, the new instruction fetch request may be sampled. By recording historical instruction fetch requests and triggering sampling on a new instruction fetch request that matches a recorded one, the situation in which the same instruction repeatedly hits the upper-level cache and is repeatedly ignored during loop execution can be avoided, and the impact on the upper-level cache of kicking the instruction out of the lower-level cache is thereby avoided.
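
A sketch of this history-triggered sampling, assuming a 16-entry circular buffer of fetch addresses (a real design would also track entry validity; address 0 is assumed unused here):

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <cstdint>

    // A new request matching a recorded address (likely loop code) triggers a sample.
    class HistorySampler {
        std::array<uint64_t, 16> history_{};   // address 0 assumed unused
        std::size_t next_ = 0;
    public:
        bool shouldSample(uint64_t fetchAddr) {
            bool seen = std::find(history_.begin(), history_.end(), fetchAddr)
                        != history_.end();
            history_[next_] = fetchAddr;       // record as a historical fetch request
            next_ = (next_ + 1) % history_.size();
            return seen;
        }
    };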

In a specific implementation, the sampling manners may be used alone or in combination, and are not limited herein.

In one implementation, the sampling instruction fetch request is itself an instruction fetch request; it is called a sampling instruction fetch request because it is not an original instruction fetch request generated by the processor's normal processing flow. The sampling instruction fetch request is determined according to the result of the sampling, and the result of the sampling may include only the instruction fetch address of the sampled instruction fetch request, or may also include other information about it, such as the instruction fetch mode. The sampling instruction fetch request may be identical to the sampled instruction fetch request. Alternatively, the sampling instruction fetch request may include at least the instruction fetch address corresponding to the sampled instruction fetch request, and may further include other indicative information, such as an instruction fetch mode or a source identifier.
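
One possible layout of such a sampling instruction fetch request, with every field beyond the fetch address treated as optional indicative information; the field names are illustrative, not from this application:

    #include <cstdint>

    enum class FetchMode : uint8_t { InstructionCache, MicroInstructionCache };

    struct SampleFetchRequest {
        uint64_t fetchAddr;        // required: fetch address of the sampled request
        FetchMode fetchMode;       // optional indicative information
        bool fromSampling = true;  // optional source identifier (cf. claims 9 and 10)
    };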

In a specific implementation, the result obtained by sampling, that is, the result of the sampling, may be judged, and the sampling instruction fetch request may be determined based on the result of the sampling meeting the preset condition.

Furthermore, in a specific implementation, the result of the sampling may also include the sampled instruction fetch request itself, so that whether the sampled instruction fetch request hits in the micro instruction cache can be determined from the result of the sampling; alternatively, the result of the sampling may also include the instruction fetch mode the instruction fetch request is in.

In a specific implementation, the preset condition may be used to judge how frequently the sampled instruction fetch request is accessed. A request accessed infrequently has little influence on the update of the lower-level cache; by judging how frequently the sampled instruction fetch request is accessed and determining the sampling instruction fetch request only from sampling results of frequently accessed requests, excessive occupation of lower-level cache resources can be avoided. How the access frequency of an instruction fetch request is judged may vary, as described below.

in an embodiment of the present application, it may be determined whether an instruction decoded from a sampled instruction fetch request hits in a micro instruction cache, and whether a predetermined condition is met. If the instruction decoded by the sampled instruction fetching request hits the micro instruction cache, the frequency of accessing the instruction fetching request is high, the instruction fetching request is judged to meet the preset condition, and the sampled instruction fetching request can be generated according to the sampling result of the instruction fetching request.

The micro instruction cache (Op Cache) sits in parallel with the instruction cache; it is likewise looked up by physical address, and it stores decoded instructions, so hits and misses may occur. Generally, the micro instruction cache is smaller than the instruction cache and stores the most frequently used instructions. If the sampled instruction misses in the micro instruction cache, indicating that the fetch address in the sampled instruction fetch request may not be among the most frequently used, the result of the sampling may be discarded.

In another embodiment of the present application, whether the preset condition is met may be judged from the instruction fetch mode of the sampled instruction fetch request. If the sampled instruction fetch request was in the instruction cache fetch mode when sampled, it is judged not to meet the preset condition; otherwise, it is judged to meet it. For example, if the sampled instruction fetch request was in the micro instruction cache fetch mode when sampled, the sampling instruction fetch request may be determined based on the result of the sampling.

As noted above, the micro-instruction cache is smaller than the instruction cache and stores the most frequently used instructions. In some processors, fetching from the instruction cache and fetching from the micro-instruction cache are distinct modes. A micro-instruction-cache hit or miss applies to a single fetch address; typically, only after several consecutive addresses hit in the micro-instruction cache does the processor enter the micro-instruction-cache fetch mode. Therefore, if the sampled fetch request is in the micro-instruction-cache fetch mode, its instructions are accessed frequently; if it is in the instruction-cache fetch mode, they are accessed infrequently, and the sampling result may be discarded.

In a specific implementation, the preset condition may be set to screen out sampling results that do not repeat within a certain range. For example, a fourth preset number may be set. If the fetch address in a sampling result repeats the fetch address in any of the previous fourth-preset-number sampling results, the preset condition is judged not to be met and the sampling result may be discarded; if it does not repeat any of them, the preset condition is judged to be met and the sample fetch request may be generated from the sampling result.

Similar to the third preset number, the fourth preset number may be determined by balancing performance evaluation against hardware overhead; it may, for example, be 4 or 8. Taking 4 as an example, the 4 most recent sampling results are recorded, and a new sampling result whose fetch address matches any of the 4 stored fetch addresses is discarded. By saving the last fourth-preset-number sampling results, repeatedly sending sample fetch requests with the same fetch address to the lower-level cache can be avoided.
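The screening just described might look as follows. This is a minimal sketch assuming a fourth preset number of 4, a 64-bit fetch address, and an all-ones sentinel for empty history slots; none of these choices is mandated by this embodiment.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Records the fetch addresses of the most recent samples; a new sampling
// result is accepted only if its fetch address differs from all of them.
class RecentSampleFilter {
public:
    // Returns true if fetch_addr does not repeat any of the last
    // kFourthPresetNumber sampled addresses, recording it as the newest
    // entry in that case; returns false (discard) on a repeat.
    bool acceptAndRecord(uint64_t fetch_addr) {
        for (uint64_t recent : history_)
            if (recent == fetch_addr)
                return false;              // repeated: discard this sampling result
        history_[next_] = fetch_addr;      // not repeated: remember it
        next_ = (next_ + 1) % history_.size();
        return true;
    }

private:
    static constexpr std::size_t kFourthPresetNumber = 4;   // could also be 8
    static constexpr uint64_t kInvalidAddr = ~uint64_t{0};  // sentinel for empty slots
    std::array<uint64_t, kFourthPresetNumber> history_{
        kInvalidAddr, kInvalidAddr, kInvalidAddr, kInvalidAddr};
    std::size_t next_ = 0;
};
```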

In a specific implementation, the preset condition may include whether the sampled fetch request hits: if it hits in the instruction cache, the preset condition is judged to be met; if it misses in the instruction cache, the preset condition is judged not to be met and the sampling result is discarded. The reason is that on an instruction-cache miss the sampled fetch request itself is already sent to the lower-level cache; generating a sample fetch request from its sampling result would deliver a duplicate request to the lower-level cache, wasting its bandwidth and power.

In specific implementations, the preset conditions can be used alone or in combination, which is not limited here; a combined sketch follows.
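Continuing the sketches above, the preset conditions might be combined into one screening predicate. Which conditions are enabled, and the lookup callbacks icacheHit and opCacheHoldsDecoded, are illustrative assumptions standing in for a concrete processor's cache-lookup logic.

```cpp
#include <cstdint>
#include <functional>

// Hypothetical combined screening predicate; any subset of the checks
// below may be enabled in a real design.
bool meetsPresetCondition(
    const SampleResult& result,
    RecentSampleFilter& filter,
    const std::function<bool(uint64_t)>& icacheHit,           // fetch address hits I-cache
    const std::function<bool(uint64_t)>& opCacheHoldsDecoded  // decoded instruction hits op cache
) {
    if (!filter.acceptAndRecord(result.fetch_addr))
        return false;  // repeats one of the last fourth-preset-number samples
    if (result.fetch_mode != FetchMode::OpCacheMode)
        return false;  // instruction-cache fetch mode: accessed infrequently
    if (!icacheHit(result.fetch_addr))
        return false;  // I-cache miss: the real request already goes to the lower level
    return opCacheHoldsDecoded(result.fetch_addr);
}
```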

In particular implementations, sending the sample fetch request to the replacement algorithm of the lower-level cache may be implemented in a variety of ways.

In an embodiment of the present application, the sample fetch request may be sent through a dedicated interface to the replacement algorithm managing the lower-level cache. The dedicated interface may be newly added on top of the hardware of an existing processor.

For example, with continued reference to fig. 5, in one particular implementation the dedicated interface may directly connect the sampling logic 51 and the replacement algorithm in the second-level cache 44. Through this dedicated interface, the sample fetch request generated by the sampling logic 51 may be sent directly to the replacement algorithm in the second-level cache 44.

In another embodiment of the present application, the request interface between the upper-level cache and the lower-level cache may be multiplexed: when the request interface is idle, the sample fetch request is sent over it to the replacement algorithm (for example, a least-recently-used algorithm) that manages the lower-level cache.

For example, referring in conjunction with fig. 5, the sampling logic 51 may be located in the instruction cache 42, and may send a sample fetch request to a replacement algorithm managing a lower level cache when the request interface between the instruction cache 42 and the second level cache 44 is idle.

Sending the sample fetch request by multiplexing saves hardware resources. If the request interface stays busy so that the previous sample fetch request has not been sent when the next one is generated, the pending sample fetch request can simply be updated to the newer one, as sketched below.
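Continuing the sketches above, the overwrite-on-busy behavior might be modeled as follows. The single pending slot and the names MultiplexedSampleSender, submit, and tick are assumptions for illustration, not a definitive implementation.

```cpp
#include <optional>

// One pending slot for the sample fetch request awaiting the request
// interface shared with normal fetch requests.
class MultiplexedSampleSender {
public:
    // Called whenever a new sample fetch request is generated. If the
    // previous one has not gone out yet, it is simply overwritten, i.e.
    // the pending sample fetch request is updated rather than queued.
    void submit(const SampleFetchRequest& req) { pending_ = req; }

    // Called each cycle; uses the request interface only when it is idle.
    void tick(bool interface_idle) {
        if (interface_idle && pending_) {
            driveRequestInterface(*pending_);
            pending_.reset();
        }
    }

private:
    // Stand-in for driving the shared upper-/lower-level request interface.
    void driveRequestInterface(const SampleFetchRequest&) { /* omitted */ }

    std::optional<SampleFetchRequest> pending_;
};
```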

With reference to fig. 3 and fig. 6, in a specific implementation, after the step S33 sends the sample fetch request to the replacement algorithm managing the lower level cache, the method may further include the following steps:

step S61, determining whether the sample fetch request hits in the lower level cache, if yes, performing step S62, and if no, performing step S63;

step S62, returning the hit cache block to the upper-level cache;

step S63, continuing to request the content pointed to by the sample fetch request, where the continued request may be issued to the next-level cache below the lower-level cache;

step S64, storing the content hit during the continued request into the lower-level cache, and returning the hit content to the upper-level cache; when returning, the lower-level cache may return the content as one of its cache blocks. A sketch of these steps follows.
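Steps S61 to S64 might be modeled as the following self-contained sketch. CacheLevel, lookup, and fill are stand-ins for real cache logic; this is illustrative pseudocode in C++, not a definitive hardware implementation.

```cpp
#include <cstdint>
#include <optional>

struct CacheBlock { /* the stored content of one cache line */ };

struct CacheLevel {
    CacheLevel* next = nullptr;  // the next-lower level (or a memory-side proxy)

    // Stand-ins for a real cache's lookup and fill logic.
    std::optional<CacheBlock> lookup(uint64_t /*addr*/) { return std::nullopt; }
    void fill(uint64_t /*addr*/, const CacheBlock& /*blk*/) {}
};

// Steps S61-S64 for a sample fetch request arriving at the lower-level cache.
std::optional<CacheBlock> handleSampleFetch(CacheLevel& lower, uint64_t addr) {
    // S61/S62: on a hit, return the hit cache block to the upper-level cache.
    if (auto blk = lower.lookup(addr))
        return blk;

    // S63: on a miss, continue requesting the pointed-to content downward.
    for (CacheLevel* lvl = lower.next; lvl != nullptr; lvl = lvl->next) {
        if (auto blk = lvl->lookup(addr)) {
            lower.fill(addr, *blk);  // S64: store the hit content in the lower-level cache
            return blk;              // ...and return it to the upper-level cache
        }
    }
    return std::nullopt;  // in practice the request would fall through to memory
}
```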

As described above, upper-level and lower-level caches are relative concepts. On a lower-level cache miss, data may be requested from the next level below that cache, which is itself a lower-level cache relative to the one that missed. For example, with reference to fig. 2, the upper-level cache may be the first-level cache 21 and the lower-level cache the second-level cache 22. The continued request of step S63 may then go to the third-level cache 23, and on a hit there the content of the hit cache block may be stored into the second-level cache 22. If every cache level misses, the request may go further down, for example to memory, and on a hit the memory content pointed to by the sample fetch request may be returned.

In addition, referring to fig. 4 in combination, in a specific implementation the upper-level cache may be the instruction cache 42 and, correspondingly, the lower-level cache the second-level cache 44. After a sample fetch request hits in the second-level cache 44, the hit cache block may be returned to the instruction cache 42. This step gives the flow in the embodiment of the present application better compatibility with existing processors.

In one embodiment, each of these steps is optional: if the sample fetch request hits in the lower-level cache, the lower-level cache need not return the hit cache block; if it misses, the request need not be continued; content obtained by the continued request need not be written back to the lower-level cache; and after the continued request returns content to the lower-level cache, the lower-level cache may or may not return the corresponding cache block to the upper-level cache, and may discard or store the returned content. Likewise, the upper-level cache may discard or keep a returned cache block after receiving it.

In a specific implementation, determining the sample fetch request from the sampling result may include setting a source identifier in the sample fetch request, the source identifier indicating that the request was obtained by sampling. The source identifier may be carried in the type field of the fetch request, for example by adding a new request type as the identifier.

When the lower-level cache returns the hit cache block, it may carry the source identifier as well, so that the upper-level cache can discard the content of a cache block returned for a sample fetch request according to that identifier. Referring to fig. 4 in conjunction, when the upper-level cache is the instruction cache 42, the discarding may be performed by the instruction cache 42.

Because the sample fetch request is generated from a sampled fetch request, the content it hits coincides with the content hit by the sampled fetch request; discarding the content returned for the sample fetch request therefore does not affect normal operation and saves resources.

In particular implementations, the source identifier may be used to decide whether to perform at least one of the following: continuing to request the content pointed to by the sample fetch request on a lower-level cache miss; and returning the hit cache block on a sample fetch request hit, or the memory content that missed in the lower-level cache but hit during the continued request. Specifically, when a fetch request is identified as a sample fetch request by its source identifier, the content it points to need not be requested from the next-level cache, or the hit cache block need not be returned to the upper-level cache after the request.

As described previously, the content hit by the sample fetch request coincides with the content hit by the sampled fetch request. When a fetch request is identified as a sample fetch request by its source identifier, not continuing the request to the next-level cache does not affect normal operation and saves resources; continuing the request, on the other hand, changes the original operation flow less and keeps the logic simpler.
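A sketch of carrying and consulting the source identifier follows. The added request type SampleFetch, the type field layout, and the helper names are assumptions for illustration only.

```cpp
#include <cstdint>

struct CacheBlockData { /* returned stored content; placeholder */ };

// A source identifier carried in the request's type field: one added
// request type marks sampling-generated requests.
enum class ReqType { NormalFetch, SampleFetch };

struct TaggedFetchRequest {
    uint64_t addr;
    ReqType  type;
};

// Stand-in for the upper-level cache's normal fill path.
void installInUpperLevelCache(uint64_t /*addr*/, const CacheBlockData& /*blk*/) {}

// Miss path: the continued request may be skipped for sample fetch
// requests, since the sampled fetch request covers the same content.
bool shouldContinueRequest(const TaggedFetchRequest& req) {
    return req.type != ReqType::SampleFetch;
}

// Return path at the upper-level cache: a block returned with the
// source identifier may be discarded without affecting normal operation,
// because the replacement-priority update has already taken place.
void onBlockReturned(const TaggedFetchRequest& req, const CacheBlockData& blk) {
    if (req.type == ReqType::SampleFetch)
        return;  // discard the content of the returned cache block
    installInUpperLevelCache(req.addr, blk);  // normal handling
}
```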

Those of skill in the art will understand that the descriptions herein of "in a particular implementation," "an embodiment," "for example," etc., mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art can combine the various embodiments or examples and the features of different embodiments or examples described in this application, provided they do not contradict each other.

Additionally, any process or method descriptions in flow charts or otherwise described herein in the foregoing embodiments may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.

In the embodiment of the present application, the fetch requests sent to the upper-level cache are sampled, sample fetch requests are determined, and the sample fetch requests are sent to the lower-level cache. The replacement algorithm managing the lower-level cache can thus take into account fetch requests that hit in the upper-level cache when updating the kicked-out priority of the content stored in the lower-level cache. This avoids the situation in which, when the content stored in the upper-level cache is associated with the content stored in the lower-level cache, content that is repeatedly hit in the upper-level cache is nevertheless ignored and kicked out of the lower-level cache, harming the upper-level cache.

An embodiment of the present application further provides a cache management apparatus, and with reference to fig. 7, the cache management apparatus may include:

a sampling unit 71, adapted to sample from a plurality of fetch requests sent to an upper-level cache;

a sample fetch request determining unit 72, adapted to determine a sample fetch request according to the result of the sampling, where the sample fetch request may include the fetch address of the sampled fetch request;

a cache management updating unit 73 adapted to send the sampling fetch request to a replacement algorithm managing a lower level cache to update a kicked-out priority of contents stored in the lower level cache;

wherein the upper level cache is read in preference to the lower level cache.

Referring to fig. 7 and 8 in combination, in a specific implementation, the sampling unit 71 in fig. 7 may include at least one of the following sampling subunits (a combined sketch follows the list):

a first sampling subunit 81 adapted to perform the sampling every first preset number of instruction fetch requests;

a second sampling subunit 82 adapted to perform said sampling every second preset number of clock cycles;

a third sampling subunit 83, adapted to record a third preset number of historical instruction fetching requests, and perform the sampling if the new instruction fetching request is the same as the recorded historical instruction fetching request.
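A minimal combined sketch of the three sampling triggers follows. The preset numbers shown (64, 1024, 8) and the container choice are illustrative assumptions; in practice each value would be tuned by performance evaluation against hardware overhead, as discussed earlier.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

class SamplingUnit {
public:
    // First subunit: sample every kFirstPresetNumber fetch requests.
    bool sampleByCount() {
        return ++request_count_ % kFirstPresetNumber == 0;
    }

    // Second subunit: sample every kSecondPresetNumber clock cycles.
    bool sampleByCycles(uint64_t cycle) {
        return cycle % kSecondPresetNumber == 0;
    }

    // Third subunit: record kThirdPresetNumber historical fetch requests
    // and sample when a new request repeats a recorded one.
    bool sampleByHistory(uint64_t fetch_addr) {
        bool repeated = std::find(history_.begin(), history_.end(),
                                  fetch_addr) != history_.end();
        history_.push_back(fetch_addr);
        if (history_.size() > kThirdPresetNumber)
            history_.pop_front();
        return repeated;
    }

private:
    static constexpr uint64_t    kFirstPresetNumber  = 64;    // illustrative
    static constexpr uint64_t    kSecondPresetNumber = 1024;  // illustrative
    static constexpr std::size_t kThirdPresetNumber  = 8;     // illustrative
    uint64_t request_count_ = 0;
    std::deque<uint64_t> history_;
};
```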

With continued reference to fig. 7, in a specific implementation, the sampling instruction fetch request determining unit 72 is adapted to determine the result of the sampling, and determine the sampling instruction fetch request based on the result of the sampling meeting a preset condition, where the preset condition is used to indicate how frequently the sampled instruction fetch request is accessed.

In a specific implementation, the sample fetch request determining unit 72 is adapted to judge the result of the sampling and determine the sample fetch request based on a sampling result meeting a preset condition, where the preset condition may include at least one of:

the fetch address in the sampling result differs from the fetch addresses in the previous fourth-preset-number sampling results;

the fetch address in the sampling result hits the instruction cache;

the instruction decoded from the sampled fetch request hits the micro-instruction cache;

the sampled fetch request is in the micro-instruction-cache fetch mode.

In a specific implementation, the cache management updating unit 73 is adapted to send the sample fetch request to a replacement algorithm managing a lower-level cache through a dedicated interface of the replacement algorithm in the lower-level cache.

In a specific implementation, the cache management updating unit 73 is adapted to multiplex a request interface between an upper-level cache and a lower-level cache, and send the sample fetch request to a replacement algorithm for managing the lower-level cache when the request interface is idle.

In a specific implementation, with reference to fig. 7 and fig. 9 in combination, the cache management apparatus may further include: the inclusion relation determining unit 91 is adapted to determine that the upper-level cache and the lower-level cache are in an inclusion relation before sampling from a plurality of instruction fetching requests sent to the upper-level cache, where the inclusion relation indicates that all contents stored in the upper-level cache are included in the lower-level cache. For the specific implementation of other units in fig. 9, please refer to the description already made in fig. 7, which is not repeated herein.

In a specific implementation, with reference to fig. 10 and fig. 7, the cache management apparatus may further include:

a miss processing unit 102, adapted to continue requesting the content pointed to by the sample fetch request when it misses in the lower-level cache, and to store the content hit by the continued request into the lower-level cache; and

a returning unit 101, adapted to return the hit cache block to the upper-level cache when the sample fetch request hits in the lower-level cache, or to return the content hit by the continued request to the upper-level cache on a lower-level cache miss.

For the specific implementation of other units in fig. 10, please refer to the description already made in fig. 7, which is not repeated herein.

With reference to fig. 10 and fig. 11, in a specific implementation the cache management updating unit may include a source identifier unit 111, adapted to set a source identifier in the sample fetch request, the source identifier indicating that the request was obtained by sampling. Further, the cache block returned by the returning unit 101 may carry the source identifier; that is, the miss processing unit 102 may return the cache block or content to the upper-level cache together with the source identifier. The cache management apparatus may further include a discarding unit 112, adapted to discard the cache block or content received by the upper-level cache according to the source identifier. For the specific implementation of the other units in fig. 11, please refer to the description already made for fig. 10, which is not repeated herein.

Referring to fig. 12, in a specific implementation the cache management updating unit may include a source identifier unit 111, adapted to set a source identifier in the sample fetch request, the source identifier indicating that the request was obtained by sampling. The cache management apparatus may further include a continued-execution judging unit 121, adapted to determine, according to the source identifier, whether to perform at least one of the following: continuing to request the content pointed to by the sample fetch request when it misses; and returning the hit content to the upper-level cache when the sample fetch request hits, or hits during the continued request. For the specific implementation of the other units in fig. 12, please refer to the description already made for fig. 11, which is not repeated herein.

The cache management apparatus in the embodiment of the present application corresponds to the cache management method; for its principles, terminology, beneficial effects, and specific implementation, refer to the cache management method in the embodiment of the present application, which is not repeated herein.

The units described in the cache management apparatus in the embodiments of the present application may be wholly or partially implemented by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the present application are generated in whole or in part.

Furthermore, each of the functional modules may be integrated into one processing component, or each of the functional modules may exist alone physically, or two or more functional modules may be integrated into one component.

The cache management apparatus in the embodiment of the present application samples the fetch requests sent to the upper-level cache, determines sample fetch requests, and sends them to the lower-level cache, so that the replacement algorithm managing the lower-level cache can take into account fetch requests that hit in the upper-level cache when updating the kicked-out priority of the content stored in the lower-level cache. This avoids the situation in which, when the content stored in the upper-level cache is associated with the content stored in the lower-level cache, content that is repeatedly hit in the upper-level cache is ignored and kicked out of the lower-level cache, harming the upper-level cache.

The embodiment of the present application further provides a processor, and the processor may include the foregoing cache management device.

Referring to fig. 5 in conjunction, the aforementioned cache management apparatus may be a specific embodiment of the sampling logic 51; that is, the sampling logic 51 in the figure may include the cache management apparatus. It will be appreciated that the sampling logic 51 may be located in the branch prediction unit 41 or in the instruction cache 42.

An embodiment of the present application further provides a computing device, where the computing device includes the processor described above.

As previously mentioned, the computing device is not limited to a computer system. The following describes a computer system as an example of a computing device, not as a limitation.

With reference to FIG. 13 in conjunction, as an alternative example of the disclosure of an embodiment of the present application, FIG. 13 illustrates a block diagram of a computer system architecture; it should be noted that the block diagram is shown for the convenience of understanding the disclosure of the embodiment of the present application, and the computer system in the embodiment of the present application is not limited to the architecture shown in fig. 13.

Referring to fig. 13, the computer system may include: a processor 131, a memory 132 coupled to the processor 131, and a south bridge 133 coupled to the processor.

The processor 131 may include a CISC (complex instruction set computer) microprocessor, a RISC (reduced instruction set computer) microprocessor, a VLIW (very long instruction word) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor.

Processor 131 may integrate at least one processor core 130 for executing at least one instruction. Processor core 130 represents any type of architected processor core, such as a RISC, CISC, VLIW, or hybrid processor core, and may be implemented in any suitable manner. In the case where processor 131 integrates multiple processor cores 130, the cores may be homogeneous or heterogeneous in architecture and/or instruction set; in an alternative implementation, some processor cores may be in-order and others out-of-order; in another alternative implementation, two or more processor cores may execute the same instruction set while others execute only a subset of it or a different instruction set.

When there are multiple processor cores 130, each may have its own private cache, or they may share a cache. In a specific implementation, each processor core may include the sampling logic in the embodiment of the present application. As an alternative example, the processor 131 may integrate a memory controller and the like and expose a memory interface; processor 131 may be coupled to memory 132 through the memory interface. Meanwhile, processor 131 may be coupled to a processor bus and, through the processor bus, to the south bridge 133.

As an alternative example, the south bridge 133 may integrate the bus interface 134 for communicating with the other components of the computer system, so that the processor 131 exchanges signals with most of the other components through the south bridge 133. The components of the computer system can be added or adjusted according to the actual situation and are not described one by one here.

In an alternative example, the bus interfaces 134 integrated into the south bridge 133 include, but are not limited to: a storage (such as a hard disk) bus interface, a USB bus interface, a network controller bus interface, a PCIE bus interface, etc.

It should be noted that the coupling structure of the processor and the south bridge in the exemplary block diagram of fig. 13 is basic; the detailed structure of the processor and the south bridge may be set, adjusted and/or expanded according to the specific use case and is not fixed.

In other computer system architectures, such as those with separate south and north bridges, memory control may be provided by the north bridge; the north bridge is then mainly responsible for signal passing among the graphics card, the memory, and the processor, coupling to the processor above and the south bridge below, while the south bridge is mainly responsible for signal passing among the hard disk, peripherals, the lower-bandwidth IO interfaces, the memory, and the processor.

The above is a computer architecture of a processor and south bridge type, and in other examples of the computer architecture, the computer architecture may also be implemented by SoC (System on Chip); for example, the SoC may integrate a processor, a memory controller, an IO interface, and the like, and the SoC may be coupled with other components such as an external memory, an IO device, and a network card, so as to build a computer architecture on a single main chip.

In addition, the processor described above is not limited to a central processing unit (CPU); it may be an accelerator (e.g., a graphics accelerator or a digital signal processing unit), a graphics processing unit (GPU), a field-programmable gate array (FPGA), or any other processor having an instruction-execution function. Although a single processor is illustrated, in practice a computer architecture may have multiple processors, each with at least one processor core.

Yet another computing device may include sampling logic adapted to sample from a plurality of fetch requests sent to an upper level cache; determining a sampling instruction fetching request according to the sampling result, wherein the sampling instruction fetching request can comprise an instruction fetching address of the instruction fetching request obtained by sampling; sending the sampling instruction-fetching request to a replacement algorithm for managing a lower-level cache so as to update the kicked-out priority of the content stored in the lower-level cache; wherein the superior cache and the inferior cache are caches of the computing device, the superior cache being read in preference to the inferior cache.

In a specific implementation, the sampling logic may be configured to implement the cache management method described above; for its specific implementation and beneficial effects, refer to the foregoing description, which is not repeated herein.

The hardware implementation of the computing device in the embodiment of the present application may refer to fig. 13 and the corresponding text description part, which are not described herein again.

In a specific implementation, reference may be made to fig. 5 and a corresponding text description part for a structure of a processor in a computing device, which are not described herein again.

The embodiment of the present application further provides another computing device, which may include a memory and a processor, where the memory stores a computer program executable on the processor, and the processor performs the foregoing cache management method when executing the computer program.

Such computing devices include, but are not limited to: a server, a desktop computer, a smart phone, a notebook computer, a tablet computer, a smart bracelet, a smart watch, other smart devices, or a distributed processing system formed by communicatively connecting any one or more of such devices.

The embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run, performs the foregoing cache management method.

That is, the cache management method in the above embodiments of the present application may be implemented as software or computer code that can be stored in a recording medium, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded over a network for storage in a local recording medium, so that the method described herein can be processed by such software on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes storage components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the cache management method described herein.

Compared with the prior art, in the embodiment of the present application the fetch requests sent to the upper-level cache are sampled, sample fetch requests are determined, and the sample fetch requests are sent to the lower-level cache, so that the replacement algorithm managing the lower-level cache can take into account fetch requests that hit in the upper-level cache when updating the kicked-out priority of the content stored in the lower-level cache. This avoids the situation in which, when the content stored in the upper-level cache is associated with the content stored in the lower-level cache, content that is repeatedly hit in the upper-level cache is ignored and kicked out of the lower-level cache, harming the upper-level cache.

Although the embodiments of the present application are disclosed above, the present application is not limited thereto. Various changes and modifications may be effected by one skilled in the art without departing from the spirit and scope of the embodiments of the application, and it is intended that the scope of the application be limited only by the claims appended hereto.
