Cache mapping architecture dynamic adjustment method and cache controller

Document No.: 378691  Publication date: 2021-12-10

Reading note: This technique, "cache mapping architecture dynamic adjustment method and cache controller", was designed and created by 卢知伯, 黎健, 何凯帆 and 梁明亮 on 2021-08-25. Its main content: The invention relates to the technical field of chips and discloses a cache mapping architecture dynamic adjustment method and a cache controller. The dynamic adjustment method of the cache mapping architecture comprises the following steps: configuring cache architecture parameters, wherein the cache architecture parameters are used for dynamically configuring a cache controller, and the cache controller comprises a cache storage module used for storing memory data; and sending the cache architecture parameters to the cache controller so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the cache architecture parameters. Therefore, the embodiment can relatively save the chip design area and can dynamically adjust among different mapping architectures according to different application scenarios, thereby meeting different application requirements.

1. A cache mapping architecture dynamic adjustment method is characterized by comprising the following steps:

configuring cache architecture parameters, wherein the cache architecture parameters are used for dynamically configuring a cache controller, and the cache controller comprises a cache storage module used for storing memory data;

and sending the cache architecture parameters to the cache controller so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the internal memory according to the cache architecture parameters.

2. The method according to claim 1, wherein the cache storage module comprises at least one cache set consisting of at least one cache line, the cache architectural parameters comprise a number of cache lines per cache set, and the configuring the cache architectural parameters comprises:

and configuring the number of cache lines of each cache set according to the number of master devices simultaneously accessing the memory.

3. The method of claim 2, wherein configuring the number of cache lines per cache set according to the number of masters simultaneously accessing the memory comprises:

and if the number of master devices is greater than the existing number of cache lines, increasing the number of cache lines of each cache set.

4. The method according to claim 1, wherein the cache storage module comprises at least one cache set composed of at least one cache line, the cache architecture parameters comprise a cache length of each cache line, and the configuring the cache architecture parameters comprises:

and configuring the cache length of each cache line according to the continuity with which the master device accesses the memory.

5. The method of claim 4, wherein the configuring the cache length of each cache line according to the continuity with which the master device accesses the memory comprises:

if the number of consecutive accesses by the master device to the memory is greater than or equal to a first preset count threshold, increasing the cache length of each cache line;

and if the number of consecutive accesses by the master device to the memory is less than or equal to a second preset count threshold, reducing or keeping the cache length of each cache line, wherein the second preset count threshold is less than or equal to the first preset count threshold.

6. The method according to claim 1, wherein the cache storage module comprises at least one cache set composed of at least one cache line, the cache architectural parameters comprise a set number of the cache set, and the configuring the cache architectural parameters comprises:

and configuring the number of the groups according to the size of the storage space of the cache storage module for mapping the memory.

7. The method according to claim 6, wherein the configuring the number of groups according to the size of the storage space of the memory mapped by the cache storage module comprises:

if the storage space of the cache storage module for mapping the memory is larger than a preset storage threshold value, increasing the number of groups;

and if the storage space of the cache storage module for mapping the memory is smaller than a preset storage threshold value, reducing or keeping the number of the groups.

8. The method according to any one of claims 1 to 7,

the configuring cache architecture parameters comprises: configuring a plurality of groups of cache architecture parameters;

correspondingly, the method further comprises the following steps:

sequentially acquiring the operation effect of operating the software program under the mapping architecture corresponding to each group of cache architecture parameters;

determining optimal architecture parameters according to the operation effects;

and sending the optimal architecture parameters to the cache controller so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the optimal architecture parameters.

9. The method of claim 8, wherein the operational effect comprises a runtime for running the software program, and wherein determining optimal architecture parameters according to the respective operational effect comprises:

searching for a minimum run time among the respective run times;

and determining the cache architecture parameter corresponding to the minimum running time as an optimal architecture parameter.

10. A storage medium having stored thereon computer-executable instructions for causing a processor to perform the method for dynamically adjusting a cache mapping architecture of any of claims 1 to 9.

11. A chip, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache mapping architecture dynamic adjustment method of any of claims 1 to 9.

12. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache mapping architecture dynamic adjustment method of any of claims 1 to 9.

13. A cache controller, comprising:

a cache storage module comprising a plurality of cache lines, each cache line being used for storing memory data, tag data and valid bit data;

a programming register group used for storing cache architecture parameters, wherein the cache architecture parameters can be dynamically configured in the programming register group so as to dynamically adjust the mapping architecture of the cache storage module to the memory;

a hit determination module used for determining, according to the tag data, the valid bit data and the cache architecture parameters, whether a cache line in the cache storage module is hit, and if so, controlling the cache storage module to interact with the master device to exchange data, and if not, generating a load command;

a cache line loading module used for accessing the memory according to the load command;

and a cache line update module used for updating, under the control of the cache line loading module, the cache line data corresponding to the cache architecture parameters in the cache storage module.

Technical Field

The invention relates to the technical field of chips, in particular to a cache mapping architecture dynamic adjustment method and a cache controller.

Background

Cache memory (Cache) technology is one of the core technologies in modern processor design and effectively alleviates the mismatch between processor speed and memory speed. Cache mapping architectures include the fully associative mapping architecture, the direct mapping architecture, and the set associative mapping architecture. Generally, the cache architecture is fixed; that is, the number of cache lines (ways), the number of sets, and the cache length of a cache controller are fixed.

Because SoC application scenarios are increasingly numerous and different scenarios place different requirements on cache characteristics, a fixed cache mapping architecture yields uneven cache efficiency across scenarios; the cache may even thrash and slow down access. Designing caches with different architectures for different application scenarios, however, is time-consuming and labor-intensive, increases cost, and easily increases or wastes chip area.

Disclosure of Invention

An object of the embodiments of the present invention is to provide a dynamic adjustment method for a cache mapping architecture and a cache controller, which are used to solve technical defects in the prior art.

In a first aspect, an embodiment of the present invention provides a method for dynamically adjusting a cache mapping architecture, including:

configuring cache architecture parameters, wherein the cache architecture parameters are used for dynamically configuring a cache controller, and the cache controller comprises a cache storage module used for storing memory data;

and sending the cache architecture parameters to the cache controller so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the cache architecture parameters.

Optionally, the cache storage module includes at least one cache group composed of at least one cache line, the cache architecture parameter includes the number of cache lines of each cache group, and the configuring the cache architecture parameter includes:

and configuring the cache line number of each cache group according to the number of main devices simultaneously accessing the memory.

Optionally, the configuring, according to the number of master devices accessing the memory at the same time, the number of cache lines of each cache group includes:

and if the number of master devices is greater than the existing number of cache lines, increasing the number of cache lines of each cache group.

Optionally, the cache storage module includes at least one cache group composed of at least one cache line, the cache architecture parameter includes a cache length of each cache line, and the configuring the cache architecture parameter includes:

and configuring the cache length of each cache line according to the continuity of the main equipment accessing the memory.

Optionally, the configuring, according to the continuity with which the master device accesses the memory, the cache length of each cache line includes:

if the number of consecutive accesses by the master device to the memory is greater than or equal to a first preset count threshold, increasing the cache length of each cache line;

and if the number of consecutive accesses by the master device to the memory is less than or equal to a second preset count threshold, reducing or keeping the cache length of each cache line, wherein the second preset count threshold is less than or equal to the first preset count threshold.

Optionally, the cache storage module includes at least one cache group composed of at least one cache line, the cache architecture parameter includes a group number of the cache group, and the configuring the cache architecture parameter includes:

and configuring the number of the groups according to the size of the storage space of the cache storage module for mapping the memory.

Optionally, the configuring, according to the size of the storage space of the memory mapped by the cache storage module, the number of groups includes:

if the storage space of the cache storage module for mapping the memory is larger than a preset storage threshold value, increasing the number of groups;

and if the storage space of the cache storage module for mapping the memory is smaller than a preset storage threshold value, reducing or keeping the number of the groups.

Optionally, the configuring cache architecture parameters includes: configuring a plurality of groups of cache architecture parameters;

correspondingly, the method further comprises the following steps:

sequentially acquiring the operation effect of operating the software program under the mapping architecture corresponding to each group of cache architecture parameters;

determining optimal architecture parameters according to the operation effects;

and sending the optimal architecture parameters to the cache controller so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the optimal architecture parameters.

Optionally, the operation effect includes an operation time for operating the software program, and the determining an optimal architecture parameter according to each operation effect includes:

searching for a minimum run time among the respective run times;

and determining the cache architecture parameter corresponding to the minimum running time as an optimal architecture parameter.
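The selection of optimal architecture parameters described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the patent; the run-time figures below are invented purely for demonstration.

```python
# Measured run times (seconds) of the same software program under each
# candidate parameter set {M, N, L}; the figures are invented for illustration.
run_times = {
    (4, 3, 128): 1.92,
    (3, 3, 128): 2.10,
    (3, 4, 128): 1.75,
}

# The optimal architecture parameters are those with the minimum run time.
best_params = min(run_times, key=run_times.get)
```

In practice each entry would be obtained by configuring the cache controller with one parameter set, running the software program, and recording its run time.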

In a second aspect, an embodiment of the present invention provides a storage medium storing computer-executable instructions for causing a processor to execute the cache mapping architecture dynamic adjustment method described above.

In a third aspect, an embodiment of the present invention provides a chip, including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache mapping architecture dynamic adjustment method described above.

In a fourth aspect, an embodiment of the present invention provides an electronic device, including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache mapping architecture dynamic adjustment method described above.

In a fifth aspect, an embodiment of the present invention provides a cache controller, including:

a cache storage module comprising a plurality of cache lines, each cache line being used for storing memory data, tag data and valid bit data;

a programming register group used for storing cache architecture parameters, wherein the cache architecture parameters can be dynamically configured in the programming register group so as to dynamically adjust the mapping architecture of the cache storage module to the memory;

a hit determination module used for determining, according to the tag data, the valid bit data and the cache architecture parameters, whether a cache line in the cache storage module is hit, and if so, controlling the cache storage module to interact with the master device to exchange data, and if not, generating a load command;

a cache line loading module used for accessing the memory according to the load command;

and a cache line update module used for updating, under the control of the cache line loading module, the cache line data corresponding to the cache architecture parameters in the cache storage module.

In the cache mapping architecture dynamic adjustment method provided by the embodiment of the invention, cache architecture parameters are configured, where the cache architecture parameters are used for dynamically configuring a cache controller and the cache controller includes a cache storage module used for storing memory data; the cache architecture parameters are then sent to the cache controller, so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the cache architecture parameters. Therefore, this embodiment can relatively save chip design area and can dynamically adjust among different mapping architectures according to different application scenarios, thereby meeting different application requirements.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings; in the figures, like reference numerals denote similar elements, and the figures are not drawn to scale unless otherwise specified.

FIG. 1 is a schematic structural diagram of a cache system according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a cache system according to another embodiment of the present invention;

fig. 3 is a schematic flow chart of a dynamic adjustment method for a cache mapping architecture according to an embodiment of the present invention;

fig. 4a to fig. 4c are schematic diagrams of mapping architectures under different cache architecture parameters according to an embodiment of the present invention;

FIG. 5a is a diagram illustrating a mapping architecture for increasing cache length according to an embodiment of the present invention;

FIG. 5b is a diagram illustrating a mapping architecture with reduced cache length according to an embodiment of the present invention;

FIG. 6a is a schematic diagram of a mapping architecture with increased number of cache sets according to an embodiment of the present invention;

FIG. 6b is a schematic diagram of a mapping architecture after the cache length L is changed according to an embodiment of the present invention;

FIG. 7a is a schematic flow chart illustrating a dynamic adjustment method for a cache mapping architecture according to another embodiment of the present invention;

FIG. 7b is a schematic flow chart of S34 shown in FIG. 7 a;

fig. 8 is a schematic circuit structure diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.

It should be noted that, provided there is no conflict, the various features of the embodiments of the invention may be combined with each other within the scope of protection of the invention. Additionally, although functional modules are divided in the device schematics and logical orders are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division in the devices or the order in the flowcharts. The terms "first", "second", "third", and the like used in the present invention do not limit data or execution order, but merely distinguish identical or similar items having substantially the same function and effect.

Referring to fig. 1, the cache system 100 includes a main device 11, a cache controller 12, and a memory 13, where the cache controller 12 is electrically connected to the main device 11 and the memory 13, respectively.

The master device 11 executes a software program and needs to fetch memory data from the memory 13. When the master device 11 accesses the memory 13, the access is redirected to the cache controller 12. If the storage address of a corresponding cache line in the cache controller 12 matches the address with which the master device 11 accesses the memory 13, the cache controller 12 hits, and the master device 11 can fetch the memory data directly from that cache line. If the addresses do not match, the cache controller 12 misses; it then sends an access request to the memory 13 and loads memory data of the same size as the cache line length from the memory 13 into the cache controller 12, so that the master device 11 can fetch the memory data from the cache controller 12.

In some embodiments, the master device 11 may be any suitable type of device, such as a headset or a camera module. It is understood that, referring to fig. 2, there may be multiple master devices 11, and multiple master devices 11 may access the memory 13 simultaneously.

In some embodiments, with continued reference to fig. 1, the cache controller 12 includes a cache storage module 121, a programmed register set 122, a hit determination module 123, a cache line loading module 124, and a cache line updating module 125.

The cache storage module 121 includes a plurality of cache lines, where each cache line is used to store memory data, tag data and valid bit data. The memory data is the data mapped into the cache; the tag data contains the area code address of the memory data's corresponding location in the memory; and the valid bit data indicates whether the memory data of the cache line is valid. In this embodiment, when the valid bit data is 1 the memory data is invalid, and when the valid bit data is 0 the memory data is valid. Generally, when the memory data is no longer present in the memory, it may be considered invalid even though a copy is still stored in the cache line; conversely, when the memory data is present in the memory and stored in the cache line, it can be considered valid.

The cache storage module 121 includes M cache sets (sets), each cache set includes N ways (way) of cache lines, and the cache length of each way of cache lines is L bytes.

In some embodiments, the cache storage module 121 is a register set or a RAM memory.

The programming register group 122 is configured to store the cache architecture parameters. The cache architecture parameters may be dynamically configured in the programming register group 122 to dynamically adjust the mapping architecture of the cache storage module 121 to the memory 13. The cache architecture parameters include the number of cache sets M, the number of cache lines N, and the cache length L; the mapping architecture may be a fully associative mapping architecture, a direct mapping architecture, or a set associative mapping architecture.

The hit determination module 123 is configured to determine, according to the tag data, the valid bit data, and the cache architecture parameters, whether a cache line in the cache storage module is hit under the dynamically adjusted mapping architecture; if so, it controls the cache storage module 121 to interact with the master device 11 to exchange the memory data, and if not, it generates a load command. For example, the hit determination module 123 receives an external access request from the master device 11 carrying a memory address, where the memory address is the memory area code address + the memory block number address + the internal address. The hit determination module 123 decodes the memory address according to the cache architecture parameters {M, N, L} and determines whether the decoded address is consistent with the area code address of the tag data. If it is consistent and the valid bit data is 0, the cache hits, and the hit determination module 123 controls the cache storage module 121 and the master device 11 to exchange the memory data, for example, to read the memory data.
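The address decoding performed by the hit determination module can be sketched as follows. This is a minimal Python model, not the patent's hardware circuit; the function names and the exact address split (offset, set index, tag) are illustrative assumptions consistent with the parameters {M, N, L}.

```python
def decode_address(addr: int, M: int, L: int) -> tuple[int, int, int]:
    """Split a memory address into (tag, set_index, offset) for a
    set-associative mapping with M cache sets and L-byte cache lines."""
    offset = addr % L              # byte position inside the cache line
    set_index = (addr // L) % M    # which cache set the block maps to
    tag = addr // (L * M)          # remaining high bits: the area code
    return tag, set_index, offset

def is_hit(tag: int, set_index: int, cache_sets) -> bool:
    """cache_sets[set_index] holds (tag, valid) pairs, one per way; per the
    document's convention, valid bit 0 means the cached data is valid."""
    return any(t == tag and v == 0 for t, v in cache_sets[set_index])
```

Because M and L appear as divisors and moduli, changing the cache architecture parameters changes how every address is decoded, which is exactly what a dynamic reconfiguration must account for.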

If the decoded address is not consistent with the tag data, or the valid bit data is 1, the cache misses and a load command is generated. The cache line loading module 124 is configured to access the memory 13 according to the load command; for example, the cache line loading module 124 accesses the memory 13 in units of the cache length L and loads memory data of length L from the memory 13.

The cache line update module 125 is configured to update the data of the cache line corresponding to the cache architecture parameters in the cache storage module 121 under the control of the cache line loading module 124. For example, the cache line update module 125 loads the memory data to be loaded into the corresponding cache line according to the number of cache sets M and the number of cache lines N in the cache architecture parameters, in combination with the memory address of the data to be loaded, thereby completing the data loading operation.
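The cache line update step can be sketched similarly. This is illustrative Python assuming a simple FIFO victim choice, which the patent does not specify; the document's convention that valid bit 0 means valid is kept.

```python
def update_cache_line(cache_sets, set_index, tag, data, N):
    """Write a newly loaded memory block into its cache set, evicting the
    oldest way when all N ways are occupied (FIFO victim choice, chosen
    here only for brevity; the patent does not fix a replacement policy)."""
    ways = cache_sets[set_index]
    if len(ways) >= N:
        ways.pop(0)  # evict the oldest resident line
    # valid bit 0 marks the line valid, per the document's convention
    ways.append({"tag": tag, "valid": 0, "data": data})
```

Note that N bounds the occupancy of each set, so reconfiguring N directly changes how many blocks can coexist in one set before eviction begins.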

It is understood that the hit determining module 123, the cache line loading module 124 and the cache line updating module 125 may be a chip design circuit with logic operation function and storage function, which is composed of integrated circuit devices.

It can be understood that the cache architecture parameters can be repeatedly programmed and configured to meet different application scene requirements.

As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a method for dynamically adjusting a cache mapping architecture. Referring to fig. 3, the method S300 for dynamically adjusting the cache mapping architecture includes:

s31, configuring cache architecture parameters, wherein the cache architecture parameters are used for dynamically configuring a cache controller, and the cache controller comprises a cache storage module used for storing memory data;

as an example and not by way of limitation, the cache architecture parameter is used to indicate a mapping architecture of the cache storage module to the memory, where the mapping architecture is an address mapping relationship in which memory data of the memory is mapped to a corresponding cache line in the cache storage module, and as described above, the mapping architecture includes a fully associative mapping architecture, a direct mapping architecture, or a set associative mapping architecture.

In some embodiments, the cache architecture parameters include a cache group number M, a cache line number N, and a cache length L, where the cache group number M, the cache line number N, and the cache length L may be natural numbers greater than or equal to 1, the cache group number is the number of cache groups in the cache storage module, the cache line number is the number of cache lines in each cache group, and the cache length is the total number of bytes or bits of memory data of the cache line, where a unit of the cache length L may be a byte or a bit.
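The parameter triple described above can be modeled as a small record type. This is an illustrative Python sketch; the class name and method are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CacheArchParams:
    M: int  # number of cache sets in the cache storage module
    N: int  # number of cache lines (ways) per cache set
    L: int  # cache length of each line, in bytes

    def total_bytes(self) -> int:
        # data storage occupied by this configuration: M * N * L
        return self.M * self.N * self.L
```

For example, the configuration {M = 4, N = 3, L = 128} occupies 4 × 3 × 128 = 1536 bytes of line storage.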

In some embodiments, the master device configures the cache architecture parameters, and a CPU of the master device communicates with the cache controller to write the cache architecture parameters into the programming register group 122, so that other modules can subsequently read the cache architecture parameters from the programming register group 122 when performing read or write operations on the cache storage module.

And S32, sending the cache architecture parameters to the cache controller, so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the internal memory according to the cache architecture parameters.

For example, suppose the mapping architecture is a set associative mapping architecture. When executing a first software program, the master device 11 sends the cache architecture parameters {M = 4, N = 3, L = 128} to the cache storage module 121; referring to fig. 4a, the cache storage module 121 forms an address mapping relationship with the memory 13 according to the cache architecture parameters {M = 4, N = 3, L = 128}.

When executing a second software program, the master device 11 sends the cache architecture parameters {M = 3, N = 3, L = 128} to the cache storage module 121, so as to dynamically adjust the mapping architecture of the cache storage module 121 to the memory 13; referring to fig. 4b, the cache storage module 121 forms an address mapping relationship with the memory 13 according to the cache architecture parameters {M = 3, N = 3, L = 128}.

When executing a third software program, the master device 11 sends the cache architecture parameters {M = 3, N = 4, L = 128} to the cache storage module 121; referring to fig. 4c, the cache storage module 121 forms an address mapping relationship with the memory 13 according to the cache architecture parameters {M = 3, N = 4, L = 128}.

It is understood that M may be 1 or another natural number greater than 1, wherein when M = 1, the set associative mapping architecture becomes a fully associative mapping architecture.

It is understood that N may be 1 or another natural number greater than 1, wherein when N = 1, the set associative mapping architecture becomes a direct mapping architecture.

It will also be appreciated that the cache mapping architecture dynamic adjustment methods provided herein may be applicable to fully associative mapping architectures, direct mapping architectures, or set associative mapping architectures.

Because the number of cache sets M, the number of cache lines N, and the cache length L are all configurable, and a cache controller with cache architecture parameters {M, N, L} consumes at least an area proportional to M × N × L, the method provided by this embodiment can dynamically adjust among different mapping architectures under the same chip area so as to meet different application requirements. For example, a cache storage module with a baseline of M = 1, N = 4, L = 256 bytes can be dynamically changed to an architecture with M = 1, N = 2, L = 256 bytes, or to an architecture with M = 1, N = 8, L = 128 bytes, or to an architecture with M = 1, N = 8, L = 64 bytes.
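The area reasoning above can be checked with a short sketch. This is illustrative Python; the baseline and candidate configurations are the ones named in the text, and the helper name is an assumption.

```python
def fits_budget(params, max_bytes):
    """A configuration {M, N, L} fits if its data storage M*N*L does not
    exceed the area provisioned for the baseline module."""
    M, N, L = params
    return M * N * L <= max_bytes

budget = 1 * 4 * 256  # baseline module: M = 1, N = 4, L = 256 bytes
candidates = [(1, 2, 256), (1, 8, 128), (1, 8, 64)]
usable = [p for p in candidates if fits_budget(p, budget)]
```

All three candidates fit within the 1024-byte baseline budget, which is why they can be swapped in dynamically on the same silicon.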

In some embodiments, when the current program phase requires multiple master devices to access the memory simultaneously, a thrashing (oscillating invalidation) phenomenon occurs when the number of master devices is greater than the number of cache lines of each cache set. For example, referring to fig. 4a, the cache mapping architecture adopts a 3-way, 4-set, 128-byte structure, that is, M = 4, N = 3, and L = 128 bytes. When 4 master devices access the memory simultaneously, their accesses are redirected to the cache controller; since the number of master devices, 4, is greater than the number of cache lines per set, N = 3, cache misses will occur with high probability in the current program phase.

As shown in FIG. 4a, master P0 accesses block 0 of area 0 of the memory 13, master P1 accesses block 0 of area 1, master P2 accesses block 0 of area 2, and master P3 accesses block 0 of area 3. According to the set associative mapping architecture, block 0 of area 0, block 0 of area 1, block 0 of area 2 and block 0 of area 3 can each be mapped to any of cache line 0, cache line 1 and cache line 2 of set 0 of the cache storage module 121.

Suppose block 0 of area 0 of the memory 13 is mapped to cache line 0 of set 0 of the cache storage module, block 0 of area 1 is mapped to cache line 1 of set 0, and block 0 of area 2 is mapped to cache line 2 of set 0. Because the master P3 needs to access block 0 of area 3 but the cache does not contain it, the cache controller must select one of the cached blocks (block 0 of area 0 through block 0 of area 2) to evict (suppose block 0 of area 2 is evicted) and then load block 0 of area 3 from the memory 13.

When master P2 then needs to access block 0 of region 2 again, the cache controller must once more flush one of the resident blocks and reload block 0 of region 2 from the memory 13. Access efficiency is low under this pattern: time is spent loading cache lines, yet the loaded data is flushed again before it can be reused.
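The thrashing pattern above can be reproduced with a small simulation. The sketch below models a set-associative cache with LRU replacement (the replacement policy is an assumption; the text does not name one) and replays the four masters' round-robin accesses to block 0 of regions 0 through 3:

```python
from collections import OrderedDict

def simulate(accesses, num_sets, num_ways):
    """Count hits for a set-associative cache with LRU replacement.

    Each access is an absolute memory-block index; a block maps to
    set (index % num_sets), matching the scheme described above."""
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for block in accesses:
        s = sets[block % num_sets]
        if block in s:
            hits += 1
            s.move_to_end(block)       # refresh LRU position
        else:
            if len(s) >= num_ways:
                s.popitem(last=False)  # evict the least recently used line
            s[block] = True
    return hits

# With M = 4 sets, block 0 of regions 0-3 has absolute indices 0, 4, 8, 12,
# so all four masters compete for set 0.
pattern = [0, 4, 8, 12] * 4

print(simulate(pattern, num_sets=4, num_ways=3))  # 3 ways: every access misses (0 hits)
print(simulate(pattern, num_sets=4, num_ways=4))  # 4 ways: hits after the first pass (12 hits)
```

The pattern and round-robin interleaving are illustrative; the point is that once the competing blocks exceed the associativity, LRU evicts each block just before it is reused.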

In some embodiments, the cache storage module includes at least one cache set composed of at least one cache line, and the cache architecture parameters include the number of cache lines in each cache set. When configuring the cache architecture parameters, S31 includes: configuring the number of cache lines in each cache set according to the number of master devices simultaneously accessing the memory.

according to the embodiment, the number of the cache lines of each cache group can be flexibly configured according to the number of the main devices, the oscillation failure probability is reduced, and the data access efficiency is improved.

In some embodiments, if the number of master devices is greater than the existing line count, the number of cache lines in each cache set is increased, where the existing line count is the current number of cache lines in each cache set of the cache storage module.

Referring to fig. 4c, each cache set now has 4 cache lines. When 4 masters access the memory simultaneously, the accesses are routed through the cache controller: master P0 accesses block 0 of region 0 of the memory 13, master P1 accesses block 0 of region 1, master P2 accesses block 0 of region 2, and master P3 accesses block 0 of region 3. Under the set-associative mapping architecture, block 0 of regions 0, 1, 2 and 3 can each be mapped to any of the 0th, 1st, 2nd or 3rd cache line of set 0 of the cache storage module 121.

Assume block 0 of region 0 is mapped to the 0th cache line of set 0 of the cache storage module, block 0 of region 1 to the 1st cache line, block 0 of region 2 to the 2nd cache line, and block 0 of region 3 to the 3rd cache line. The cache now hits the lines required by masters P0 through P3, and since the memory data needed by the masters in subsequent steps no longer has to be repeatedly flushed from the corresponding cache lines, data access efficiency is improved.

In some embodiments, if the number of master devices is less than or equal to the existing line count, the number of cache lines in each cache set is reduced or kept unchanged.

In some embodiments, the cache line length L affects the time the cache controller needs to load memory data from the memory. In the current program phase, when the master device's memory accesses have weak continuity, an overly large L makes the cache controller spend too much load time on unrelated data; conversely, when the accesses are highly continuous, an overly small L makes the master device spend too much time accessing the memory.

For example, taking fig. 4a as an example, the master device only needs to read 64 bytes of memory data, stored in block 2 of region 1 of the memory. Under the mapping architecture of fig. 4a, the cache length is 128 bytes and the cache controller always loads a whole cache line at a time, so when the master device accesses this data, the cache controller must spend a relatively long load time transferring all of block 2 of region 1 from the memory to the corresponding cache line in set 2 of the cache storage module.

For another example, still with fig. 4a, the master device needs to access the memory data of blocks 2 and 3 of region 1 consecutively. Because the cache length is 128 bytes and the cache controller loads a whole cache line at a time, it must first load block 2 of region 1 into the corresponding cache line in set 2 of the cache storage module and then load block 3 of region 1 into the corresponding cache line in set 3, so data access efficiency is relatively low.

In some embodiments, the cache storage module includes at least one cache set composed of at least one cache line, and the cache architecture parameters include the cache length of each cache line. When configuring the cache architecture parameters, S31 includes: configuring the cache length of each cache line according to the continuity of the master device's memory accesses.

With this embodiment, the cache length of each cache line can be flexibly configured according to the continuity of the master device's memory accesses. On one hand, adjusting the cache length to match the data requirements improves load efficiency; on the other hand, it improves the efficiency with which the master device accesses memory data.

In some embodiments, if the number of consecutive memory accesses by the master device is greater than or equal to a first preset count threshold, the cache length of each cache line is increased; if it is less than or equal to a second preset count threshold, the cache length is reduced or kept unchanged, where the second preset count threshold is less than or equal to the first. Both thresholds are chosen by the designer: for example, both may be 2, or the first may be 4 and the second 3.
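The threshold rule above can be sketched as follows. The doubling/halving step and the length bounds are illustrative assumptions, not specified in the text:

```python
def adjust_line_length(current_len, consecutive_accesses,
                       first_threshold=2, second_threshold=2,
                       min_len=64, max_len=512):
    """Sketch of the rule in the text: grow the cache line when accesses
    are highly sequential, shrink it when they are not. The doubling and
    halving steps and the min/max bounds are illustrative assumptions."""
    if consecutive_accesses >= first_threshold:
        return min(current_len * 2, max_len)   # e.g. L = 128 -> 256 (fig. 5a)
    if consecutive_accesses <= second_threshold:
        return max(current_len // 2, min_len)  # e.g. L = 128 -> 64 (fig. 5b)
    return current_len

print(adjust_line_length(128, 2))  # sequential pair of blocks -> 256
print(adjust_line_length(128, 1))  # single short read -> 64
```

With both thresholds set to 2, as in the first example in the text, a run of two consecutive accesses triggers growth; shrinking only applies below the second threshold.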

For example, continuing with fig. 4a and assuming the first preset count threshold is 2: the master device needs to access the memory data of blocks 2 and 3 of region 1 consecutively, so the original cache length L1 = 128 is increased to the current cache length L2 = 256, yielding the new mapping architecture shown in fig. 5a.

In the mapping architecture shown in fig. 5a, the number of cache sets is 2 and each set has 3 cache lines, i.e. M = 2, N = 3 and L = 256. The memory is divided into 8 regions of 2 memory blocks each, where the memory data of block 1 of region 1 in fig. 5a corresponds to the memory data of blocks 2 and 3 of region 1 in fig. 4a.

The memory data of block 1 of region 1 in fig. 5a is mapped to the corresponding cache line in set 1 of the cache storage module shown in fig. 5a. The master device can then fetch the data from that cache line in one access, instead of accessing blocks 2 and 3 of region 1 separately as under the mapping architecture of fig. 4a, which improves the efficiency with which the master device accesses memory data.

It is understood that the number of cache sets and cache lines in the mapping architecture of fig. 5a may be the same as or different from those in fig. 4a, at the designer's discretion.

For another example, referring again to fig. 4a and assuming the second preset count threshold is 2: the master device only needs to read 64 bytes of memory data, stored at the corresponding block address of block 2 of region 1 of the memory. The original cache length L3 = 128 is therefore reduced to the current cache length L4 = 64, yielding the new mapping architecture shown in fig. 5b.

In the mapping architecture shown in fig. 5b, the number of cache sets is 4 and each set has 6 cache lines, i.e. M = 4, N = 6 and L = 64. The memory is divided into 8 regions of 4 memory blocks each, and the memory data of block 1 of region 3 in fig. 5b is assumed to be the memory data of block 2 of region 1 in fig. 4a (the 64 bytes required by the master device).

In fig. 5b, the memory data of block 1 of region 3 is mapped to the corresponding cache line in set 1 of the cache storage module. The master device can then fetch the data from that cache line in one access, without loading 128 bytes of memory data into the corresponding cache line of set 2 as under the mapping architecture of fig. 4a, which improves data access efficiency.

In some embodiments, when the memory space mapped by the cache storage module grows larger, the fewer the cache sets in the cache storage module, the more memory addresses each cache line must map, which increases the cache addressing burden and the chip design area.

For example, as shown in fig. 4a, the cache length L is 128 B, the number of cache sets M is 4, the number of cache lines N is 3, and the cache size is 128 × 4 × 3 = 1536 B.

Assuming the memory size is 16 × 128 = 2048 B, the memory is divided into 4 regions of 4 memory blocks each. When the cache storage module and the memory adopt a set-associative mapping architecture, each cache line may correspond to 4 memory blocks. As described above, a memory address = region-number address + block-number address + internal address, and the address translation table of the cache storage module stores one table unit per cache line, each table unit holding the region-number address of the block stored in the corresponding cache line. In the hit-determination process, the hit determination module first extracts the block-number address from the memory address and jumps to the cache set of the corresponding index in the address translation table. For example, in fig. 4a, a memory address (11 bits) = region-number address (2 bits) + block-number address (2 bits) + internal address (7 bits); if the block-number address is 01, the hit determination module jumps to cache set 1 of the address translation table. It then checks the table units of set 1 one by one for a match with the region-number address; if a match exists, the access hits, and the memory data is accessed via the cache address. As this shows, the region-number address of each cache line has 4 possible values.

Assuming the memory size is doubled to 2048 × 2 = 4096 B while the mapping still uses the cache architecture parameters {M = 4, N = 3, L = 128}: the memory is now divided into 8 regions of 4 memory blocks each, a memory address (12 bits) = region-number address (3 bits) + block-number address (2 bits) + internal address (7 bits), and the region-number address of each cache line has 8 possible values.

Therefore, when the memory grows and the memory space mapped by the cache storage module becomes larger, keeping the mapping architecture unchanged increases the cache addressing pressure.

In some embodiments, the cache storage module includes at least one cache set composed of at least one cache line, and the cache architecture parameters include the set count of the cache sets. When configuring the cache architecture parameters, S31 includes: configuring the set count according to the size of the memory space mapped by the cache storage module.

With this embodiment, the set count can be flexibly configured according to the size of the memory space mapped by the cache storage module, which reduces the cache addressing pressure.

In some embodiments, if the memory space mapped by the cache storage module is larger than a preset storage threshold, the set count is increased; if it is smaller than the preset storage threshold, the set count is reduced or kept unchanged.

For example, in this embodiment, if the number of cache sets of the cache storage module shown in fig. 4a is changed from M = 4 to M = 8, the mapping structure between the cache storage module and the memory must change accordingly; the result is shown in fig. 6a.

Even with the memory size doubled to 2048 × 2 = 4096 B, under the cache architecture parameters {M = 8, N = 3, L = 128} in fig. 6a the memory is divided into 4 regions of 8 memory blocks each, and a memory address (12 bits) = region-number address (2 bits) + block-number address (3 bits) + internal address (7 bits). The region-number address of each cache line again has only 4 possible values, so with this method the cache addressing pressure does not grow when the memory enlarges the mapped space; relatively speaking, this embodiment reduces the cache addressing pressure and improves data access efficiency.
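The addressing-pressure argument can be checked numerically. The helper below (an illustrative sketch, not part of the patent) splits a memory address into the three fields used above and shows how the region (tag) field grows with memory size and shrinks when the set count is increased:

```python
def address_split(mem_size, num_sets, line_len):
    """Split a memory address into region / block-number / internal bit
    widths for the mapping described above. Each cache line has
    2**region_bits possible region-number (tag) values."""
    def log2(x):
        bits = x.bit_length() - 1
        assert 1 << bits == x, "sizes must be powers of two"
        return bits

    internal_bits = log2(line_len)   # byte offset within a block
    block_bits = log2(num_sets)      # block number selects the cache set
    region_bits = log2(mem_size) - block_bits - internal_bits
    return region_bits, block_bits, internal_bits

print(address_split(2048, 4, 128))  # (2, 2, 7): 4 tag values per line (fig. 4a)
print(address_split(4096, 4, 128))  # (3, 2, 7): doubled memory -> 8 tag values
print(address_split(4096, 8, 128))  # (2, 3, 7): doubled set count restores 4 (fig. 6a)
```

The three calls reproduce the 11-bit and 12-bit address splits worked through in the text.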

In summary, with the cache mapping architecture dynamic adjustment method described in the above embodiments, dynamically configuring the cache architecture parameters improves data access efficiency on one hand, and on the other hand saves chip design area, accommodates multiple application scenarios, and reduces design cost.

In the cache mapping architecture dynamic adjustment method described in the above embodiments, the original memory data in the cache storage module is handled as follows:

In some embodiments, all valid-bit data of the cache storage module can be set to the invalid state, that is, the cache storage module is restored to its initial enabled state.

In other embodiments, unlike the above, the memory data of the corresponding cache lines may be retained so that it can be reused even after the mapping architecture is adjusted, which improves data access efficiency. In the following examples the memory size is 2048 B, and the memory data of each cache set or cache line is handled as follows:

1) On the premise that the cache length L is unchanged:

1.1 When the set count M is unchanged and the line count N increases, the memory data of each newly added cache line is configured as empty with its valid-bit data configured as 0, while the memory data and tag data of the original cache lines are kept unchanged.

For example, continuing with fig. 4a, the cache architecture parameters are {M = 4, N = 3, L = 128}. Taking set 0 of the cache storage module as an example, set 0 contains 3 cache lines; after this method adds 1 cache line to each set, N = 4. The memory data and tag data of the 0th, 1st and 2nd cache lines remain unchanged, and the memory data of the 3rd cache line is empty.

It is understood that even if a newly added cache line is physically separated from the original cache lines on the circuit, i.e. not contiguous, this embodiment can group them into the same cache set at the software level through software logic. For example, the 3rd cache line may not be contiguous with the 0th, 1st or 2nd cache line, yet this embodiment can still place all four in the same cache set.

1.2 When the set count M is unchanged and the line count N decreases, the memory data of each removed cache line is configured as empty with its valid-bit data configured as 0, while the memory data and tag data of the remaining original cache lines are kept unchanged.

For example, continuing with fig. 4a, the cache architecture parameters are {M = 4, N = 3, L = 128}. Taking set 0 of the cache storage module as an example, set 0 contains 3 cache lines; after this method removes 1 cache line from each set, N = 2. The memory data and tag data of the 0th and 1st cache lines remain unchanged, and the memory data of the 2nd cache line is empty.

1.3 When the line count N is unchanged and the set count M increases, the memory data of each newly added cache set is configured as empty. The memory data of each cache line in the original cache sets is kept unchanged, but the hit determination module checks, in turn, whether the tag data of each original cache line still conforms to the address mapping relationship under the adjusted mapping architecture. If it does, the valid-bit data is kept unchanged, i.e. remains in the valid state; if not, the tag data is kept unchanged but the valid-bit data is configured to the invalid state.

For example, continuing with fig. 4a, the cache architecture parameters are {M = 4, N = 3, L = 128}. When 4 cache sets are added, the parameters become {M = 8, N = 3, L = 128} and the mapping architecture between the cache storage module and the memory changes accordingly: the memory is now divided into 2 regions of 8 memory blocks each.

Taking a cache line of set 0 in fig. 4a as an example, assume its tag data is the region-number address of block 0 of region 0 of the memory shown in fig. 4a. Under the adjusted mapping architecture, that tag data still corresponds to the region-number address of block 0 of region 0, so the tag data conforms to the address mapping relationship of the adjusted architecture, and both the tag data and the valid-bit data of this cache line are kept unchanged.

However, taking the 1st cache line of set 1 in fig. 4a as an example, assume its tag data is the region-number address of block 1 of region 3 of the memory shown in fig. 4a. Under the adjusted mapping architecture, block 1 of region 3 of fig. 4a becomes block 5 of region 1, and according to the set-associative mapping architecture the memory data of block 5 of region 1 must map to a cache line of set 5. The tag data of this cache line therefore no longer conforms to the address mapping relationship of the adjusted architecture, and its valid-bit data is configured to the invalid state.
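The tag check in cases 1.3 and 1.4 reduces to comparing the set a block occupies before and after the change of M. A minimal sketch, assuming the patent's scheme in which a block's absolute index modulo the set count selects its set:

```python
def line_survives(region, block, old_sets, new_sets):
    """Decide whether a cached block still maps to the same set after the
    set count changes. The absolute block index determines the old set
    (index % old_sets) and the new set (index % new_sets); the line's
    valid bit is kept only if the two coincide."""
    abs_index = region * old_sets + block
    return abs_index % old_sets == abs_index % new_sets

# Fig. 4a examples with M going from 4 to 8 sets:
print(line_survives(0, 0, 4, 8))  # region 0, block 0: still set 0 -> True
print(line_survives(3, 1, 4, 8))  # region 3, block 1: set 1 vs set 5 -> False
```

The second call reproduces the example above: absolute block 13 maps to set 1 under M = 4 but to set 5 under M = 8, so the line is invalidated.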

1.4 When the line count N is unchanged and the set count M decreases, the memory data of each removed cache set is configured as empty. The memory data of each cache line in the remaining original cache sets is kept unchanged, but the hit determination module checks, in turn, whether the tag data of each original cache line still conforms to the address mapping relationship under the adjusted mapping architecture. If it does, the valid-bit data is kept unchanged, i.e. remains in the valid state; if not, the tag data is kept unchanged but the valid-bit data is configured to the invalid state. The detailed principle is as described above and is not repeated here.

1.5 When the set count M increases and the line count N increases, the memory data of each newly added cache set and each newly added cache line is configured as empty, and the memory data of each cache line in the original cache sets is kept unchanged. The hit determination module then applies the same tag check as above: the valid-bit data remains in the valid state when the tag data conforms to the adjusted address mapping relationship, and is configured to the invalid state otherwise, with the tag data kept unchanged in either case.

1.6 When the set count M increases and the line count N decreases, the memory data of each newly added cache set and each removed cache line is configured as empty, and the memory data of each cache line in the original cache sets is kept unchanged. The hit determination module applies the same tag check, keeping the valid-bit data valid when the tag data conforms to the adjusted address mapping relationship and configuring it invalid otherwise.

1.7 When the set count M decreases and the line count N increases, the memory data of each removed cache set and each newly added cache line is configured as empty, and the memory data of each cache line in the remaining original cache sets is kept unchanged. The hit determination module applies the same tag check, keeping the valid-bit data valid when the tag data conforms to the adjusted address mapping relationship and configuring it invalid otherwise.

1.8 When the set count M decreases and the line count N decreases, the memory data of each removed cache set and each removed cache line is configured as empty, and the memory data of each cache line in the remaining original cache sets is kept unchanged. The hit determination module applies the same tag check, keeping the valid-bit data valid when the tag data conforms to the adjusted address mapping relationship and configuring it invalid otherwise.

2) On the premise that the cache length L is changed:

2.1 When the set count M is unchanged and the line count N increases or decreases: the data in each newly added cache line is empty, and the data in each removed cache line is empty.

For an original cache line, when the cache length L increases: the data of the newly added portion is empty and the original data is kept unchanged. The bit width of the tag data in the cache line is reduced in step with the increase of L, the length of the valid flag bits is increased, the original valid flag bits are kept unchanged, the newly added valid flag bits are set to 0, and the original data remains valid.

For example, referring to fig. 6b, the cache architecture parameters are {M = 2, N = 2, L = 16}, where the valid flag bits are 16 bits long (one valid flag bit per cached byte) and the tag data is 8 bits wide. The memory is divided into 256 regions of 2 memory blocks each, each block 16 bytes in size, and the tag data maps to the block address of the corresponding memory block: for example, the block address of block 0 of region 0 is h00 (hexadecimal), block 1 of region 0 is h01, block 0 of region 1 is h02, and block 1 of region 1 is h03.

When the cache length L becomes 32 bytes and each cache set gains 2 cache lines, the cache architecture parameters are updated to {M = 2, N = 4, L = 32}, the tag data width is updated to 7 bits, the valid flag bits are extended to 32 bits, and the newly added data is empty, as shown in fig. 6b.
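The metadata arithmetic of fig. 6b can be sketched as follows. The 8192-byte memory size is inferred from the 256 regions × 2 blocks × 16 bytes stated above, and the one-valid-flag-bit-per-byte layout follows the figure:

```python
def line_metadata(mem_size, num_sets, line_len):
    """Tag width and per-byte valid-bitmap length of one cache line,
    assuming one valid flag bit per cached byte as in fig. 6b and
    power-of-two sizes throughout."""
    addr_bits = mem_size.bit_length() - 1      # total memory address bits
    internal_bits = line_len.bit_length() - 1  # byte offset within a block
    block_bits = num_sets.bit_length() - 1     # block number selects the set
    tag_bits = addr_bits - block_bits - internal_bits
    valid_bits = line_len                      # one flag bit per byte
    return tag_bits, valid_bits

print(line_metadata(8192, 2, 16))  # (8, 16): the original fig. 6b layout
print(line_metadata(8192, 2, 32))  # (7, 32): L doubled -> tag shrinks by 1 bit
```

This reproduces the update in the text: growing L by one power of two removes one bit from the tag and doubles the valid-flag length.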

For an original cache line, when the cache length L decreases: the data of the removed portion becomes invalid and the remaining original data is kept unchanged. The bit width of the tag data in the cache line is increased in step with the reduction of L, the length of the valid flag bits is reduced, the remaining valid flag bits are kept unchanged, and the remaining data remains valid.

2.2 When the set count M increases or decreases and the line count N is unchanged: the data in the cache lines of each newly added cache set is empty, and the data in the cache lines of each removed cache set is empty. The data in the cache lines of the original cache sets is kept unchanged, but for each original cache line it is checked whether its tag data conforms to the address mapping relationship under the adjusted mapping architecture: if so, the dirty flag bit in the cache line is set to 0 and the data is valid; otherwise, the dirty flag bit is set to 1 and the data is invalid.

For an original cache line, when the cache length L increases: the data of the newly added portion is empty and the original data is kept unchanged. The bit width of the tag data is reduced in step with the increase of L, the length of the valid flag bits is increased, the original valid flag bits are kept unchanged, the newly added valid flag bits are set to 0, and whether the original data is valid depends on the value of the dirty flag bit.

For an original cache line, when the cache length L decreases: the data of the removed portion becomes invalid and the remaining original data is kept unchanged. The bit width of the tag data is increased in step with the reduction of L, the length of the valid flag bits is reduced, the remaining valid flag bits are kept unchanged, and whether the remaining data is valid depends on the value of the dirty flag bit.

2.3 When the set count M increases or decreases and the line count N also increases or decreases: the checks of cases 2.1 and 2.2 are applied in sequence.

In summary, with the embodiments described above, the data of the original cache lines can be retained without reloading, which improves data access efficiency.

In some embodiments, to make the adjusted cache architecture parameters better fit the current application scenario, the cache architecture parameters may be further optimized. Referring to fig. 7a, S31 includes configuring multiple groups of cache architecture parameters, and the corresponding cache mapping architecture dynamic adjustment method S300 further includes:

S33, sequentially obtaining the running effect of the software program under the mapping architecture corresponding to each group of cache architecture parameters;

S34, determining the optimal architecture parameters according to the running effects;

S35, sending the optimal architecture parameters to the cache controller, so that the cache controller dynamically adjusts the mapping architecture of the cache storage module to the memory according to the optimal architecture parameters.

By way of example and not limitation, the running effect characterizes the efficiency of executing the software program under the adjusted mapping architecture, and may be measured by running time or other reasonable parameters.

By way of example and not limitation, the optimal architecture parameters are the cache architecture parameters under which the master device executes the software program most efficiently.

With this method, the cache architecture parameters can be optimized so that the master device executes the software program efficiently under the adjusted mapping architecture.

In some embodiments, referring to fig. 7b, the running effect includes the running time of the software program, and S34 includes:

S341, finding the minimum running time among the running times;

S342, determining the cache architecture parameters corresponding to the minimum running time as the optimal architecture parameters.

This embodiment screens the cache architecture parameters using the running time as an evaluation factor, which helps to accurately select the cache architecture parameters that improve data access efficiency.
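Steps S33 through S35 amount to a brute-force search over the candidate parameter groups. A hypothetical sketch, in which a plain function stands in for reconfiguring the cache controller and running the software program:

```python
import time

def pick_best_parameters(candidates, run_program):
    """Sketch of S33-S35: time the workload under each candidate cache
    architecture parameter group and keep the fastest. `run_program` is a
    stand-in for reconfiguring the controller and executing the software."""
    best_params, best_time = None, float("inf")
    for params in candidates:
        start = time.perf_counter()
        run_program(params)                  # S33: run under this mapping
        elapsed = time.perf_counter() - start
        if elapsed < best_time:              # S341: track the minimum time
            best_params, best_time = params, elapsed
    return best_params                       # S342: the optimal parameters

# Toy stand-in workload: pretend higher associativity runs faster.
candidates = [{"M": 4, "N": 3, "L": 128}, {"M": 4, "N": 4, "L": 128}]
workload = lambda p: time.sleep(0.05 / p["N"])
print(pick_best_parameters(candidates, workload))
```

In a real system the loop body would issue the parameter group to the cache controller (S32) before each run; the toy workload only illustrates the selection logic.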

It should be noted that the above steps do not necessarily follow a fixed order. As those skilled in the art will understand from the description of the embodiments of the present invention, the steps may be executed in different orders in different embodiments, for example in parallel or interchanged.

Referring to fig. 8, fig. 8 is a schematic circuit structure diagram of an electronic device according to an embodiment of the present invention. The electronic device may be a device, a chip, or an electronic product having logical computation and analysis capabilities. As shown in fig. 8, the electronic device 800 includes one or more processors 81 and a memory 82. In fig. 8, one processor 81 is taken as an example.

The processor 81 and the memory 82 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.

The memory 82, as a non-volatile computer-readable storage medium, may be used to store a non-volatile software program, a non-volatile computer-executable program, and modules, such as program instructions/modules corresponding to the dynamic adjustment method for the cache mapping architecture in the embodiment of the present invention. The processor 81 executes the functions of the respective modules or units of the cache mapping architecture dynamic adjustment method provided by the above-described method embodiments by running the nonvolatile software program, instructions, and modules stored in the memory 82.

The memory 82 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state memory device. In some embodiments, the memory 82 may optionally include memory located remotely from the processor 81, which may be connected to the processor 81 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The program instructions/modules are stored in the memory 82, and when executed by the one or more processors 81, perform the cache mapping architecture dynamic adjustment method in any of the above-described method embodiments.

An embodiment of the present invention further provides a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more processors, for example, one processor 81 in fig. 8, so that the one or more processors may execute the cache mapping architecture dynamic adjustment method in any method embodiment.

An embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, and the computer program includes program instructions, and when the program instructions are executed by an electronic device, the electronic device is enabled to execute any one of the cache mapping architecture dynamic adjustment methods.

The above-described embodiments of the apparatus or device are merely illustrative. The unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units; they may be located in one place or distributed across a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly also by hardware. Based on such understanding, the above technical solutions, or the parts thereof contributing to the related art, may essentially be embodied in the form of a software product. The software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments or in some parts of the embodiments.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist as described above, which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
