File pre-reading cache allocation method and device

Document No.: 421239    Publication date: 2021-12-21

Note: This technology, "File pre-reading cache allocation method and device" (一种文件预读缓存分配方法和装置), was designed and created by 张亚东, 王帅阳 and 穆向东 on 2021-08-27. Abstract: The invention discloses a file read-ahead cache allocation method and apparatus. The method comprises: determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement; in response to determining that it cannot, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the reduced threshold can meet the read-ahead requirement; and, in response to determining that it can, providing each file with a read-ahead cache that satisfies the reduced threshold, so as to concurrently buffer-read the files. The invention can allocate the read-ahead cache in a balanced manner under high concurrency, improve cache efficiency and read-ahead effectiveness, and increase read bandwidth, thereby improving system performance.

1. A file read-ahead cache allocation method, characterized by comprising the following steps:

determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

2. The method of claim 1, wherein determining whether the existing read-ahead cache can meet the concurrency requirement comprises: determining whether the total read-ahead cache size exceeds the product of the number of concurrent files and the file maximum cache threshold; determining that the existing read-ahead cache can meet the concurrency requirement in response to the total read-ahead cache size exceeding that product; and determining that the existing read-ahead cache cannot meet the concurrency requirement in response to the total read-ahead cache size not exceeding that product.

3. The method of claim 1, wherein, in response to determining that the existing read-ahead cache can meet the concurrency requirement, each file is provided with a read-ahead cache that satisfies the unreduced file maximum cache threshold, so as to concurrently buffer-read the files.

4. The method of claim 1, wherein determining whether the file maximum cache threshold can meet the read-ahead requirement comprises: determining whether the file maximum cache threshold is greater than the minimum cache; determining that the file maximum cache threshold can meet the read-ahead requirement in response to it being greater than the minimum cache; and determining that it cannot meet the read-ahead requirement in response to it not being greater than the minimum cache.

5. The method of claim 1, further comprising the following step:

in response to determining that the file maximum cache threshold cannot meet the read-ahead requirement, providing all current concurrent files with read-ahead caches that satisfy the unreduced file maximum cache threshold so as to concurrently buffer-read those files, and providing newly added files with no read-ahead cache, so that they read concurrently by direct reads.

6. The method of claim 1, further comprising the following steps:

determining, based on the total read-ahead cache size, the allocated read-ahead cache size, and the file maximum cache threshold, whether the existing read-ahead cache can meet the dynamic management requirement;

in response to determining that the existing read-ahead cache can meet the dynamic management requirement, updating the file maximum cache threshold when the number of concurrent files increases, and continually releasing, within a threshold wait time, all read-ahead caches of files whose read-ahead is hit;

in response to determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold, providing the newly added file with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

7. The method of claim 6, wherein determining whether the existing read-ahead cache can meet the dynamic management requirement comprises: determining whether the allocated read-ahead cache size exceeds the difference between the total read-ahead cache size and the file maximum cache threshold; determining that the existing read-ahead cache can meet the dynamic management requirement in response to the allocated read-ahead cache size exceeding that difference; and determining that it cannot in response to the allocated read-ahead cache size not exceeding that difference.

8. The method of claim 6, wherein determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold comprises: determining that, after the threshold wait time, the total read-ahead cache size minus the allocated read-ahead cache size, plus the released read-ahead cache size, exceeds the file maximum cache threshold.

9. The method of claim 6, wherein, in response to determining that the read-ahead cache remaining after the threshold wait time does not exceed the file maximum cache threshold, the read-ahead cache of every file whose read-ahead cache exceeds the updated file maximum cache threshold is forcibly released down to the updated threshold, and the newly added file is provided with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

10. A file read-ahead cache allocation apparatus, comprising:

a processor;

a memory storing program code executable by the processor, wherein the processor performs the following steps when executing the program code:

determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

Technical Field

The present invention relates to the field of caching, and more particularly to a file read-ahead cache allocation method and apparatus.

Background

At present, common read-ahead algorithms do not manage cache allocation; they merely limit the read-ahead range, and thereby the read-ahead cache length, according to system settings such as the maximum and minimum read-ahead lengths. When the cache is fixed and the number of concurrent files is large, contention for cache resources arises, cache management becomes disordered, and the read-ahead effectiveness of the files suffers. During this contention, some files obtain no cache resources at all while the read-ahead caches of other files reach the configured maximum, which undermines reasonable use of the cache and reduces its efficiency.

No effective solution has yet been proposed for these prior-art problems of uneven read-ahead cache allocation and reduced cache efficiency.

Disclosure of Invention

In view of this, an object of the embodiments of the present invention is to provide a file read-ahead cache allocation method and apparatus that can allocate the read-ahead cache in a balanced manner under high concurrency, improve cache efficiency and read-ahead effectiveness, and increase read bandwidth, thereby improving system performance.

Based on the above object, a first aspect of the embodiments of the present invention provides a file read-ahead cache allocation method, including the following steps:

determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the concurrency requirement includes: determining whether the total read-ahead cache size exceeds the product of the number of concurrent files and the file maximum cache threshold; determining that the existing read-ahead cache can meet the concurrency requirement in response to the total read-ahead cache size exceeding that product; and determining that it cannot in response to the total read-ahead cache size not exceeding that product.

In some embodiments, in response to determining that the existing read-ahead cache can meet the concurrency requirement, each file is provided with a read-ahead cache that satisfies the unreduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the file maximum cache threshold can meet the read-ahead requirement includes: determining whether the file maximum cache threshold is greater than the minimum cache; determining that it can meet the read-ahead requirement in response to it being greater than the minimum cache; and determining that it cannot in response to it not being greater than the minimum cache.

In some embodiments, the method further includes the following step:

in response to determining that the file maximum cache threshold cannot meet the read-ahead requirement, providing all current concurrent files with read-ahead caches that satisfy the unreduced file maximum cache threshold so as to concurrently buffer-read those files, and providing newly added files with no read-ahead cache, so that they read concurrently by direct reads.

In some embodiments, the method further includes the following steps:

determining, based on the total read-ahead cache size, the allocated read-ahead cache size, and the file maximum cache threshold, whether the existing read-ahead cache can meet the dynamic management requirement;

in response to determining that the existing read-ahead cache can meet the dynamic management requirement, updating the file maximum cache threshold when the number of concurrent files increases, and continually releasing, within a threshold wait time, all read-ahead caches of files whose read-ahead is hit;

in response to determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold, providing the newly added file with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the dynamic management requirement includes: determining whether the allocated read-ahead cache size exceeds the difference between the total read-ahead cache size and the file maximum cache threshold; determining that the existing read-ahead cache can meet the dynamic management requirement in response to the allocated read-ahead cache size exceeding that difference; and determining that it cannot in response to the allocated read-ahead cache size not exceeding that difference.

In some implementations, determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold includes: determining that, after the threshold wait time, the total read-ahead cache size minus the allocated read-ahead cache size, plus the released read-ahead cache size, exceeds the file maximum cache threshold.

In some embodiments, in response to determining that the read-ahead cache remaining after the threshold wait time does not exceed the file maximum cache threshold, the read-ahead cache of every file whose read-ahead cache exceeds the updated file maximum cache threshold is forcibly released down to the updated threshold, and the newly added file is provided with a read-ahead cache that satisfies the updated threshold, so as to concurrently buffer-read the files.

A second aspect of the embodiments of the present invention provides a file read-ahead cache allocation apparatus, including:

a processor;

a memory storing program code executable by the processor, wherein the processor performs the following steps when executing the program code:

determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the concurrency requirement includes: determining whether the total read-ahead cache size exceeds the product of the number of concurrent files and the file maximum cache threshold; determining that the existing read-ahead cache can meet the concurrency requirement in response to the total read-ahead cache size exceeding that product; and determining that it cannot in response to the total read-ahead cache size not exceeding that product.

In some embodiments, in response to determining that the existing read-ahead cache can meet the concurrency requirement, each file is provided with a read-ahead cache that satisfies the unreduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the file maximum cache threshold can meet the read-ahead requirement includes: determining whether the file maximum cache threshold is greater than the minimum cache; determining that it can meet the read-ahead requirement in response to it being greater than the minimum cache; and determining that it cannot in response to it not being greater than the minimum cache.

In some embodiments, the processor further performs the following step when executing the program code:

in response to determining that the file maximum cache threshold cannot meet the read-ahead requirement, providing all current concurrent files with read-ahead caches that satisfy the unreduced file maximum cache threshold so as to concurrently buffer-read those files, and providing newly added files with no read-ahead cache, so that they read concurrently by direct reads.

In some embodiments, the processor further performs the following steps when executing the program code:

determining, based on the total read-ahead cache size, the allocated read-ahead cache size, and the file maximum cache threshold, whether the existing read-ahead cache can meet the dynamic management requirement;

in response to determining that the existing read-ahead cache can meet the dynamic management requirement, updating the file maximum cache threshold when the number of concurrent files increases, and continually releasing, within a threshold wait time, all read-ahead caches of files whose read-ahead is hit;

in response to determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold, providing the newly added file with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the dynamic management requirement includes: determining whether the allocated read-ahead cache size exceeds the difference between the total read-ahead cache size and the file maximum cache threshold; determining that the existing read-ahead cache can meet the dynamic management requirement in response to the allocated read-ahead cache size exceeding that difference; and determining that it cannot in response to the allocated read-ahead cache size not exceeding that difference.

In some implementations, determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold includes: determining that, after the threshold wait time, the total read-ahead cache size minus the allocated read-ahead cache size, plus the released read-ahead cache size, exceeds the file maximum cache threshold.

In some embodiments, in response to determining that the read-ahead cache remaining after the threshold wait time does not exceed the file maximum cache threshold, the read-ahead cache of every file whose read-ahead cache exceeds the updated file maximum cache threshold is forcibly released down to the updated threshold, and the newly added file is provided with a read-ahead cache that satisfies the updated threshold, so as to concurrently buffer-read the files.

The invention has the following beneficial technical effects: the file read-ahead cache allocation method and apparatus provided by the embodiments of the present invention determine, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement; in response to determining that it cannot, reduce the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determine, based on the minimum cache, whether the threshold can meet the read-ahead requirement; and, in response to determining that it can, provide each file with a read-ahead cache that satisfies the reduced threshold so as to concurrently buffer-read the files. This technical scheme allocates the read-ahead cache in a balanced manner under high concurrency, improves cache efficiency and read-ahead effectiveness, and increases read bandwidth, thereby improving system performance.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic flow chart of the file read-ahead cache allocation method provided by the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.

It should be noted that all uses of "first" and "second" in the embodiments of the present invention serve to distinguish two entities of the same name, or two parameters, that are not identical. "First" and "second" are used merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and subsequent embodiments do not repeat this note.

Based on the above object, a first aspect of the embodiments of the present invention provides an embodiment of a file read-ahead cache allocation method that allocates read-ahead caches in a balanced manner under high concurrency, so as to improve cache efficiency and read-ahead effectiveness, increase read bandwidth, and thereby improve system performance. Fig. 1 is a schematic flow chart of the file read-ahead cache allocation method provided by the present invention.

As shown in Fig. 1, the file read-ahead cache allocation method includes the following steps:

step S101, determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

step S103, in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

step S105, in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

The method dynamically adjusts cache resources according to the number of concurrent files and the total read-ahead cache size set by the system, so that every file receives a cache allocation. This improves the utilization efficiency of cache resources and the reasonableness of read-ahead cache allocation, and ultimately raises the overall read bandwidth of the files.
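For concreteness, the decision flow of steps S101 to S105 can be sketched in C as below. The names readahead_size, file_num and all_rbytes are taken from the specific example later in this description; treating min_all_rbytes as the "minimum cache", and the function itself, are illustrative assumptions rather than code from the patent.

    #include <stddef.h>

    /* Returns the read-ahead cache budget granted to each concurrent file,
     * or 0 when even the reduced threshold cannot meet the read-ahead
     * requirement and newly added files must fall back to direct reads. */
    size_t decide_per_file_budget(size_t readahead_size,  /* total read-ahead cache */
                                  size_t file_num,        /* concurrent file count  */
                                  size_t all_rbytes,      /* per-file max threshold */
                                  size_t min_all_rbytes)  /* minimum cache          */
    {
        /* S101: can the existing cache meet the concurrency requirement? */
        if (readahead_size > file_num * all_rbytes)
            return all_rbytes;              /* yes: keep the unreduced threshold */

        /* S103: lower the threshold to total / concurrency ... */
        if (file_num > 0)
            all_rbytes = readahead_size / file_num;

        /* ... and check the lowered threshold against the minimum cache. */
        if (all_rbytes > min_all_rbytes)
            return all_rbytes;              /* S105: grant the reduced threshold */

        return 0;                           /* too small: direct reads instead */
    }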


The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In some embodiments, determining whether the existing read-ahead cache can meet the concurrency requirement includes: determining whether the total read-ahead cache size exceeds the product of the number of concurrent files and the file maximum cache threshold; determining that the existing read-ahead cache can meet the concurrency requirement in response to the total read-ahead cache size exceeding that product; and determining that it cannot in response to the total read-ahead cache size not exceeding that product.

In some embodiments, in response to determining that the existing read-ahead cache can meet the concurrency requirement, each file is provided with a read-ahead cache that satisfies the unreduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the file maximum cache threshold can meet the read-ahead requirement includes: determining whether the file maximum cache threshold is greater than the minimum cache; determining that it can meet the read-ahead requirement in response to it being greater than the minimum cache; and determining that it cannot in response to it not being greater than the minimum cache.

In some embodiments, the method further includes the following step: in response to determining that the file maximum cache threshold cannot meet the read-ahead requirement, providing all current concurrent files with read-ahead caches that satisfy the unreduced file maximum cache threshold so as to concurrently buffer-read those files, and providing newly added files with no read-ahead cache, so that they read concurrently by direct reads.

In some embodiments, the method further includes the following steps:

determining, based on the total read-ahead cache size, the allocated read-ahead cache size, and the file maximum cache threshold, whether the existing read-ahead cache can meet the dynamic management requirement;

in response to determining that the existing read-ahead cache can meet the dynamic management requirement, updating the file maximum cache threshold when the number of concurrent files increases, and continually releasing, within a threshold wait time, all read-ahead caches of files whose read-ahead is hit;

in response to determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold, providing the newly added file with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the dynamic management requirement includes: determining whether the allocated read-ahead cache size exceeds the difference between the total read-ahead cache size and the file maximum cache threshold; determining that the existing read-ahead cache can meet the dynamic management requirement in response to the allocated read-ahead cache size exceeding that difference; and determining that it cannot in response to the allocated read-ahead cache size not exceeding that difference.

In some implementations, determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold includes: determining that, after the threshold wait time, the total read-ahead cache size minus the allocated read-ahead cache size, plus the released read-ahead cache size, exceeds the file maximum cache threshold.
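Written out, the check above can be read as the inequality readahead_size − allocated + released > all_rbytes, where allocated is the read-ahead cache still held and released is the cache freed during the wait; both names are illustrative, not from the patent. A one-line C sketch:

    #include <stdbool.h>
    #include <stddef.h>

    /* Remaining-cache check after the threshold wait time readahead_t.
     * Assumes allocated <= readahead_size so the subtraction cannot wrap. */
    static bool remaining_exceeds_threshold(size_t readahead_size, /* total      */
                                            size_t allocated,      /* still held */
                                            size_t released,       /* freed      */
                                            size_t all_rbytes)     /* threshold  */
    {
        return readahead_size - allocated + released > all_rbytes;
    }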

In some embodiments, in response to determining that the read-ahead cache remaining after the threshold wait time does not exceed the file maximum cache threshold, the read-ahead cache of every file whose read-ahead cache exceeds the updated file maximum cache threshold is forcibly released down to the updated threshold, and the newly added file is provided with a read-ahead cache that satisfies the updated threshold, so as to concurrently buffer-read the files.

The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.

The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions described herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.

The cache is set dynamically according to the number of concurrent files, and the file maximum read-ahead cache threshold is modified dynamically. Dynamic cache management mainly releases cache: file read-ahead cache exceeding the threshold is evicted, and if it has not been evicted to within the specified range within the specified time, it is evicted forcibly. Embodiments of the invention are further illustrated below with a specific example.

Set an atomic counter file_num for the number of concurrent files, a global per-file read-ahead cache maximum threshold all_rbytes, a per-file read-ahead cache size readahead_bytes, a wait timeout readahead_t, and a minimum read-ahead size readahead_min. If all_rbytes × file_num > readahead_size, then all_rbytes ← readahead_size ÷ file_num.

If all_rbytes < readahead_min, then readahead_min is set to all_rbytes. A min_all_rbytes parameter is also set, and all_rbytes must remain no less than min_all_rbytes; if all_rbytes < min_all_rbytes, newly added concurrent files are not read ahead and a direct-read mode is selected instead. Here all_rbytes is a global variable, effective for the whole file system, that sets the read-ahead cache maximum threshold of each file, and readahead_size is the total amount of cache the system sets aside for read-ahead.
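The bookkeeping described above can be sketched in C as follows. The patent names the variables but gives no code, so the helper functions update_threshold and new_file_uses_readahead are illustrative assumptions.

    #include <stdatomic.h>
    #include <stddef.h>

    static atomic_size_t file_num;    /* number of concurrent files (atomic operand) */
    static size_t readahead_size;     /* total cache the system grants to read-ahead */
    static size_t all_rbytes;         /* per-file read-ahead cache maximum threshold */
    static size_t readahead_min;      /* minimum read-ahead size                     */
    static size_t min_all_rbytes;     /* lower bound that all_rbytes must not cross  */

    /* Re-derive the per-file threshold when the concurrency changes. */
    static void update_threshold(void)
    {
        size_t n = atomic_load(&file_num);
        if (n > 0 && all_rbytes * n > readahead_size)
            all_rbytes = readahead_size / n;
        if (all_rbytes < readahead_min)
            readahead_min = all_rbytes;   /* clamp the minimum to the threshold */
    }

    /* Newly added files skip read-ahead and use direct reads once
     * the threshold falls below min_all_rbytes. */
    static int new_file_uses_readahead(void)
    {
        return all_rbytes >= min_all_rbytes;
    }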

Each file tracks its current read-ahead cache size readahead_bytes in real time; the file can keep reading ahead until its read-ahead cache upper limit is reached, and when a read-ahead hit is processed the read-ahead flow is triggered again to prefetch further data. The file's read-ahead margin is left = all_rbytes − readahead_bytes.

If left > readahead_min, the file's read-ahead length is readahead_len = (left ÷ readahead_min) × readahead_min, that is, left rounded down to a whole multiple of readahead_min; if left < readahead_min, the file is no longer read ahead. After new files join the concurrency, the files' read-ahead caches are adjusted dynamically: cache exceeding the threshold is trimmed and released, and the released cache is granted to the newly added files.
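A short C sketch of that length computation; reading the garbled formula as rounding down to a multiple of readahead_min is the natural interpretation, and the function name is illustrative.

    #include <stddef.h>

    /* left = all_rbytes - readahead_bytes is the file's remaining budget.
     * Read ahead in whole multiples of readahead_min; below one unit,
     * stop reading ahead for this file. */
    static size_t next_readahead_len(size_t left, size_t readahead_min)
    {
        if (left < readahead_min)
            return 0;                                   /* no further read-ahead */
        return (left / readahead_min) * readahead_min;  /* round down */
    }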

After a new concurrent file is added, all_rbytes is adjusted, and file read-ahead cache exceeding all_rbytes is released. Specifically, such a file is not read ahead for the time being and its surplus cache is not released immediately; the cache is released gradually as hits occur, and read-ahead resumes once the file's cache size drops below all_rbytes. If a file's read-ahead cache has not been released down to at most all_rbytes within the specified time readahead_t, it is forcibly released: a tail segment of length (readahead_bytes − all_rbytes) is released to the head of the reclaim list, and that list is evicted from its tail by age.
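The forced release after the timeout might look as follows in C; this is a sketch, and struct file_cache together with the reclaim-list counter are illustrative stand-ins, since the patent describes the linked list but not its interface.

    #include <stddef.h>

    struct file_cache {
        size_t readahead_bytes;       /* read-ahead cache currently held by this file */
    };

    static size_t reclaim_list_bytes; /* stand-in for the reclaim linked list, whose
                                         entries are evicted from the tail by age */

    /* Forcibly trim a file that is still above the updated threshold
     * after the timeout readahead_t has elapsed. */
    static void force_trim(struct file_cache *f, size_t all_rbytes)
    {
        if (f->readahead_bytes <= all_rbytes)
            return;                                  /* already within the threshold */
        size_t excess = f->readahead_bytes - all_rbytes;
        f->readahead_bytes = all_rbytes;             /* cut the tail segment */
        reclaim_list_bytes += excess;                /* push to the reclaim list head */
    }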

Furthermore, the method disclosed according to an embodiment of the present invention may also be implemented as a computer program executed by a CPU, and the computer program may be stored in a computer-readable storage medium. The computer program, when executed by the CPU, performs the above-described functions defined in the method disclosed in the embodiments of the present invention. The above-described method steps and system elements may also be implemented using a controller and a computer-readable storage medium for storing a computer program for causing the controller to implement the functions of the above-described steps or elements.

It can be seen from the foregoing embodiments that the file read-ahead cache allocation method provided by the embodiments of the present invention determines, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement; in response to determining that it cannot, reduces the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determines, based on the minimum cache, whether the threshold can meet the read-ahead requirement; and, in response to determining that it can, provides each file with a read-ahead cache that satisfies the reduced threshold so as to concurrently buffer-read the files. This technical scheme allocates the read-ahead cache in a balanced manner under high concurrency, improves cache efficiency and read-ahead effectiveness, and increases read bandwidth, thereby improving system performance.

It should be particularly noted that the steps in the embodiments of the file read-ahead cache allocation method described above may be interchanged, replaced, added, or deleted. Such reasonable permutations and transformations of the method therefore also fall within the scope of the present invention, and the scope should not be limited to the described embodiments.

In view of the foregoing, a second aspect of the embodiments of the present invention provides an embodiment of a file read-ahead cache allocation apparatus that allocates read-ahead caches in a balanced manner under high concurrency, so as to improve cache efficiency and read-ahead effectiveness, increase read bandwidth, and thereby improve system performance. The apparatus comprises:

a processor;

a memory storing program code executable by the processor, wherein the processor performs the following steps when executing the program code:

determining, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement;

in response to determining that the existing read-ahead cache cannot meet the concurrency requirement, reducing the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determining, based on the minimum cache, whether the file maximum cache threshold can meet the read-ahead requirement;

in response to determining that the file maximum cache threshold can meet the read-ahead requirement, providing each file with a read-ahead cache that satisfies the reduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the concurrency requirement includes: determining whether the total read-ahead cache size exceeds the product of the number of concurrent files and the file maximum cache threshold; determining that the existing read-ahead cache can meet the concurrency requirement in response to the total read-ahead cache size exceeding that product; and determining that it cannot in response to the total read-ahead cache size not exceeding that product.

In some embodiments, in response to determining that the existing read-ahead cache can meet the concurrency requirement, each file is provided with a read-ahead cache that satisfies the unreduced file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the file maximum cache threshold can meet the read-ahead requirement includes: determining whether the file maximum cache threshold is greater than the minimum cache; determining that it can meet the read-ahead requirement in response to it being greater than the minimum cache; and determining that it cannot in response to it not being greater than the minimum cache.

In some embodiments, the processor further performs the following step when executing the program code: in response to determining that the file maximum cache threshold cannot meet the read-ahead requirement, providing all current concurrent files with read-ahead caches that satisfy the unreduced file maximum cache threshold so as to concurrently buffer-read those files, and providing newly added files with no read-ahead cache, so that they read concurrently by direct reads.

In some embodiments, the processor further performs the following steps when executing the program code:

determining, based on the total read-ahead cache size, the allocated read-ahead cache size, and the file maximum cache threshold, whether the existing read-ahead cache can meet the dynamic management requirement;

in response to determining that the existing read-ahead cache can meet the dynamic management requirement, updating the file maximum cache threshold when the number of concurrent files increases, and continually releasing, within a threshold wait time, all read-ahead caches of files whose read-ahead is hit;

in response to determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold, providing the newly added file with a read-ahead cache that satisfies the updated file maximum cache threshold, so as to concurrently buffer-read the files.

In some embodiments, determining whether the existing read-ahead cache can meet the dynamic management requirement includes: determining whether the allocated read-ahead cache size exceeds the difference between the total read-ahead cache size and the file maximum cache threshold; determining that the existing read-ahead cache can meet the dynamic management requirement in response to the allocated read-ahead cache size exceeding that difference; and determining that it cannot in response to the allocated read-ahead cache size not exceeding that difference.

In some implementations, determining that the read-ahead cache remaining after the threshold wait time exceeds the file maximum cache threshold includes: determining that, after the threshold wait time, the total read-ahead cache size minus the allocated read-ahead cache size, plus the released read-ahead cache size, exceeds the file maximum cache threshold.

In some embodiments, in response to determining that the read-ahead cache remaining after the threshold wait time does not exceed the file maximum cache threshold, the read-ahead cache of every file whose read-ahead cache exceeds the updated file maximum cache threshold is forcibly released down to the updated threshold, and the newly added file is provided with a read-ahead cache that satisfies the updated threshold, so as to concurrently buffer-read the files.

The apparatuses and devices disclosed in the embodiments of the present invention may be various electronic terminal devices, such as a mobile phone, a personal digital assistant (PDA), a tablet computer (PAD), or a smart television, or may be a large-scale terminal device, and therefore the scope of protection disclosed in the embodiments of the present invention should not be limited to a specific type of apparatus or device. The client disclosed in the embodiments of the present invention may be applied to any of the above electronic terminal devices in the form of electronic hardware, computer software, or a combination of the two.

It can be seen from the foregoing embodiments that the file read-ahead cache allocation apparatus provided by the embodiments of the present invention determines, based on the total read-ahead cache size, the number of concurrent files, and the file maximum cache threshold, whether the existing read-ahead cache can meet the concurrency requirement; in response to determining that it cannot, reduces the file maximum cache threshold to the quotient of the total read-ahead cache size and the number of concurrent files, and further determines, based on the minimum cache, whether the threshold can meet the read-ahead requirement; and, in response to determining that it can, provides each file with a read-ahead cache that satisfies the reduced threshold so as to concurrently buffer-read the files. This technical scheme allocates the read-ahead cache in a balanced manner under high concurrency, improves cache efficiency and read-ahead effectiveness, and increases read bandwidth, thereby improving system performance.

It should be particularly noted that the apparatus embodiment above uses the embodiment of the file read-ahead cache allocation method to describe the working process of each module, and those skilled in the art can readily conceive of applying these modules to other embodiments of the method. Of course, since the steps in the method embodiment may be interchanged, replaced, added, or deleted, such reasonable permutations and transformations should also fall within the scope of the present invention, and the scope should not be limited to the described embodiment.

Embodiments of the invention may also include a corresponding computer device. The computer device comprises a memory, at least one processor, and a computer program stored on the memory and executable on the processor, the processor performing any of the above methods when executing the program.

The memory, as a non-volatile computer-readable storage medium, may be used to store a non-volatile software program, a non-volatile computer-executable program, and modules, such as program instructions/modules corresponding to the file pre-read cache allocation method in this embodiment of the present application. The processor executes various functional applications and data processing of the device by running the nonvolatile software program, instructions and modules stored in the memory, that is, the file pre-read cache allocation method of the above method embodiment is realized.

The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like. Embodiments of the computer program may achieve the same or similar effects as any of the preceding method embodiments to which it corresponds.

The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.

Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, technical features of the above embodiment or of different embodiments may be combined, and many other variations of the different aspects described above exist that are not detailed here for brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within their scope.
