Memory cache management method, multimedia server and computer storage medium

Document No.: 1435295  Publication date: 2020-03-20

Note: this technology, "Memory cache management method, multimedia server and computer storage medium", was created by 陈安庆 (Chen Anqing), 王日红 (Wang Rihong), and 张晓渠 (Zhang Xiaoqu) on 2018-09-12. Abstract: The embodiment of the invention discloses a memory cache management method, a multimedia server, and a computer storage medium. The method includes: receiving a memory application request; acquiring a target page with a set size before the current access position according to the memory application request; and, when it is determined that the target page has been obtained, storing the data corresponding to the memory application request in the target page and inserting the target page after the current access position.

1. A memory cache management method is characterized by comprising the following steps:

receiving a memory application request;

acquiring a target page with a set size before the current access position according to the memory application request;

and when it is determined that the target page has been obtained, storing the data corresponding to the memory application request in the target page, and inserting the target page after the current access position.

2. The memory cache management method according to claim 1, wherein after obtaining the target page with the set size before the current access position according to the memory application request, the method further comprises:

when it is determined that the target page has not been acquired, acquiring a cache corresponding to the memory application request through the operating system (OS) according to a set memory recovery algorithm.

3. The memory cache management method according to claim 1, wherein the receiving a memory application request comprises:

receiving a file pre-reading request, wherein the file pre-reading request carries size information of a memory block to be issued to a hard disk; or

receiving a file reading request, wherein the file reading request carries the set page size information.

4. The memory cache management method according to claim 1, wherein after the storing the data corresponding to the memory application request by using the target page, the method further comprises:

updating a reference count of the target page.

5. The memory cache management method according to claim 1, wherein before the obtaining of the target page with the set size before the current access position according to the memory application request, the method further comprises:

judging whether the memory application request is a memory application request for an audio/video file, and determining whether the current heat value of the audio/video file is smaller than a set heat range;

the obtaining of the target page with the set size before the current access position according to the memory application request includes:

when it is determined that the current heat value of the audio/video file is smaller than the set heat range, acquiring a target page with a set size before the current access position according to the memory application request.

6. The memory cache management method according to claim 5, wherein after determining whether the memory application request is a memory application request for an audio/video file, the method further comprises:

when the memory application request is a memory application request for an audio/video file and it is determined that the current heat value of the audio/video file satisfies the set heat range, obtaining the cache corresponding to the memory application request through the OS according to a set memory recovery algorithm.

7. The memory cache management method according to claim 5, wherein after determining whether the memory application request is a memory application request for an audio/video file, the method further comprises:

when it is determined that the memory application request is a memory application request for a non-audio/video file, obtaining a cache corresponding to the memory application request through the OS according to a set memory recovery algorithm.

8. The memory cache management method according to claim 5, wherein when it is determined that the memory application request is a memory application request for an audio/video file, acquiring a target page of a set size before a current access position according to the memory application request includes:

when it is determined that the memory application request is a memory application request for an audio/video file with a first definition, acquiring a target page with a first set size before the current access position according to the memory application request;

when it is determined that the memory application request is a memory application request for an audio/video file with a second definition, acquiring a target page with a second set size before the current access position according to the memory application request, wherein the first definition is higher than the second definition, and the first set size is larger than the second set size.

9. A multimedia server, comprising a processor and a memory for storing a computer program operable on the processor; wherein

the processor is configured to execute the memory cache management method according to any one of claims 1 to 8 when the computer program is executed.

10. A computer storage medium, in which a computer program is stored, which, when executed by a processor, implements the memory cache management method of any one of claims 1 to 8.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a memory cache management method, a multimedia server, and a computer storage medium.

Background

At present, when an audio/video file is played, it is usually read from the disk into memory, and the memory is not released immediately after the read completes. When a cache miss occurs, old cache pages must be swapped out, and the freed memory is then used to read and write new files.

However, memory capacity grows far more slowly than hard disk capacity, so cache replacement happens constantly. Video reading and writing is highly sensitive to latency, and when memory is insufficient, synchronous cache recovery must be triggered repeatedly to free memory; this synchronous waiting is fatal in scenarios with strict latency requirements. Moreover, in every current operating system implementation, cache recovery is a uniform action: whether the LFU algorithm or the LRU algorithm is used, management follows a single uniform policy, so the problem of cache recovery triggered by insufficient memory degrading user experience is especially serious in a multimedia server.

Disclosure of Invention

In order to solve the above technical problems, embodiments of the present invention provide a memory cache management method, a multimedia server, and a computer storage medium, which can reduce the consumption caused by cache recycling and lower file-read latency.

In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:

a memory cache management method comprises the following steps: receiving a memory application request; acquiring a target page with a set size before the current access position according to the memory application request; and when the target page is determined to be obtained, storing data corresponding to the memory application request by using the target page, and inserting the target page behind the current access position.

After acquiring the target page with the set size before the current access position according to the memory application request, the method further includes: when it is determined that the target page has not been acquired, acquiring a cache corresponding to the memory application request through the operating system (OS) according to a set memory recovery algorithm.

Wherein receiving the memory application request includes: receiving a file pre-reading request, the file pre-reading request carrying size information of a memory block to be issued to the hard disk; or receiving a file reading request, the file reading request carrying the set page size information.

After the data corresponding to the memory application request is stored by using the target page, the method further includes: updating a reference count of the target page.

Before acquiring the target page with the set size before the current access position according to the memory application request, the method further includes: judging whether the memory application request is a memory application request for an audio/video file, and determining whether the current heat value of the audio/video file is smaller than a set heat range. Acquiring the target page with the set size before the current access position according to the memory application request then includes: when it is determined that the current heat value of the audio/video file is smaller than the set heat range, acquiring a target page with a set size before the current access position according to the memory application request.

After judging whether the memory application request is a memory application request for an audio/video file, the method further includes: when the memory application request is a memory application request for an audio/video file and it is determined that the current heat value of the audio/video file satisfies the set heat range, obtaining the cache corresponding to the memory application request through the OS according to a set memory recovery algorithm.

After judging whether the memory application request is a memory application request for an audio/video file, the method further includes: when it is determined that the memory application request is a memory application request for a non-audio/video file, obtaining the cache corresponding to the memory application request through the OS according to a set memory recovery algorithm.

When it is determined that the memory application request is a memory application request for an audio/video file, acquiring the target page with the set size before the current access position according to the memory application request includes: when it is determined that the request is for an audio/video file with a first definition, acquiring a target page with a first set size before the current access position according to the memory application request; and when it is determined that the request is for an audio/video file with a second definition, acquiring a target page with a second set size before the current access position according to the memory application request, wherein the first definition is higher than the second definition, and the first set size is larger than the second set size.

A multimedia server comprising a processor and a memory for storing a computer program operable on the processor; the processor is configured to execute the memory cache management method according to any embodiment of the present application when the processor runs the computer program.

A computer storage medium, in which a computer program is stored, and which, when executed by a processor, implements a memory cache management method according to any embodiment of the present application.

In the memory cache management method provided by the foregoing embodiments, a memory application request is received; a target page with a set size before the current access position is acquired according to the request; and when the target page is obtained, it is used to store the data corresponding to the request and is inserted after the current access position. Thus, when it is determined that memory must be applied for, the target page with the set size before the current access position can be replaced directly. Compared with the known method of obtaining cache by having the operating system recover and release old cache pages with a uniform cache recovery algorithm, the method of this embodiment stores new file data in the directly replaced target page, achieving cyclic reuse of the cache without system calls. This greatly reduces the operating system's cache-swapping consumption and also reduces the poor user experience caused by delay.

Drawings

Fig. 1 is an architecture diagram of an application scenario of a memory cache management method according to an embodiment of the present invention;

Fig. 2 is a flowchart of a memory cache management method according to an embodiment of the present invention;

Fig. 3 is a flowchart of a memory cache management method according to another embodiment of the present invention;

Fig. 4 is a flowchart of a memory cache management method according to yet another embodiment of the present invention;

Fig. 5 is a flowchart of a memory cache management method according to a specific embodiment of the present invention;

Fig. 6 is a timing diagram of interaction between a client and an OS according to an embodiment of the present invention;

Fig. 7 is a schematic structural diagram of a multimedia server according to an embodiment of the present invention.

Detailed Description

The technical solution of the invention is further elaborated below with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to them.

1) Page: in memory management, memory is generally divided into units of a certain size, called pages for short.

2) Page cache: in an operating system, reading and writing a file generally means reading and writing a cache in memory, which is then synchronized with the hard disk through some mechanism; because memory is faster to read and write than the hard disk, this guarantees read/write performance. The memory used for this caching is generally called the page cache, since memory is generally managed in units of pages; in Linux, for example, it is called the page cache.

3) Pre-reading: when a file is read, pre-reading is generally used to ensure that the block submitted to the hard disk is a large block. Pre-reading means reading hard disk data in advance into the page cache, so that subsequent reads can hit the cache.

4) Zero copy (sendfile): when a file is sent over the network, it is not copied from kernel mode to user mode, but is sent directly from kernel mode to the network card, which then sends it to the peer device.

5) Cache recovery: because memory space is limited, as the cache grows, an operating system that caches pages must have a cache recovery mechanism, especially for a server whose workload is mainly file reading.

6) Least Recently Used (LRU) and Least Frequently Used (LFU): memory recovery algorithms used to manage and reclaim the cache; they manage and age the cache with linked lists, from the perspective of pages.

7) One shot mode: statistically, most files are accessed in a sequential-play manner. Video playing exhibits a long-tail effect: a small number of hot videos occupy little cache but serve many users, while most files are accessed only once within a period of time, serve few users, and occupy a large amount of cache. Accessing a file only once in this way is called One shot mode.

8) Radix tree: a data structure currently widely used to manage the cache of a given file, i.e., a way of managing the cache from the perspective of a single file.

As shown in Fig. 1, in an embodiment, an architecture diagram is provided for an application scenario adopting the memory cache management method of the embodiment of the present invention. The architecture includes a client 100 and a multimedia server 200, connected through a network. The client 100 sends a file playing instruction; after receiving it, the multimedia server 200 reads the slice file of the corresponding file into memory and, according to the access requests of the client 100, sends the data that hits the memory cache to the network card, which sends the data to the client device according to the specified network transmission protocol. In the embodiment of the present invention, services of this type are collectively called the model combining disk reading and packet sending. The memory cache management method provided by the embodiment of the present invention can be applied to this model: the multimedia server 200 reads the slice file into memory and, when it determines that memory is insufficient, obtains the page before the current access position for later accesses through the radix tree of the file cache management structure, and then uses that memory page to store the data pre-read from the hard disk. Because the page before the current access position is cache that has already been accessed, it will not be accessed again in the short term; the multimedia server 200 obtains such pages based on their position before the current access position and recycles them directly. This guarantees that playing a large file of several GB occupies only a very small amount of memory, about 2-5M, and in a media server with large bandwidth and heavy traffic it greatly reduces the operating system consumption caused by cache swapping.

In an embodiment, the multimedia server 200 is a CDN server in a scenario supporting the RTSP on-demand service. Internet IPTV television services must support RTSP on-demand; its basic service model is to read a film source stored on the CDN server, package it into RTP packets carried on UDP or TCP, and send them out through a network port of the CDN server device. The RTSP on-demand service is thus a typical model combining disk reading and packet sending.

In another embodiment, the multimedia server 200 is a CDN server in a scenario supporting the HLS on-demand service. Internet OTT services support HLS on-demand: a multimedia file fragment source stored on the CDN is read out according to the HLS fragment format requirements and sent to the requesting terminal over TCP by HTTP download, so the HLS on-demand service is also a typical model combining disk reading and packet sending.

In another embodiment, the multimedia server 200 is a CDN server in a scenario supporting the DASH on-demand service. Internet OTT services support DASH on-demand: a multimedia file source stored on the CDN is read according to the DASH fragment format requirements to support HTTP download and sent to the requesting terminal over TCP, so the DASH on-demand service is also a typical model combining disk reading and packet sending.

In another embodiment, the multimedia server 200 is a CDN server in a scenario supporting the HPD on-demand service. Internet OTT services support HPD large-file download on-demand: a multimedia file source stored on the CDN is read to support HTTP download and sent to the requesting terminal over TCP, so the HPD on-demand service is also a typical model combining disk reading and packet sending.

Referring to Fig. 2, which shows a flowchart of a memory cache management method according to an embodiment of the present invention, the memory cache management method includes the following steps:

Step 101: receiving a memory application request;

Step 103: acquiring a target page with a set size before the current access position according to the memory application request;

Step 105: when it is determined that the target page has been acquired, storing the data corresponding to the memory application request in the target page, and inserting the target page after the current access position.

Here, the multimedia server may receive the memory application request as follows: the multimedia server reads the corresponding slice file from the hard disk into memory according to a play instruction for a multimedia file sent by the client, and receives the memory application request when it determines that memory is insufficient. In a specific application, the multimedia server may further provide an interface for file access heat: the multimedia server reads the corresponding slice file from the hard disk into memory according to the play instruction sent by the client, and when the file access heat interface is called and predicts that the corresponding multimedia file is currently not a hot file, the multimedia server determines to receive the memory application request.

The multimedia server may obtain the target page with the set size before the current access position according to the memory application request. In a file cache management structure that uses a radix tree, such as Linux, given the offset position of the file currently being accessed, the target page located before the corresponding page is found through the pages in the radix tree. When the multimedia server determines that the target page has been acquired, it uses the target page to store the slice data read from the hard disk corresponding to the memory application request, and inserts the target page after the current access position. When the data for the target page is new, a direct Input/Output (IO) read can be initiated. This is equivalent to directly obtaining the target page before the current access position for replacement: by modifying position parameters related to the target page, it can be inserted into the cache of the corresponding file. Because direct memory replacement is adopted, operations on the linked lists in the radix tree are avoided and CPU consumption is reduced.
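As an illustration, the following is a minimal C sketch of this direct-replacement step, written against Linux-style kernel types; cache_lookup_page() and cache_move_page() are hypothetical helpers standing in for the radix-tree lookup and the position-parameter update described above, not real kernel APIs.

```c
/*
 * Sketch of acquiring the target page a set distance before the current
 * access position and re-inserting it after that position. The helpers
 * cache_lookup_page() and cache_move_page() are hypothetical stand-ins
 * for the radix-tree operations described in the text.
 */
struct page *acquire_target_page(struct address_space *mapping,
                                 pgoff_t cur_index, pgoff_t lookback)
{
    struct page *page;

    if (cur_index < lookback)
        return NULL;                 /* nothing far enough behind us yet */

    /* The page 'lookback' pages before the current access position has
     * already been consumed and will not be accessed again soon. */
    page = cache_lookup_page(mapping, cur_index - lookback);
    if (!page)
        return NULL;

    /* Re-key the page to the new offset by updating its position
     * parameters: no free/alloc cycle, no LRU list traversal. */
    cache_move_page(mapping, page, cur_index + 1);
    return page;
}
```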

In the above embodiment of the present invention, when the multimedia server determines that memory needs to be applied for, it can directly recover the target page with the set size before the current access position, instead of having the operating system obtain the required memory through the uniform cache recovery procedure, and it stores data in the directly recovered target page, which is equivalent to achieving cyclic reuse of the page cache.

Referring to Fig. 3, in an embodiment, after acquiring the target page with the set size before the current access position according to the memory application request, the memory cache management method of the embodiment of the present invention further includes:

Step 107: when it is determined that the target page has not been acquired, acquiring the cache corresponding to the memory application request through the operating system (OS) according to a set memory recovery algorithm.

Here, the set memory recovery algorithm is the LRU or LFU algorithm used to manage and reclaim the cache, i.e., a linked-list approach that manages and ages the cache from the perspective of pages. The OS acquiring the cache corresponding to the memory application request according to the set memory recovery algorithm means that, in the process of the operating system reading a file from disk into memory, when a request misses the cache, old cache is swapped out using the uniform LRU or LFU algorithm, and the freed memory is then used to read and write the new file.

In the above embodiment of the present invention, when applying for memory in the direct-recovery mode fails and the target page cannot be obtained, the OS can obtain memory from the cache of the corresponding CPU, which guarantees both the efficiency and the success rate of memory acquisition.
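A minimal sketch of the resulting two-step acquisition, building on the hypothetical acquire_target_page() from the sketch above; os_alloc_cache_page() is likewise an assumed stand-in for the OS's normal allocation path, behind which the set memory recovery algorithm (LRU/LFU) runs when memory is short.

```c
/*
 * Sketch of the fallback described above: try direct replacement first;
 * if no target page exists, fall back to a normal allocation through
 * the OS, which reclaims via its LRU/LFU lists as needed.
 */
struct page *get_cache_page(struct address_space *mapping,
                            pgoff_t index, pgoff_t lookback)
{
    struct page *page;

    page = acquire_target_page(mapping, index, lookback);
    if (page)
        return page;            /* replacement mode succeeded: no syscall */

    return os_alloc_cache_page(mapping);   /* recovery mode (LRU/LFU) */
}
```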

In one embodiment, step 101 of receiving a memory application request includes:

receiving a file pre-reading request, wherein the file pre-reading request carries size information of a memory block to be issued to the hard disk; or receiving a file reading request, wherein the file reading request carries the set page size information.

When a file is read, there are mainly two entry points for applying for memory. One is the pre-reading interface: pre-reading can increase the size of the block issued to the hard disk, guaranteeing IO performance. The other is the ordinary read interface: an ordinary read applies for memory by page size, which in Linux defaults to 4k.

In the above embodiments of the present invention, whether through the pre-reading interface or the ordinary read interface, the target page before the current access position can be directly recycled and obtained for replacement. Compare this with the existing approach of computing the memory to be swapped out with a uniform memory recovery algorithm: with LRU, for example, the LRU linked list must be traversed to release memory, and to speed up traversal the LRU list is split into an active list and an inactive list. For a media server with tens to hundreds of GB of memory, the lists are long, traversal is very expensive, and when memory is insufficient, the time needed by the recovery algorithm to satisfy a memory application is uncontrollable. In the embodiment of the present invention, whether a large memory block is applied for by pre-reading or a small one by ordinary reading, a memory application mode can be configured that directly replaces a target page with a set size acquired before the current access position, swapping the target page out of the file's cache. At swap-out time, the target page's position in a linked list need not be modified; the target page storing new data is inserted into the cache corresponding to the file by modifying the page's position parameters, so linked-list operations are avoided and system consumption is reduced. When memory is replaced, it can be cached in batches, and a later application can draw from the replaced memory, so memory is recycled directly; a large file of several GB occupies only 2-5M of memory, the consumption of cache swapping is greatly reduced, and the time to apply for memory becomes more controllable.
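As a small self-contained sketch of these two entry points, with assumed sizes (only the 4k Linux page default comes from the text):

```c
/*
 * Sketch of the two memory-application entry points described above.
 * READAHEAD_BLOCK is an assumed value; 4 KiB is the Linux default page
 * size mentioned in the text.
 */
#include <stdbool.h>
#include <stddef.h>

#define ORDINARY_PAGE   (4u * 1024)       /* Linux default page size */
#define READAHEAD_BLOCK (128u * 1024)     /* assumed pre-read block size */

struct mem_request {
    bool   pre_read;   /* true: pre-reading entry; false: ordinary read */
    size_t size;       /* block size issued to the hard disk, or page size */
};

static struct mem_request make_mem_request(bool pre_read)
{
    struct mem_request r = {
        .pre_read = pre_read,
        .size     = pre_read ? READAHEAD_BLOCK : ORDINARY_PAGE,
    };
    return r;
}
```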

In an embodiment, in the memory cache management method, after the storing, by using the target page, the data corresponding to the memory application request, the method further includes:

updating a reference count of the target page.

In the embodiment of the present invention, the system's cache recovery mechanism is unchanged; that is, the mechanism by which the OS releases cache according to the set memory recovery algorithm still operates. When the memory application mode of directly replacing a target page with a set size acquired before the current access position is used, the replaced target page stores new data, and updating the target page's reference count prevents the target page in the radix tree from being reclaimed by the system's cache recovery mechanism while it is being reused. Here, updating the reference count may mean increasing the reference count of the corresponding target page, which avoids the performance loss caused by locking; after the IO that reads into the target page has been initiated, the reference count is decreased, at which point the page points to the newest position in the file. After the reference count of the target page is updated, other data used to manage the cached page, such as the page identifier, may also be updated.
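A minimal sketch of this reference-count discipline, assuming Linux's get_page()/put_page() helpers; issue_direct_read() is a hypothetical stand-in for initiating the IO that fills the reused page:

```c
/*
 * Sketch of the refcount handling described above. get_page()/put_page()
 * are standard Linux page helpers; issue_direct_read() is a hypothetical
 * stand-in for initiating the IO that fills the reused page with new data.
 */
static void reuse_target_page(struct page *page, struct file *file,
                              loff_t new_pos)
{
    /* Raise the count first: the system's reclaim mechanism skips pages
     * with elevated reference counts, so the page cannot be reclaimed
     * while it is being reused. No extra lock is needed. */
    get_page(page);

    issue_direct_read(file, page, new_pos);   /* fill with new file data */

    /* Drop the count once the IO has been initiated; the page now points
     * to the newest position in the file. */
    put_page(page);
}
```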

In an embodiment, referring to Fig. 4, in the memory cache management method, before acquiring the target page with the set size before the current access position according to the memory application request, the method further includes:

Step 102: judging whether the memory application request is a memory application request for an audio/video file, and determining whether the current heat value of the audio/video file is smaller than a set heat range;

the obtaining of the target page with the set size before the current access position according to the memory application request includes:

Step 1051: when the memory application request is a memory application request for an audio/video file and it is determined that the current heat value of the audio/video file is smaller than the set heat range, acquiring a target page with a set size before the current access position according to the memory application request.

Further, after determining whether the memory application request is a memory application request for an audio/video file, the method further includes:

Step 1052: when the memory application request is a memory application request for an audio/video file and it is determined that the current heat value of the audio/video file satisfies the set heat range, obtaining the cache corresponding to the memory application request through the OS according to the set memory recovery algorithm.

Here, the media server may choose, according to whether the file being read for the memory application request is an audio/video file and according to the current heat of that audio/video file, between the memory application mode that acquires a target page with a set size before the current access position for direct replacement and the memory recovery mode in which the OS acquires the cache according to the set memory recovery algorithm. The multimedia server can provide an interface for file access heat. For files whose heat can be predicted, hot dramas, movies, ball games, and the like can be identified, for example, from on-demand records and the front-page display of an Electronic Program Guide (EPG); the predicted heat and the corresponding time attribute are placed in the slice file. For an audio/video file whose current heat value falls within the set heat range, memory is obtained in the recovery mode, i.e., the OS acquires the cache according to the set memory recovery algorithm; otherwise, memory is obtained in the application mode that directly replaces a target page with a set size acquired before the current access position. For audio/video files whose heat value satisfies the set heat range, many users access them in the corresponding time period, so using the OS recovery mode keeps the cache of frequently accessed files in memory as much as possible and reduces cache replacement. Here, determining that the current heat value of the audio/video file satisfies the set heat range may be done according to the heat-value information carried in the slice file corresponding to the audio/video file.

It should be noted that a file's heat holds only within a certain time range; a new TV series episode, for example, becomes almost unwatched after a certain period. Through the file access heat interface, the heat value of the slice file corresponding to a file is calculated and updated in real time based on a set heat-value calculation rule, and the heat value and the corresponding time attribute are saved in the slice file. For a file that suddenly becomes hot, as its access count rises rapidly, the acquisition mode can likewise be switched from the memory application mode that directly replaces a target page with a set size acquired before the current access position to the memory recovery mode that acquires the cache with the set memory recovery algorithm, ensuring that its cache stays in memory as much as possible.
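A self-contained sketch of this mode selection; the file_meta fields and the HEAT_THRESHOLD value are illustrative assumptions, since the text only speaks of a "set heat range":

```c
/*
 * Sketch of the mode selection described above. The file_meta fields and
 * HEAT_THRESHOLD are illustrative assumptions, not the patent's concrete
 * values.
 */
#include <stdbool.h>

enum alloc_mode {
    MODE_REPLACE,     /* direct replacement before current access position */
    MODE_OS_RECLAIM,  /* OS cache recovery via the set algorithm (LRU/LFU) */
};

struct file_meta {
    bool is_audio_video;  /* is this request for an audio/video file?     */
    int  heat_value;      /* current heat, kept updated in the slice file */
};

#define HEAT_THRESHOLD 100   /* assumed lower bound of the set heat range */

enum alloc_mode choose_mode(const struct file_meta *m)
{
    if (!m->is_audio_video)
        return MODE_OS_RECLAIM;           /* non-A/V files: OS recovery  */
    if (m->heat_value >= HEAT_THRESHOLD)
        return MODE_OS_RECLAIM;           /* hot files stay in the cache */
    return MODE_REPLACE;                  /* cold A/V files: replacement */
}
```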

In one embodiment, in the memory cache management method, when it is determined that the memory application request is a memory application request for an audio/video file, acquiring a target page with a set size before a current access position according to the memory application request includes:

when it is determined that the memory application request is a memory application request for an audio/video file with a first definition, acquiring a target page with a first set size before the current access position according to the memory application request;

when it is determined that the memory application request is a memory application request for an audio/video file with a second definition, acquiring a target page with a second set size before the current access position according to the memory application request, wherein the first definition is higher than the second definition, and the first set size is larger than the second set size.

Here, when memory is to be applied for, the available cache can be searched according to a search position passed in from user space, for example the first few M before the current access position. For play requests for audio/video files of different definitions, the efficiency of sending data packets, from reading data off the hard disk to forwarding packets from memory, differs; that is, the number of pages to be sent within a given time also differs greatly. In the embodiment of the present invention, when memory is obtained in the application mode that directly replaces a target page with a set size acquired before the current access position, the position at which the target page is applied for differs for audio/video files of different definitions; that is, the position to be recovered can be adjusted according to the bit rate of the audio/video file. Taking the first definition as high definition: many pages are sent within a given time, for example at a rate of 20M/s, so the target page must be applied for at a position relatively far before the current access position, for example 20M before it, to avoid touching a nearby page that may not yet have been released by the network card. Taking the second definition as standard definition: relatively few pages are sent within a given time, so the target page can be applied for at a position relatively close to the current access position, for example 2M before it; such a page has certainly been sent by the network card and can safely be reused. It should be noted that the memory application requests for audio/video files with the first and second definitions may refer to requests for different audio/video files of different definitions, or to requests for sections of different definitions within the same audio/video file. The audio/video files with the first and second definitions correspond to different bit rates, and "high definition" and "standard definition" denote definition ranges rather than specific definition values.
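A sketch of this definition-dependent lookback, using the example distances from the text (about 20M behind the current access position for high definition, about 2M for standard definition); the enum and the exact mapping are assumptions:

```c
/*
 * Sketch of the definition-dependent lookback described above, using the
 * example distances from the text. The enum and exact mapping are
 * assumptions, not concrete values from the patent.
 */
#include <stddef.h>

enum av_definition { DEF_STANDARD, DEF_HIGH };

size_t lookback_bytes(enum av_definition def)
{
    switch (def) {
    case DEF_HIGH:
        /* Far enough back that the network card has surely released
         * the pages (first set size). */
        return 20u * 1024 * 1024;
    case DEF_STANDARD:
    default:
        /* Lower bit rate, fewer in-flight pages: a short lookback is
         * safe (second set size). */
        return 2u * 1024 * 1024;
    }
}
```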

To further clarify the implementation principle of the memory cache management method provided by the embodiment of the present invention, the flow is described below taking a media server running a Linux system as an example. Referring to Fig. 5: the Linux system uses a radix tree to manage the cache pages of a single file; pointers to the cache pages are stored in the nodes of the radix tree; each file has its own inode, and the inode finds the corresponding cache radix tree through address_space. When a user requests a page from the cache, the page is obtained from the radix tree. The method includes the following steps (a consolidated code sketch follows the step list):

Step S11: obtaining a file reading request, and receiving a memory application request when it is determined that memory is insufficient;

Step S12: determining whether the currently requested file is a hotspot file; if not, executing steps S13 to S15; if yes, executing step S16;

Step S13: setting the memory application mode to the replacement mode, in which a target page with a set size before the current access position is acquired according to the memory application request; the replacement mode is the memory application mode that obtains memory by directly replacing a target page with a set size acquired before the current access position;

Step S14: judging whether the target page has been acquired;

Step S15: when it is determined that the target page has been acquired, storing the data corresponding to the memory application request in the target page, and increasing the reference count of the target page after inserting it after the current access position; when it is determined that the target page has not been acquired, executing step S16;

Step S16: setting the memory application mode to the recovery mode, in which the cache corresponding to the memory application request is obtained through the operating system (OS) according to a set memory recovery algorithm; the recovery mode is short for the memory recovery mode that obtains the cache with a set memory recovery algorithm;

Step S17: pre-reading into the cache acquired through the replacement mode or the recovery mode, and returning the file reading result to the client.
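Pulling the steps together, the following consolidated sketch of the S11-S17 flow reuses the hypothetical helpers from the earlier sketches (choose_mode(), acquire_target_page(), os_alloc_cache_page(), issue_direct_read()); it is an illustration under those assumptions, not the patent's concrete implementation.

```c
/*
 * End-to-end sketch of steps S11-S17, reusing the hypothetical helpers
 * from the earlier sketches. Error handling is omitted for brevity.
 */
struct page *serve_read_request(struct address_space *mapping,
                                struct file *file,
                                const struct file_meta *meta,
                                pgoff_t index, pgoff_t lookback)
{
    struct page *page = NULL;

    /* S12/S13: non-hot files take the replacement mode. */
    if (choose_mode(meta) == MODE_REPLACE)
        page = acquire_target_page(mapping, index, lookback);

    if (page) {
        /* S15: pin the reused page so reclaim skips it. */
        get_page(page);
    } else {
        /* S16: hot file, or no target page found (S14): fall back to
         * the OS recovery mode (set memory recovery algorithm). */
        page = os_alloc_cache_page(mapping);
    }

    /* S17: fill the page by (pre-)reading from disk; the caller then
     * returns the read result to the client. */
    issue_direct_read(file, page, (loff_t)index << PAGE_SHIFT);
    return page;
}
```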

The memory cache management method can be implemented by configuring the OS's memory-application behavior through a set interface call, such as a /proc or /sys interface. Meanwhile, because the memory search position differs for each file, an interface for setting the search position can be provided to the user, or a default search range can be used. Interface calls broaden the applicability of the memory cache management method: similar interfaces can implement the corresponding functions on open-source Unix or Unix-like OSs, as well as on closed-source OSs such as Windows; the method can be implemented in the OS as a kernel feature, or built as a kernel module and loaded into the kernel.
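For illustration, a user-space sketch of driving such a configuration knob; the /proc path and value format below are purely assumptions, not an interface defined by the patent or the Linux kernel:

```c
/*
 * User-space sketch of configuring the memory application mode through a
 * /proc interface. The path and value format are illustrative assumptions.
 */
#include <stdio.h>

int set_replace_mode(int enabled)
{
    FILE *f = fopen("/proc/sys/vm/cache_replace_mode", "w"); /* assumed path */
    if (!f)
        return -1;
    fprintf(f, "%d\n", enabled);   /* 1: replacement mode, 0: OS recovery */
    return fclose(f);
}
```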

Referring to Fig. 6, which shows a timing diagram of the interaction between a client and the OS of a multimedia server: the OS returns a file handle in response to the client's file reading request; the client queries the file's heat through the set file-access-heat interface using the file handle; and the OS sets the memory mode according to the file's heat. When the file heat is below the set heat range, the memory mode is set to the application mode in which a target page with a set size is acquired before the current access position for direct replacement; on receiving the client's request to read or send the file, the OS directly obtains the target page before the current access position for replacement and cyclic use, and returns the read or send result to the client. When the file heat value is above the set heat range, or the target page is not acquired, the memory mode is set to the recovery mode in which the cache is acquired with the set memory recovery algorithm; the OS releases cache through the LRU or LFU algorithm, swapping out old cache to read and write the new file.

Taking as an example the application of the memory cache management method to a CDN server supporting the RTSP on-demand service: for file play requests sent by clients, when disk reading and packet sending are performed using this memory cache management method, the performance comparison data of the CDN server are shown in Table One and Table Two below:

Table One: [provided as an image in the original publication]

Table Two: [provided as an image in the original publication]

Comparing Table One with Table Two: Table One shows the server with the memory cache management method provided by the embodiment of the present invention loaded; under 93G of performance traffic, CPU system occupation is only about 25%, idle stays above 45%, and the figures are relatively stable. Table Two shows the server without the method loaded; under the same 93G of performance traffic, CPU system occupation reaches about 57%-61%, and idle is below 15%, often fluctuating under 10%.

In one embodiment, a memory cache management device is provided, which may be, for example, an interface for setting the memory mode, and which is configured to: receive a memory application request; acquire a target page with a set size before the current access position according to the memory application request; and, when it is determined that the target page has been obtained, store the data corresponding to the memory application request in the target page and insert the target page after the current access position.

The memory cache management device is further configured to, when it is determined that the target page has not been acquired, acquire the cache corresponding to the memory application request through the operating system (OS) according to a set memory recovery algorithm.

The memory cache management device, configured to receive a memory application request, may specifically receive a file pre-reading request, the file pre-reading request carrying size information of a memory block to be issued to the hard disk; or receive a file reading request, the file reading request carrying the set page size information.

The memory cache management device is further configured to update the reference count of the target page after the target page is used to store the data corresponding to the memory application request.

The memory cache management device is further configured to, before the target page with the set size before the current access position is acquired according to the memory application request, judge whether the memory application request is a memory application request for an audio/video file and determine whether the current heat value of the audio/video file is smaller than a set heat range; acquiring the target page with the set size before the current access position according to the memory application request then includes: when it is determined that the current heat value of the audio/video file is smaller than the set heat range, acquiring a target page with a set size before the current access position according to the memory application request.

The memory cache management device is further configured to, after judging whether the memory application request is a memory application request for an audio/video file, obtain the cache corresponding to the memory application request through the OS according to the set memory recovery algorithm when the request is for an audio/video file and it is determined that the current heat value of the audio/video file satisfies the set heat range.

The memory cache management device is further configured to, after judging whether the memory application request is a memory application request for an audio/video file, obtain the cache corresponding to the memory application request through the OS according to the set memory recovery algorithm when it is determined that the memory application request is for a non-audio/video file.

The memory cache management device, configured to acquire a target page with a set size before the current access position according to the memory application request, may specifically: when it is determined that the memory application request is a memory application request for an audio/video file with a first definition, acquire a target page with a first set size before the current access position according to the memory application request; and when it is determined that the memory application request is a memory application request for an audio/video file with a second definition, acquire a target page with a second set size before the current access position according to the memory application request, wherein the first definition is higher than the second definition, and the first set size is larger than the second set size.

It should be noted that: the memory cache management device and the memory cache management method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.

Referring to Fig. 7, an embodiment of the present invention further provides a multimedia server, comprising a processor 201 and a storage medium 202 for storing a computer program capable of running on the processor 201, wherein the processor 201 is configured, when running the computer program, to execute the steps of the memory cache management method of any embodiment of the present application. Here, the processor 201 and the storage medium 202 are not limited to one each; there may be one or more of each. The multimedia server further includes a memory 203, a network interface 204, and a system bus 205 connecting the processor 201, the memory 203, the network interface 204, and the storage medium 202. The storage medium 202 stores an operating system and a memory cache management device implementing the memory cache management method provided by the embodiment of the present invention; the processor 201 provides the computing and control capability that supports the operation of the entire multimedia server. The memory 203 provides an environment for running the memory cache management method in the storage medium 202, and the network interface 204 performs network communication with clients, receiving and sending data, for example receiving a file reading request sent by a client and returning the file reading result to it.

The embodiment of the present invention further provides a computer storage medium, for example a memory storing a computer program, where the computer program can be executed by a processor to perform the steps of the memory cache management method provided by any embodiment of the present application. The computer storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, an optical disc, or a CD-ROM, or various devices including one or any combination of the above memories.

The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the invention shall therefore be determined by the appended claims.
