Live broadcast cache performance optimization method and system, electronic device and storage medium

Document No.: 196289    Publication date: 2021-11-02

Reading note: This technology, "Live broadcast cache performance optimization method and system, electronic device and storage medium" (直播缓存的性能优化方法、系统、电子装置及存储介质), was designed and created by Yang Dawei (杨大维) and Xu Xiaolong (徐小龙) on 2021-08-09. Its main content is as follows: the invention discloses a method, a system, an electronic device and a storage medium for optimizing the performance of a live broadcast cache, wherein the method comprises: providing a calling interface for the live broadcast application, the calling interface being used for receiving read-cache or write-cache instructions from the live broadcast application; allocating a group of memory blocks to each live channel of the live broadcast application, the memory blocks being used for storing the most recently acquired live data of the corresponding live channel; providing a corresponding virtual file for each live channel, the virtual file being used for encapsulating the live data of the corresponding live channel; receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file. The invention can simplify the use of the cache, improve the concurrency performance of live broadcasting, and reduce live broadcast delay.

1. A performance optimization method for a live broadcast cache is characterized by comprising the following steps:

providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving read-cache or write-cache instructions from the live broadcast application;

allocating a group of memory blocks to each live channel of the live broadcast application, wherein the memory blocks are used for storing the most recently acquired live data of the corresponding live channel;

providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;

receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file;

or, receiving a cache reading instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.

2. The method of optimizing performance of a live cache of claim 1,

the writing of the cache data into the memory block corresponding to the virtual file includes:

acquiring, from the latest live broadcast data, a frame of data to be written;

determining the memory block position to which the frame of data to be written should be written;

writing corresponding frame data into the corresponding memory block according to the position of the memory block;

and writing each frame of the latest live broadcast data into the memory block according to the time sequence.

3. The method of optimizing performance of a live cache of claim 2,

writing the corresponding frame of data in the corresponding memory block according to the memory block position includes:

acquiring a write index value of the frame of data to be written and a first numerical value representing the size of the frame data array;

dividing the write index value by the first numerical value and taking the remainder, wherein the remainder is used as the numbered position of the storage area of the memory block into which the frame of data to be written should be written;

and writing the frame of data into the memory block corresponding to the numbered position.

4. The method of optimizing performance of a live cache of claim 3,

after writing the corresponding frame of data into the corresponding memory block according to the memory block position, writing the cache data into the memory block corresponding to the virtual file further includes:

when the written frame of data is pointed to a new memory block by the frame data array, setting the frame type at the written position by using the frame type array so as to record the frame type of the memory block at that position, releasing the old memory block previously pointed to by the frame data array, and adding one to the write index value.

5. The method of optimizing performance of a live cache of claim 1,

the reading of the cache data from the memory block corresponding to the virtual file includes:

determining the position of a memory block to which a frame of data needing to be read belongs;

reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs;

and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.

6. The method of optimizing performance of a live cache of claim 5,

the determining the memory block location to which the frame of data needing to be read belongs includes:

recording a read index value of a frame of data to be read, wherein the initial value of the read index value is zero;

calculating, for the frame of data indicated by the read index value, the time difference between writing the cache and reading the cache, so as to obtain delay data;

judging whether to skip frames to read the live broadcast data or not according to the delay data and a preset frame skipping threshold;

if so, determining the position corresponding to the frame data needing to be read after frame skipping;

if not, determining the position corresponding to the currently read frame data.

7. The method of optimizing performance of a live cache of claim 6,

determining the position corresponding to the frame of data to be read after frame skipping comprises: acquiring adjacent frame data of a frame closest to the currently read frame data; assigning the position corresponding to the adjacent frame data to the read index value, and calculating the position corresponding to the frame of data to be read according to the read index value and the frame data array;

after reading a frame of data from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: adding one to the read index value, so that the next frame of data is read from the memory block corresponding to the virtual file.

8. A system for optimizing performance of a live cache, comprising:

the application interface layer module is used for providing a calling interface for a live broadcast application, and the calling interface is used for receiving read-cache or write-cache instructions from the live broadcast application;

the virtual file layer module is used for providing a corresponding virtual file for each live channel;

the memory channel layer module is used for allocating a group of memory blocks to each live channel of the live broadcast application, and the memory blocks are used for storing newly acquired live broadcast data of the corresponding live broadcast channel;

the data writing module is used for receiving a writing cache instruction of the live broadcast application through the calling interface, opening the virtual file and writing cache data into the memory block corresponding to the virtual file;

and the data reading module is used for receiving a reading cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.

9. An electronic device, comprising: a memory and a processor, wherein the memory stores a computer program operable on the processor, and when the processor executes the computer program, the method for optimizing the performance of a live broadcast cache according to any one of claims 1 to 7 is implemented.

10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for optimizing the performance of a live cache according to any one of claims 1 to 7.

Technical Field

The present invention relates to the field of cache optimization technologies, and in particular, to a method, a system, an electronic device, and a storage medium for optimizing performance of a live broadcast cache.

Background

With the development of science and technology, live broadcasting has become an indispensable part of our lives, for example in online education, video conferencing, live broadcasts of large-scale galas, training live broadcasts, event live broadcasts, and the like.

With the increase in network bandwidth, the number of viewers watching live broadcasts has grown, the concurrency of live broadcasts has risen, and the requirements on live broadcast latency have become more demanding.

Some existing live broadcast systems are not general-purpose, read and write operations at the application layer are inconvenient, and live broadcast concurrency performance is poor.

Disclosure of Invention

The main object of the present invention is to provide a performance optimization method, system, electronic device and storage medium for a live broadcast cache, so as to solve the problems in the prior art that the cache system is not general-purpose and read-write operations at the application layer are inconvenient, which results in poor live broadcast concurrency performance.

In order to achieve the above object, a first aspect of the present invention provides a method for optimizing the performance of a live broadcast cache, including: providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving read-cache or write-cache instructions from the live broadcast application; allocating a group of memory blocks to each live channel of the live broadcast application, wherein the memory blocks are used for storing the most recently acquired live data of the corresponding live channel; providing a corresponding virtual file for each live channel, wherein the virtual file is used for encapsulating the live data of the corresponding live channel; receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or, receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.

Wherein the writing of the cache data into the memory block corresponding to the virtual file comprises: acquiring, from the latest live broadcast data, a frame of data to be written; determining the memory block position to which the frame of data to be written should be written; writing the frame of data into the corresponding memory block according to the memory block position; and writing each frame of the latest live broadcast data into the memory block in time order.

Wherein writing the frame of data into the memory block comprises: acquiring a write index value of the frame of data to be written and a first numerical value representing the size of the frame data array; dividing the write index value by the first numerical value and taking the remainder, wherein the remainder is used as the numbered position of the storage area of the memory block into which the frame of data should be written; and writing the frame of data into the memory block corresponding to the numbered position.

After writing a frame of data into the memory block, the writing of cache data into the memory block corresponding to the virtual file further includes: when the written frame of data is pointed to a new memory block by the frame data array, setting the frame type at the written position by using the frame type array so as to record the frame type of the memory block at that position, releasing the old memory block previously pointed to by the frame data array, and adding one to the write index value.

Wherein the reading the cache data from the memory block corresponding to the virtual file comprises: determining the position of a memory block to which a frame of data needing to be read belongs; reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs; and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.

Wherein the determining of the memory block position of the frame of data to be read includes: recording a read index value of the frame of data to be read, wherein the initial value of the read index value is zero; calculating, for the frame of data indicated by the read index value, the time difference between writing the cache and reading the cache, so as to obtain delay data; judging whether to read the live broadcast data with frame skipping according to the delay data and a preset frame skipping threshold; if so, determining the position corresponding to the frame of data to be read after frame skipping; if not, determining the position corresponding to the currently read frame of data.

Wherein the determining of the position corresponding to the frame of data to be read after frame skipping includes: acquiring adjacent frame data of a frame closest to the currently read frame data; and assigning the position corresponding to the adjacent frame data to the read index value, and calculating the position corresponding to the frame of data to be read according to the read index value and the frame data array. After reading a frame of data from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: adding one to the read index value, so that the next frame of data is read from the memory block corresponding to the virtual file.

A second aspect of the present application provides a performance optimization system for a live broadcast cache, including: an application interface layer module, used for providing a calling interface for a live broadcast application, the calling interface being used for receiving read-cache or write-cache instructions from the live broadcast application; a memory channel layer module, used for allocating a group of memory blocks to each live channel of the live broadcast application, the memory blocks being used for storing the most recently acquired live data of the corresponding live channel; a virtual file layer module, used for providing a corresponding virtual file for each live channel; a data writing module, used for receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; and a data reading module, used for receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.

A third aspect of the present application provides an electronic apparatus, comprising: a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor implements any one of the above methods for optimizing the performance of a live broadcast cache when executing the computer program.

A fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for optimizing performance of a live cache as described in any one of the above.

The present invention provides a method, a system, an electronic device and a storage medium for optimizing the performance of a live broadcast cache, with the following advantages: a unified calling interface is provided, which is convenient for the application layer where the live broadcast application is located to use and improves the maintainability and reliability of the live broadcast system; this simplifies the use of the cache, improves the concurrency of live broadcasting, and reduces live broadcast delay.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

Fig. 1 is a schematic flowchart of a performance optimization method for live broadcast caching according to an embodiment of the present application;

fig. 2 is a schematic flow chart illustrating a process of writing cache data into a memory block corresponding to a virtual file according to the performance optimization method for live caching in an embodiment of the present application;

fig. 3 is a schematic flowchart illustrating a process of writing one frame of data into the corresponding memory block according to the performance optimization method for live broadcast caching in an embodiment of the present application;

fig. 4 is a schematic flowchart illustrating a process of reading cache data from a memory block corresponding to a virtual file according to a performance optimization method for live caching in an embodiment of the present application;

fig. 5 is a schematic flowchart illustrating a process of determining the memory block position of a frame of data to be read according to the performance optimization method for live broadcast caching in an embodiment of the present application;

fig. 6 is a block diagram illustrating a structure of a performance optimization system of a live broadcast cache according to an embodiment of the present application;

fig. 7 is a block diagram illustrating a structure of an electronic device according to an embodiment of the disclosure.

Detailed Description

In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Referring to fig. 1, a method for optimizing performance of a live broadcast cache includes:

s101, providing a calling interface for the live broadcast application, wherein the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application;

s102, distributing a group of memory blocks to each live channel of live broadcast application, wherein the memory blocks are used for storing newly acquired live broadcast data of the corresponding live broadcast channel;

s103, providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;

s104, receiving a write cache instruction of the live broadcast application through a calling interface, opening a virtual file, and writing cache data into a memory block corresponding to the virtual file;

and S105, receiving a reading cache instruction of the live broadcast application through a calling interface, opening a virtual file, and reading cache data from a memory block corresponding to the virtual file.

In step S101, at least four calling interfaces are provided, namely the Open, Write, Read, and Close interfaces.

The Open interface is used for opening a virtual file; it takes a live channel url as its parameter and returns a virtual file ID after the virtual file is successfully opened.

The Read interface is used for reading the latest live channel data; it takes a virtual file ID and returns one complete frame block of data. When the application layer does not call the Read interface in time, the data may become delayed; in that case the live broadcast cache system automatically skips the expired data and returns data from the latest position point, ensuring low live broadcast delay.

The Write interface is used for writing the latest live data; it takes a virtual file ID and one complete frame block.

The Close interface is used for closing the virtual file and releasing the resources.
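The following is a minimal C++ sketch of what this four-call interface could look like; the type names VirtualFileId, FrameBlock and FrameType are illustrative assumptions, since the text does not fix concrete signatures.

```cpp
// Minimal interface sketch; names and signatures are illustrative assumptions.
#include <cstdint>
#include <string>
#include <vector>

using VirtualFileId = int;

enum class FrameType { I, P, B };

struct FrameBlock {
    std::vector<uint8_t> data;  // one complete frame of encoded live data
    FrameType type;             // frame type carried together with the frame
};

// Open: opens a virtual file; the live channel url is the only parameter,
// and a virtual file ID is returned on success.
VirtualFileId Open(const std::string& channel_url);

// Write: writes the latest live data, given the virtual file ID and one
// complete frame block.
bool Write(VirtualFileId id, const FrameBlock& frame);

// Read: reads the latest live channel data, given the virtual file ID;
// returns one complete frame block (expired data may be skipped internally).
bool Read(VirtualFileId id, FrameBlock* out);

// Close: closes the virtual file and releases its resources.
void Close(VirtualFileId id);
```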

In step S102, a memory channel (MemChannel) is first created. The memory channel has two fixed-size arrays, FData (a frame data array) and FType (a frame type array). FType is an int-type array that records the video frame type (e.g., I, P, or B frame); FData is an array of Block pointers, each pointing to a Block cache block. A Block caches one complete frame of data.

The memory channel has a write index value, write_idx, which records the write position of the video data; when data is written, write_idx is converted into an array index. For example, if the array size is 100 and the current write_idx is 120, the write position is the remainder of 120 divided by 100, i.e. position 20: position 20 of the FData array is pointed to the new Block, and the Block originally at that position is released as a free Block.

The FType array records the frame type of the FData entry at the same position and is used to find an I frame when frames are dropped.
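For illustration, a minimal C++ sketch of the memory channel described above follows; the array size, the use of shared_ptr for Block ownership, and the integer frame-type codes are assumptions rather than details fixed by the text.

```cpp
// Minimal sketch of the memory channel; sizes and ownership are assumptions.
#include <array>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

constexpr std::size_t kFrameArraySize = 100;  // fixed size of FData and FType

struct Block {
    std::vector<uint8_t> data;  // one complete frame of data
};

struct MemChannel {
    // FData: array of Block pointers; each entry points to one cached frame.
    std::array<std::shared_ptr<Block>, kFrameArraySize> FData;
    // FType: int array recording the frame type (e.g. I, P, B) of the frame
    // stored at the same position in FData.
    std::array<int, kFrameArraySize> FType{};
    // write_idx: ever-increasing write position; the array index actually
    // used is write_idx % kFrameArraySize.
    uint64_t write_idx = 0;
};
```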

In step S103, virtual files are provided by the virtual file layer. Each user who wants to read or write the data of a live channel needs to open a virtual file (VirtualFile) and then performs operations through its file ID.

In step S104 and step S105, data transmission is performed through the network transmission layer, for example in HTTP chunked mode: one frame of data is packed into one chunk, so that after receiving a chunk the receiving end knows that it contains exactly one frame of data and knows its frame type. The receiving end therefore does not need to re-align the frame data or determine the frame type again, which improves its efficiency. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the transmitting end appends this byte when sending the chunk data; the receiving end determines the frame type from this byte when receiving the data and removes the byte when saving the frame to a Block.
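A minimal sketch of this chunk framing follows, under the assumption that the frame-type byte is simply appended to and stripped from the chunk payload as described.

```cpp
// Minimal sketch of the chunk framing: one frame per chunk, with one trailing
// byte recording the frame type (an illustrative assumption of the byte layout).
#include <cstdint>
#include <utility>
#include <vector>

// Sender side: append the frame-type byte to the frame payload.
std::vector<uint8_t> PackChunk(const std::vector<uint8_t>& frame, uint8_t frame_type) {
    std::vector<uint8_t> chunk(frame);
    chunk.push_back(frame_type);  // last byte of the chunk records the frame type
    return chunk;
}

// Receiver side: read the frame type from the last byte and strip it
// before the payload is saved into a Block.
bool UnpackChunk(std::vector<uint8_t> chunk, std::vector<uint8_t>* frame, uint8_t* frame_type) {
    if (chunk.empty()) return false;
    *frame_type = chunk.back();
    chunk.pop_back();             // remove the type byte before storing
    *frame = std::move(chunk);
    return true;
}
```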

Because the network transmission layer transmits the live broadcast data according to the data frame and the frame type, the live broadcast delay can be reduced.

In this embodiment, the live data to be cached may be audio and video data encapsulated in a TS container, or may be raw data encapsulated in some other format. In addition, the performance optimization method for the live broadcast cache provided by this embodiment may be deployed on a Windows system, a Linux system, a Unix system, or the like, on the electronic device.

In this embodiment, a unified calling interface is provided, which is convenient for the application layer where the live broadcast application is located to use and improves the maintainability and reliability of the live broadcast system, thereby simplifying the use of the cache, improving the concurrency performance of live broadcasting, and reducing live broadcast delay.

In addition, data is transmitted in HTTP chunked mode according to video frames and frame types, which reduces the CPU consumption of the receiving end for parsing data and assembling frames, improves system performance, and increases concurrency.

Referring to fig. 2, in an embodiment, writing the cache data into the memory block corresponding to the virtual file in step S104 includes:

s1041, acquiring a frame of data needing to be written in for the latest live broadcast data;

s1042, determining the position of a memory block to which the written data of one frame should be written;

s1043, writing corresponding frame data in the corresponding memory block according to the position of the memory block;

and S1044, writing each frame of the latest live broadcast data into the memory block according to the time sequence.

When a virtual file is opened, the url of the live channel is passed in as the only parameter. A file ID is returned after the file is opened successfully, and this file ID is carried in subsequent operations. Each VirtualFile is associated with a MemChannel.

The Write interface is then called to write a frame of data, with the Block and the frame type as input parameters.

Referring to fig. 3, in an embodiment, the step S1043 of writing the frame data in the memory block includes:

s10431, obtaining a write index value of a frame data to be written and a first numerical value of the frame number group size;

s10432, using the write index value and the first numerical value to obtain a remainder of the quotient, and using the remainder as a number position of a storage area of a memory block into which the data of one frame to be written should be written;

s10433, writing the frame data into the memory block corresponding to the serial number position.

MemChannel determines the write position by write_idx. Specifically, when step S10432 is implemented, assuming for example that the FData array size is 100 and the current write_idx is 120, the write position is the remainder of 120 divided by 100, i.e. 20, and 20 is the numbered position of the storage area of the memory block into which the frame of data should be written.
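The position calculation itself is a single modulo operation; a minimal sketch, matching the 120 % 100 = 20 example above:

```cpp
// The write position is the remainder of write_idx divided by the array size,
// e.g. WritePosition(120, 100) == 20 as in the example above.
#include <cstddef>
#include <cstdint>

std::size_t WritePosition(uint64_t write_idx, std::size_t array_size) {
    return static_cast<std::size_t>(write_idx % array_size);
}
```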

In an embodiment, in step S104, after writing a frame of data into the memory block, writing cache data into the memory block corresponding to the virtual file further includes:

s1044 setting the location of the written memory block to a new frame type by using the frame type array when pointing the written frame data to the new memory block according to the frame array, recording the frame type of the memory block at the location, releasing the frame group from the old memory block to which the written live data points, and adding one to the write index value.

Referring to fig. 4, in an embodiment, in step S105, reading the cache data from the memory block corresponding to the virtual file includes:

s1051, determining the position of a memory block to which a frame of data needing to be read belongs;

s1052, reading a frame of data in the corresponding memory block according to the memory block position to which the frame of data belongs;

and S1053, reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.

In this embodiment, a VirtualFile is opened with the url of the live channel as the only parameter. A file ID is returned after the file is opened successfully, and this file ID is carried in subsequent operations. Each VirtualFile is associated with a MemChannel. The Read interface is then called to read one frame of data.

Referring to fig. 5, in an embodiment, the step S1052 of determining the memory block location of the frame data to be read includes:

s10521, recording a read index value of a frame of data needing to be read, wherein the initial value of the read index value is zero;

s10522, calculating a frame of data of the read index value, and performing time difference between write caching and read caching to obtain delay data;

s10523, judging whether to skip the frame to read the live broadcast data according to the delay data and a preset frame skipping threshold;

s10524, if yes, determining the position corresponding to the frame data which needs to be read after frame skipping;

s10525, if not, determining the position corresponding to the currently read frame data.

VirtualFile maintains a read index, read_idx, with an initial value of 0. VirtualFile reads data from the MemChannel via read_idx.

In step S10523, MemChannel determines the delay time from read_idx and write_idx. If write_idx minus read_idx equals 10, this indicates a delay of 10 frames.

Whether the delay is greater than a configured threshold is judged, and frame skipping is started if it is; frame skipping keeps the live broadcast delay low at all times, thereby reducing live broadcast latency.

In an embodiment, in step S10524, determining the position corresponding to a frame of data that needs to be read after frame skipping includes: acquiring adjacent frame data of the frame closest to the currently read frame data; and assigning the position corresponding to the adjacent frame data to the read index value, and calculating the position corresponding to the frame of data to be read according to the read index value and the frame data array.

When skipping frames, the I frame closest to the write_idx position is found and its position is assigned to read_idx.
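Continuing the same sketch, the frame-skip decision and the search for the I frame nearest to write_idx might look as follows; the FType code chosen for an I frame and the omission of handling for entries already overwritten in the ring are assumptions of this illustration.

```cpp
// Frame-skip decision, continuing the MemChannel sketch above.
constexpr int kIFrame = 1;  // assumed FType code for an I frame

uint64_t NextReadIndex(const MemChannel& ch, uint64_t read_idx, uint64_t skip_threshold) {
    const uint64_t delay_frames = ch.write_idx - read_idx;  // e.g. 10 => 10-frame delay
    if (delay_frames <= skip_threshold) {
        return read_idx;  // delay within the configured threshold: no skipping
    }
    // Frame skipping: search backwards from the newest written frame for the
    // I frame closest to write_idx and jump the read index to it.
    for (uint64_t idx = ch.write_idx; idx > read_idx; --idx) {
        const std::size_t pos = static_cast<std::size_t>((idx - 1) % kFrameArraySize);
        if (ch.FType[pos] == kIFrame) {
            return idx - 1;
        }
    }
    return read_idx;  // no I frame found in range: keep the current position
}
```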

In this embodiment, when network conditions are poor, frames are dropped quickly so that the live broadcast delay is reduced without mosaic artifacts appearing, which improves the user experience.

When the position corresponding to the frame of data to be read is calculated according to the read index value and the frame data array, the position is determined in the same way as when writing to the cache, and the calculation may specifically include: acquiring the read index value of the frame of data and a second numerical value representing the size of the frame data array; dividing the read index value by the second numerical value and taking the remainder, wherein the remainder is used as the numbered position of the storage area of the memory block from which the frame of data is read; and reading the live broadcast data from the memory block corresponding to the numbered position.

In step S105, after a frame of data has been read from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: S1054, adding one to the read index value so that the next frame of data is read from the memory block corresponding to the virtual file.
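Continuing the sketch, reading one frame mirrors the write path: the read position is read_idx modulo the array size, and read_idx is incremented after the frame is read; empty-slot handling and locking are again omitted.

```cpp
// Reading one frame, continuing the MemChannel sketch above.
std::shared_ptr<Block> ReadFrame(const MemChannel& ch, uint64_t& read_idx) {
    const std::size_t pos = static_cast<std::size_t>(read_idx % kFrameArraySize);
    std::shared_ptr<Block> frame = ch.FData[pos];  // read position mirrors the write position
    ++read_idx;                                    // add one so the next call reads the next frame
    return frame;
}
```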

Referring to fig. 6, in an embodiment, the present application further provides a performance optimization system for a live broadcast cache, including: an application interface layer module 1, a virtual file layer module 2, a memory channel layer module 3, a data writing module 4 and a data reading module 5. The application interface layer module 1 is used for providing a calling interface for the live broadcast application, and the calling interface is used for receiving read-cache or write-cache instructions from the live broadcast application; the virtual file layer module 2 is used for providing a corresponding virtual file for each live channel; the memory channel layer module 3 is configured to allocate a group of memory blocks to each live channel of the live broadcast application, where the memory blocks are configured to store the live broadcast data most recently acquired from the corresponding live channel; the data writing module 4 is used for receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; and the data reading module 5 is used for receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.

In this embodiment, live data is transmitted through the network transport layer from the live channel to the data writing module of the receiving end, for example in HTTP chunked mode: one frame of data is packed into one chunk, so the receiving end knows it has received exactly one frame of data and knows its frame type without re-aligning the frame data or determining the frame type again, which improves the efficiency of the receiving end. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the transmitting end appends this byte when sending the chunk data; the receiving end determines the frame type from this byte when receiving the data and removes the byte when saving the frame to a Block.

Because the network transmission layer transmits the live broadcast data according to the data frame and the frame type, the live broadcast delay can be reduced.

The performance optimization system of the live broadcast cache of the embodiment provides a uniform calling interface, is convenient for an application layer where the live broadcast application is located to use, and improves maintainability and reliability of a live broadcast system, so that concurrency of live broadcast can be improved.

In one embodiment, the data writing module 4 comprises a first data writing unit, a write memory determining unit, and a second data writing unit. The first data writing unit is used for acquiring, from the latest live broadcast data, a frame of data to be written; the write memory determining unit is used for determining the memory block position to which the frame of data to be written should be written; and the second data writing unit is used for writing the frame of data into the corresponding memory block according to the memory block position, and writing each frame of the latest live broadcast data into the memory block in time order.

In one embodiment, the second data writing unit includes a numerical value acquisition subunit, a first calculating subunit, and a live broadcast data writing unit. The numerical value acquisition subunit is configured to acquire the write index value of the frame of data to be written and a first numerical value representing the size of the frame data array; the first calculating subunit is configured to divide the write index value by the first numerical value and use the remainder as the numbered position of the storage area of the memory block into which the frame of data should be written; and the live broadcast data writing unit is configured to write the frame of data into the memory block corresponding to the numbered position.

The data writing module 4 further includes a frame type recording module, which is used for, when the written frame of data is pointed to a new memory block by the frame data array, setting the frame type at the written position by using the frame type array so as to record the frame type of the memory block at that position, releasing the old memory block previously pointed to by the frame data array, and adding one to the write index value.

In one embodiment, the data reading module 5 includes a read memory determining unit and a second data reading unit. The read memory determining unit is used for determining the memory block position to which the frame of data to be read belongs; the second data reading unit is configured to read the frame of data from the corresponding memory block according to the memory block position to which it belongs, and to read each frame of the live broadcast data from the corresponding memory block in time order.

In one embodiment, the read memory determining unit includes a numerical value recording subunit, a second calculating subunit, a judging subunit, and a position determining subunit. The numerical value recording subunit is used for recording the read index value of the frame of data to be read, the initial value of the read index value being zero; the second calculating subunit is used for calculating, for the frame of data indicated by the read index value, the time difference between writing the cache and reading the cache, so as to obtain delay data; the judging subunit is used for judging whether to read the live broadcast data with frame skipping according to the delay data and a preset frame skipping threshold; the position determining subunit is used for determining the position corresponding to the frame of data to be read after frame skipping if the judging subunit judges that frame skipping is required; and the position determining subunit is further configured to determine the position corresponding to the currently read frame of data if the judging subunit judges that frame skipping is not required.

In one embodiment, the position determining subunit is further configured to: acquire the adjacent frame data of the frame closest to the currently read frame data; assign the position corresponding to the adjacent frame data to the read index value; and calculate the position corresponding to the frame of data to be read according to the read index value and the frame data array.

In an embodiment, the data reading module 5 further includes an accumulation unit, configured to add one to the read index value so that the next frame of data is read from the memory block corresponding to the virtual file.

An embodiment of the present application provides an electronic device; please refer to fig. 7. The device includes a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the performance optimization method for the live broadcast cache described above is implemented.

Further, the electronic device further includes: at least one input device 603 and at least one output device 604.

The memory 601, the processor 602, the input device 603, and the output device 604 are connected by a bus 605.

The input device 603 may be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may be embodied as a display screen.

The memory 601 may be a high-speed Random Access Memory (RAM), or a non-volatile memory, such as a disk memory. The memory 601 is used for storing a set of executable program code, and the processor 602 is coupled to the memory 601.

Further, an embodiment of the present application also provides a computer-readable storage medium, which may be disposed in the electronic device in the foregoing embodiments, and the computer-readable storage medium may be the memory 601 in the foregoing. The computer-readable storage medium has stored thereon a computer program which, when executed by the processor 602, implements the performance optimization method of the live cache described in the foregoing embodiments.

Further, the computer-readable storage medium may be any of various media that can store program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a RAM, a magnetic disk, or an optical disk.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.

The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.

The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

For a person skilled in the art, variations may be made to the specific implementation and scope of application according to the idea of the embodiments of the present invention; in summary, the content of this specification should not be construed as limiting the present invention.
