File reading method and system based on mapping technology

Document No.: 1889402    Publication date: 2021-11-26

Reading note: This technology, "File reading method and system based on mapping technology", was designed and created by Chen Shenghao on 2021-08-31. Its main content is as follows: the invention provides a file reading method and system based on a mapping technology. The method comprises: if a data reading instruction is received, acquiring an index corresponding to target data; judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is located in a memory and is used for pre-storing original data, the original data being obtained by performing preset processing on data in a cache file; if the target data exists in the preset mapping area, reading the original data corresponding to the target data from the preset mapping area; and reading the target data according to the original data corresponding to the target data. In the embodiment of the invention, a preset mapping area is established together with a correspondence between the preset mapping area and the data in the cache file, so that when data needs to be read it does not need to be preprocessed and can be looked up directly in the preset mapping area, which saves data reading time.

1. A file reading method based on a mapping technology is characterized by comprising the following steps:

S1, if a data reading instruction for reading target data is received, acquiring an index corresponding to the target data;

S2, judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is an area in a memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

S3, if the target data exists in the preset mapping area, reading original data corresponding to the target data from the preset mapping area;

and S4, performing reverse processing of the preset processing based on the original data corresponding to the target data, thereby reading the target data.

2. The file reading method based on the mapping technology as claimed in claim 1, further comprising:

S21, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is not smaller than the memory space occupied by the target data, performing the preset processing on the target data to obtain original data corresponding to the target data;

S22, calculating a target starting offset according to the starting offset of, and the memory space occupied by, the last data in a preset target queue, wherein the target queue is used for recording an index of each original data in the preset mapping area, a starting offset of each original data, and a memory space occupied by each original data;

and S23, writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target starting offset.

3. The method for reading a file based on a mapping technique according to claim 2, wherein writing the index of the target data, the target data, and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target starting offset includes:

taking the target starting offset as a starting point of the corresponding position;

taking the memory space occupied by the target data as the size of the corresponding position;

and writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position through a write function.

4. The file reading method based on the mapping technology as claimed in claim 2, further comprising:

S31, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is smaller than the memory space occupied by the target data, clearing a plurality of data from the target queue to obtain a cleared target queue;

s32, starting at the position where the preset mapping area offset is 0, rewriting all the original data in the cleared target queue, and repeating steps S21 to S23 until all the original data in the cleared target queue are written in the preset mapping area.

5. The method for reading the file based on the mapping technology according to any one of claims 1 to 4, wherein the total memory space occupied by the preset mapping area is obtained by the following steps:

sending a request instruction to a configuration server to obtain the theoretical size of a preset mapping area responded by the configuration server;

and obtaining the actual size of the preset mapping area according to the theoretical size of the preset mapping area and the size of the preset memory page.

6. The file reading method based on the mapping technology according to any one of claims 1 to 4, further comprising: converting the target data into a base64 character string, and transferring it to an H5 page through a JSBridge for display.

7. The mapping-technology-based file reading method according to any one of claims 1 to 4, wherein the preset mapping area is used for storing original data in advance, and is implemented by:

establishing a mapping relation between the preset mapping area and the cache file through an MMAP function;

and performing the preset processing on the target data in the cache file, and storing the original data corresponding to the target data in the corresponding position of the preset mapping area.

8. A file reading system based on mapping technology, comprising:

a receiving module, used for acquiring an index corresponding to target data if a data reading instruction for reading the target data is received;

a judging module, used for judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is an area in a memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

a processing module, used for reading original data corresponding to the target data from the preset mapping area if the target data exists in the preset mapping area;

and a reading module, used for performing reverse processing of the preset processing based on the original data corresponding to the target data, so as to read the target data.

9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the mapping technique based file reading method according to any one of claims 1 to 7 when executing the computer program.

10. A computer storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the mapping technique-based file reading method according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of computers, in particular to a file reading method and system based on a mapping technology.

Background

In the prior art, when the APP opens and reads file data in the mobile phone file system in a conventional manner, the following two steps are generally required:

1. The operating system kernel finds the index node (inode for short) of the target file in the APP process's file descriptor table, and then reads the file from the hard disk into a memory buffer area according to the file information recorded in the inode.

2. The file data can be operated on only after it has been copied again from the memory buffer area to user space. If the file is a picture that needs to be displayed on a page, additional processing such as decoding and byte alignment is also required before the GPU can render it on the page.
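For reference only, a minimal sketch of this conventional read path on a POSIX system might look like the following (the file path and buffer size are hypothetical); every read repeats the kernel-to-user copy described above:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Conventional path: open() resolves the inode, and read() copies the data
     * from the kernel buffer into this user-space buffer on every call. */
    int fd = open("/data/app/cache/picture.png", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        /* the data is now in user space and can be decoded, aligned, etc. */
    }
    close(fd);
    return 0;
}
```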

Therefore, if the APP frequently loads file data from the file system, the data must repeatedly be read into the buffer area and then copied to user space. This is a very time-consuming process and can become a performance bottleneck for the APP, for example a drop in FPS and visible stutter, which directly degrades the user experience.

Disclosure of Invention

The invention provides a file reading method and a file reading system based on a mapping technology, and mainly aims to save data reading time and effectively improve data processing efficiency.

In a first aspect, an embodiment of the present invention provides a file reading method based on a mapping technique, including:

S1, if a data reading instruction for reading target data is received, acquiring an index corresponding to the target data;

S2, judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is an area in a memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

S3, if the target data exists in the preset mapping area, reading original data corresponding to the target data from the preset mapping area;

and S4, performing reverse processing of the preset processing based on the original data corresponding to the target data, thereby reading the target data.

Preferably, the method further comprises the following steps:

S21, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is not smaller than the memory space occupied by the target data, performing the preset processing on the target data to obtain original data corresponding to the target data;

S22, calculating a target starting offset according to the starting offset of, and the memory space occupied by, the last data in a preset target queue, wherein the target queue is used for recording an index of each original data in the preset mapping area, a starting offset of each original data, and a memory space occupied by each original data;

and S23, writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target starting offset.

Preferably, the writing of the index of the target data, the target data, and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target starting offset includes:

taking the target starting offset as a starting point of the corresponding position;

taking the memory space occupied by the target data as the size of the corresponding position;

and writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position through a write function.

Preferably, the method further comprises the following steps: and writing the index of the target data, the target starting offset and the memory space occupied by the target data into the target queue and the cache file.

Preferably, the method further comprises the following steps:

S31, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is smaller than the memory space occupied by the target data, clearing a plurality of data from the target queue to obtain a cleared target queue;

s32, starting at the position where the preset mapping area offset is 0, rewriting all the original data in the cleared target queue, and repeating steps S21 to S23 until all the original data in the cleared target queue are written in the preset mapping area.

Preferably, the total memory space occupied by the preset mapping area is obtained by the following steps:

sending a request instruction to a configuration server to obtain the theoretical size of a preset mapping area responded by the configuration server;

and obtaining the actual size of the preset mapping area according to the theoretical size of the preset mapping area and the size of the preset memory page.

Preferably, the method further comprises the following steps: converting the target data into a base64 character string, and transferring it to an H5 page through a JSBridge for display.

Preferably, the preset mapping area is used for pre-storing original data, and is implemented by the following method:

establishing a mapping relation between the preset mapping area and the cache file through an MMAP function;

and performing the preset processing on the target data in the cache file, and storing the original data corresponding to the target data in the corresponding position of the preset mapping area.

In a second aspect, an embodiment of the present invention provides a file reading system based on a mapping technique, including:

a receiving module, used for acquiring an index corresponding to target data if a data reading instruction for reading the target data is received;

a judging module, used for judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is an area in a memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

a processing module, used for reading original data corresponding to the target data from the preset mapping area if the target data exists in the preset mapping area;

and a reading module, used for performing reverse processing of the preset processing based on the original data corresponding to the target data, so as to read the target data.

In a third aspect, an embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above-mentioned file reading method based on the mapping technique when executing the computer program.

In a fourth aspect, an embodiment of the present invention provides a computer storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the file reading method based on the mapping technique.

According to the file reading method and system based on the mapping technology, a preset mapping area is established, together with a correspondence between the preset mapping area and the data in the cache file. When data needs to be read, it can be looked up directly in the preset mapping area; the preprocessing performed when the data was first stored in memory is not repeated, which saves data reading time and improves software performance.

Drawings

Fig. 1 is an application scenario diagram of a file reading method based on a mapping technique according to an embodiment of the present invention;

fig. 2 is a flowchart of a file reading method based on a mapping technique according to an embodiment of the present invention;

fig. 3 is a schematic structural diagram of a file reading system based on a mapping technique according to an embodiment of the present invention;

fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Fig. 1 is an application scenario diagram of a file reading method based on a mapping technique according to an embodiment of the present invention, as shown in fig. 1, a client obtains a data reading instruction and sends the data reading instruction to a server, and after receiving the data reading instruction, the server executes the file reading method based on the mapping technique, and finally reads target data.

It should be noted that the server may be implemented by an independent server or a server cluster composed of a plurality of servers. The client may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The client and the server may be connected through bluetooth, USB (Universal Serial Bus), or other communication connection manners, which is not limited in this embodiment of the present invention.

Fig. 2 is a flowchart of a file reading method based on a mapping technique according to an embodiment of the present invention, and as shown in fig. 2, the method includes:

the execution main body of the invention is a server, in particular to certain software in the server.

S1, if a data reading instruction for reading target data is received, acquiring an index corresponding to the target data;

the method comprises the steps that a user triggers software or a webpage to generate a data reading instruction in the process of using certain software or browsing the webpage, the data reading instruction is sent to a server, and after the server receives the data reading instruction, an index corresponding to target data is analyzed according to the data reading instruction.

It should be noted that the target data may be characters, pictures, sound, or the like; the embodiment of the present invention does not specifically limit this, and the type is determined according to actual needs.

The target data may be data already in the cache file, or data being browsed for the first time on a web page. If the target data is in the cache file, its index is generally determined from the path of the target data in the file system; if it is browsed for the first time and no cache file exists, the index is determined from the download path of the target data. Different data have different indexes, that is, the index is unique.

It should be further noted that, because reading data from memory is much faster than reading it from the hard disk, a space is set aside in memory to build a cache. The cache stores data or instructions commonly used by the software, and when they are needed they are read directly from the cache, which is much faster than reading them from the hard disk again.

It should also be noted that an index is a separate physical storage structure that sorts the values of one or more columns in a database table; it is a collection of values from one or more columns together with a list of logical pointers to the data pages that physically identify those values in the table. An index works like the table of contents of a book: the required content can be found quickly by its page number. In the embodiment of the invention, an MD5 value is calculated from the path of the data in the file system, and this value is used as the index of the data.
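Purely as an illustration, a sketch of deriving such an index from a file path, assuming OpenSSL's legacy MD5 interface is available (the path shown is hypothetical):

```c
#include <openssl/md5.h>   /* legacy MD5() interface; link with -lcrypto */
#include <stdio.h>
#include <string.h>

/* Compute the MD5 of a file-system path and format it as a hex string,
 * which serves as the unique index of the corresponding data. */
static void path_to_index(const char *path, char out_hex[MD5_DIGEST_LENGTH * 2 + 1]) {
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5((const unsigned char *)path, strlen(path), digest);
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out_hex + 2 * i, "%02x", digest[i]);
}

int main(void) {
    char index[MD5_DIGEST_LENGTH * 2 + 1];
    path_to_index("/data/app/cache/picture.png", index);  /* hypothetical path */
    printf("index = %s\n", index);
    return 0;
}
```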

S2, judging, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, wherein the preset mapping area is an area in a memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

A search is performed in the preset mapping area according to the index corresponding to the target data. Specifically, all the indexes stored in the preset mapping area are searched to see whether the index corresponding to the target data can be found; if it can, the original data corresponding to the target data is recorded in the preset mapping area.

Specifically, the preset mapping area is an area in memory. When the software or browser is opened and run, initialization is performed first: the system allocates a memory area to serve as the preset mapping area of the cache file. The preset mapping area synchronously records the data in the cache file, so when data in the cache file needs to be read, the corresponding data can be looked up and read in the preset mapping area, without copying the data in the cache file to the buffer area again.
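A minimal sketch of such a lookup over the records that mirror the mapping area (the record layout and names here are assumptions for illustration, not the patent's format):

```c
#include <stddef.h>
#include <string.h>

/* One record of the target queue: the index of a piece of original data,
 * its starting offset inside the preset mapping area, and the memory it occupies. */
typedef struct {
    char   index[33];       /* MD5 hex string plus terminating '\0' */
    size_t start_offset;    /* starting offset inside the mapping area */
    size_t size;            /* memory space occupied by the original data */
} QueueEntry;

/* Return a pointer to the entry whose index matches, or NULL if the
 * target data is not present in the preset mapping area. */
static const QueueEntry *find_entry(const QueueEntry *queue, size_t count,
                                    const char *target_index) {
    for (size_t i = 0; i < count; i++)
        if (strcmp(queue[i].index, target_index) == 0)
            return &queue[i];
    return NULL;
}
```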

When data is stored in the preset mapping area for the first time it may require preset processing. If the data is a picture, it must first be decoded and byte-aligned and then stored in the preset mapping area; in that case the original data stored in the preset mapping area is the decoded and byte-aligned picture, and the preset processing is the decoding and byte-alignment operation. If the data is numeric, it can be stored in the preset mapping area directly, without additional encoding, decoding or byte alignment; in that case the original data is the numeric data itself, i.e. the original data is the same as the target data, and the preset processing is a no-op.

S3, if the target data exists in the preset mapping area, reading original data corresponding to the target data from the preset mapping area;

If the original data corresponding to the target data can be found in the preset mapping area, it is read directly from the preset mapping area. There is no need, as in the prior art, to perform preset processing such as encoding/decoding or byte alignment on the target data, store the processed data, and then read it back out. This saves the time spent on preset processing, shortens the data reading time, and improves data reading efficiency.

And S4, performing reverse processing of the preset processing based on the original data corresponding to the target data, thereby reading the target data.

Then, the reverse of the preset processing is performed on the original data corresponding to the target data. For example, if a picture had to be compressed and encoded before being stored in memory, the original data corresponding to the target data is read from the preset mapping area and then decoded and decompressed, which restores the target data; any subsequent processing can then be performed as needed.

The embodiment of the invention provides a file reading method based on a mapping technology. By establishing a preset mapping area and a correspondence between the preset mapping area and the data in the cache file, data can be looked up directly in the preset mapping area when it needs to be read, and the preprocessing performed when the data was first stored in memory is not repeated. This shortens the data reading time, improves data reading efficiency, and improves software performance.

On the basis of the above embodiment, it is preferable to further include:

S21, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is not smaller than the memory space occupied by the target data, performing the preset processing on the target data to obtain original data corresponding to the target data;

If the target data is being browsed on a webpage for the first time, it is certainly not yet recorded in the preset mapping area, that is, no index matching the index of the target data can be found there. If, by calculation, the remaining space of the preset mapping area is not smaller than the memory space occupied by the target data, there is enough room in the preset mapping area to write the target data, and in this case the target data needs to be written into the preset mapping area.

First, whether there is enough space in the preset mapping area to write the target data is determined by comparing the remaining space of the preset mapping area with the memory space occupied by the target data.

The remaining memory space of the preset mapping area equals the memory space occupied by the preset mapping area minus the space occupied by all the data already in it. The memory space occupied by the preset mapping area can be regarded as preset, and the space occupied by all the data in it is the sum of the space occupied by each piece of data; subtracting that sum gives the remaining memory space of the preset mapping area.

When the software first runs and initializes, a target queue is established to record all the data in the preset mapping area. Each piece of data corresponds to a MAP entry of the target queue; the MAP records the index of the data, its starting offset in the preset mapping area, and the memory space it occupies. The starting offset can be regarded as the storage position of the data in the preset mapping area.

If the remaining space of the preset mapping area is judged to be greater than or equal to the memory space occupied by the target data, the subsequent steps are performed.

In a specific implementation, when the target data is a picture, the memory space occupied by the target picture is calculated as follows: the target picture is decoded into a bitmap and aligned to 64 bytes during decoding; the number of pixels per row of the picture multiplied by the number of bytes per pixel gives the number of bytes occupied by each row of pixels; the least common multiple of this product and 64 is then calculated and multiplied by the height of the picture; finally, the result is divided by the size of a preset memory page and rounded upwards, which gives the memory space occupied by the picture.
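A sketch of this calculation as literally described above (the RGBA format and 4096-byte page size in the example are assumptions):

```c
#include <stdio.h>

static size_t gcd(size_t a, size_t b) {
    while (b) { size_t t = a % b; a = b; b = t; }
    return a;
}

static size_t lcm(size_t a, size_t b) { return a / gcd(a, b) * b; }

/* Memory footprint of a decoded picture, following the text above:
 * per-row bytes -> least common multiple with the 64-byte alignment ->
 * times height -> divided by the page size, rounded up (result in pages). */
static size_t picture_pages(size_t width_px, size_t bytes_per_pixel,
                            size_t height_px, size_t page_size) {
    size_t row_bytes   = width_px * bytes_per_pixel;
    size_t aligned_row = lcm(row_bytes, 64);
    size_t total_bytes = aligned_row * height_px;
    return (total_bytes + page_size - 1) / page_size;   /* round up */
}

int main(void) {
    /* e.g. a 1080x1920 RGBA bitmap with an assumed 4096-byte page */
    printf("%zu pages\n", picture_pages(1080, 4, 1920, 4096));
    return 0;
}
```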

S22, calculating a target starting offset according to the starting offset of, and the memory space occupied by, the last data in a preset target queue, wherein the target queue is used for recording an index of each original data in the preset mapping area, a starting offset of each original data, and a memory space occupied by each original data;

The starting offset of the last piece of data in the target queue (i.e. the entry arranged at the last position) and the size of the memory space it occupies are used; here "memory" refers to the memory space occupied by that data. Adding the starting offset of the last data to the memory space it occupies gives the target starting offset of the target data in the preset mapping area, that is, the position at which the target data should be stored, namely its starting position in the preset mapping area.
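Expressed as code, using the same hypothetical record layout sketched earlier:

```c
#include <stddef.h>

typedef struct {
    char   index[33];      /* index of the original data */
    size_t start_offset;   /* starting offset of the data in the mapping area */
    size_t size;           /* memory space occupied by the data */
} QueueEntry;

/* The target starting offset is simply: starting offset of the last queued
 * entry plus the memory space that entry occupies (0 when the queue is empty). */
static size_t target_start_offset(const QueueEntry *queue, size_t count) {
    if (count == 0)
        return 0;
    return queue[count - 1].start_offset + queue[count - 1].size;
}
```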

And S23, writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target starting offset.

Then, according to the target starting offset, the index of the target data, the target data itself, and the memory space occupied by the target data are written into the corresponding position of the preset mapping area, so that the target data can subsequently be read directly from the preset mapping area, which saves time.

By writing the target data into the target queue, the embodiment of the invention keeps the target queue consistent with the preset mapping area, and the records in the target queue shorten the time needed to search for the target data in the preset mapping area.

On the basis of the foregoing embodiment, preferably, the writing, according to the target starting offset, of the index of the target data, the target data, and the memory space occupied by the target data into the corresponding position of the preset mapping area includes:

taking the target starting offset as a starting point of the corresponding position;

taking the memory space occupied by the target data as the size of the corresponding position;

and writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position through a write function.

In the embodiment of the invention, the index of the target data, the target data, and the memory space occupied by the target data are written into the corresponding position in the memory space through a write() function.
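As a sketch only: since the preset mapping area is memory mapped, one way to realize this write is to copy the record into the mapped region at the target starting offset (the flat record layout below is an assumption for illustration, not the patent's format):

```c
#include <stdint.h>
#include <string.h>

/* Write one record (index, occupied size, payload) into the mapped region at
 * the target starting offset. 'map_base' is the address returned by mmap(). */
static size_t write_entry(unsigned char *map_base, size_t target_offset,
                          const char index[33],
                          const void *payload, uint64_t payload_size) {
    unsigned char *dst = map_base + target_offset;
    memcpy(dst, index, 33);                                /* index */
    memcpy(dst + 33, &payload_size, sizeof payload_size);  /* occupied size */
    memcpy(dst + 33 + sizeof payload_size, payload,
           (size_t)payload_size);                          /* the data itself */
    return 33 + sizeof payload_size + (size_t)payload_size; /* bytes written */
}
```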

On the basis of the above embodiment, it is preferable to further include: and writing the index of the target data, the target starting offset and the memory space occupied by the target data into the target queue and the cache file.

After the target data is written into the preset mapping area, the index of the target data, its target starting offset, and the memory space it occupies must also be written into the target queue, so as to keep the target queue consistent with the preset mapping area; the records in the target queue also shorten the time needed to search for the target data in the preset mapping area.

Meanwhile, the target data is also written into the cache file. The cache file resides in the file system and persists whether the software is open or closed, whereas the preset mapping area resides in memory and disappears when the software is closed. The next time the software is opened, the preset mapping area is re-established in memory during initialization and the data in the cache file is copied into it, so that subsequent reads of the cache file can again avoid the preprocessing operation.

On the basis of the above embodiment, it is preferable to further include:

S31, if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is smaller than the memory space occupied by the target data, clearing a plurality of data from the target queue to obtain a cleared target queue;

specifically, if the index corresponding to the target data cannot be found in the indexes of the preset mapping area, that is, the target data does not exist in the preset mapping area, and the remaining space of the preset mapping area is smaller than the memory space occupied by the target data, which indicates that the remaining space of the preset mapping area is not enough to write the target data, the preset mapping area needs to be cleared.

In a specific operation, data in the first half of the queue is cleared first; in the embodiment of the present invention, the data in the first 50% of the queue is cleared, and the cleared target queue is thereby obtained.

S32, starting at the position where the offset of the preset mapping area is 0, rewriting all the original data in the cleared target queue, and repeating steps S21 to S23 until all the original data in the cleared target queue are written into the preset mapping area.

All the data in the cleared target queue is then rewritten starting from offset 0 of the preset mapping area. As before, the index of each piece of data, the data itself, and its starting offset are written into the preset mapping area, and steps S21 to S23 are repeated until all the data in the cleared target queue has been written into the preset mapping area.
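A sketch of this eviction and rewrite, assuming the 50% clearing policy described above (the record layout and helper names are hypothetical):

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    char        index[33];
    size_t      start_offset;
    size_t      size;
    const void *payload;     /* illustration only: pointer to the original data */
} QueueEntry;

/* Clear the first 50% of the queue, then rewrite the surviving entries into the
 * mapped region starting at offset 0, updating their starting offsets. */
static size_t evict_and_rewrite(unsigned char *map_base,
                                QueueEntry *queue, size_t *count) {
    size_t cleared = *count / 2;                    /* entries dropped from the front */
    size_t keep    = *count - cleared;
    memmove(queue, queue + cleared, keep * sizeof *queue);
    *count = keep;

    size_t offset = 0;                              /* rewrite from offset 0 */
    for (size_t i = 0; i < keep; i++) {
        queue[i].start_offset = offset;
        memcpy(map_base + offset, queue[i].payload, queue[i].size);
        offset += queue[i].size;
    }
    return offset;                                  /* new used size of the area */
}
```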

On the basis of the foregoing embodiment, preferably, the total memory space occupied by the preset mapping area is obtained by:

sending a request instruction to a configuration server to obtain the theoretical size of a preset mapping area responded by the configuration server;

A request instruction is sent to the configuration server, and the upper limit value of the size of the preset mapping area returned by the configuration server is received. This upper limit value is the theoretical size of the preset mapping area; it is preset in the configuration server and generally does not exceed 10 MB.

And obtaining the actual size of the preset mapping area according to the theoretical size of the preset mapping area and the size of the preset memory page.

After receiving the upper limit value of the size of the preset mapping area, the user terminal divides it by the size of the preset memory page and rounds the result down; the resulting whole number of pages, multiplied by the page size, is used as the actual size of the preset mapping area.

The actual size of the preset mapping area is calculated in this way because the minimum storage granularity of memory is a page and the MMAP mapping area established later also reads and writes data in units of pages, so the actual size of the preset mapping area must be an integer multiple of the memory page size.
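A minimal sketch of this size calculation, assuming a POSIX system where the page size is queried with sysconf(); the 5 MB theoretical size is only an example:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    size_t theoretical = 5 * 1024 * 1024;            /* e.g. value from the config server */
    size_t page_size   = (size_t)sysconf(_SC_PAGESIZE);

    /* Round down to a whole number of pages so the mapping area
     * is an integer multiple of the memory page size. */
    size_t pages  = theoretical / page_size;
    size_t actual = pages * page_size;

    printf("page=%zu actual=%zu bytes (%zu pages)\n", page_size, actual, pages);
    return 0;
}
```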

On the basis of the above embodiment, it is preferable to further include: converting the target data into a base64 character string, and transferring it to an H5 page through a JSBridge for display.

In a specific implementation, if the terminal page or the browsed webpage is developed in the Native mode, Native refers to the pages and logic written in the native programming languages of iOS and Android (Objective-C, Swift and Java) inside the software. These consist of two parts: the page, which is the view-display code, and the logic, which comprises the code for data calculation, business requirements, and so on. If the terminal page or the browsed webpage is developed in the H5 mode, H5 refers to the HTML pages and JavaScript code logic carried by a WebView inside the software; in this case a JSBridge needs to be established in the software, and H5 communicates with Native through the JSBridge.

If the H5 page needs to display the target data, Native acquires the target data by the file reading method based on the mapping technology, converts the acquired target data into a base64 character string, and transmits the base64 character string to the H5 page through the JSBridge for display.
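For illustration, a minimal base64 encoder such as the following could produce the character string that is handed to the H5 page; the actual bridge call is platform specific and not shown:

```c
#include <stdlib.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode 'len' bytes into a newly allocated, NUL-terminated base64 string. */
char *base64_encode(const unsigned char *src, size_t len) {
    size_t olen = 4 * ((len + 2) / 3);
    char *out = malloc(olen + 1);
    if (!out) return NULL;

    size_t i = 0, j = 0;
    for (; i + 2 < len; i += 3) {              /* full 3-byte groups */
        unsigned v = (src[i] << 16) | (src[i + 1] << 8) | src[i + 2];
        out[j++] = B64[(v >> 18) & 63];
        out[j++] = B64[(v >> 12) & 63];
        out[j++] = B64[(v >> 6) & 63];
        out[j++] = B64[v & 63];
    }
    if (i < len) {                             /* 1 or 2 trailing bytes, with padding */
        unsigned v = src[i] << 16;
        if (i + 1 < len) v |= src[i + 1] << 8;
        out[j++] = B64[(v >> 18) & 63];
        out[j++] = B64[(v >> 12) & 63];
        out[j++] = (i + 1 < len) ? B64[(v >> 6) & 63] : '=';
        out[j++] = '=';
    }
    out[j] = '\0';
    return out;
}
```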

On the basis of the foregoing embodiment, preferably, the preset mapping area is used for storing original data in advance, and is implemented by the following manner:

establishing a mapping relation between the preset mapping area and the cache file through an MMAP function;

and performing the preset processing on the target data in the cache file, and storing the original data corresponding to the target data in the corresponding position of the preset mapping area.

Specifically, an actual mapping relationship between the preset mapping region and the cache file is established through the MMAP function.

The MMAP function is a method of memory-mapping a file: a file or other object is mapped into the address space of a process, establishing a one-to-one correspondence between the file's disk address and a segment of virtual addresses in the process's virtual address space. Once this mapping is established, the process can read and write the memory segment through pointers, and the system automatically writes dirty pages back to the corresponding file on disk; in other words, the file can be operated on without calling system functions such as read and write. Conversely, modifications made by kernel space to this region are directly reflected in user space, so the file can also be shared between different processes.
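A minimal sketch of establishing such a mapping between a cache file and the preset mapping area on a POSIX system (the file name and 5 MB size are assumptions):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t size = (5 * 1024 * 1024 / page) * page;      /* whole pages only */

    int fd = open("cache.bin", O_RDWR | O_CREAT, 0644); /* hypothetical cache file */
    if (fd < 0 || ftruncate(fd, (off_t)size) != 0) { perror("cache"); return 1; }

    /* Map the cache file into memory: this mapped region plays the role
     * of the preset mapping area described in the text. */
    unsigned char *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    if (area == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(area, "hello", 5);       /* writes go straight to the mapped region ... */
    msync(area, size, MS_SYNC);     /* ... and can be flushed back to the file */

    munmap(area, size);
    close(fd);
    return 0;
}
```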

A preferred embodiment of the present invention provides a file reading method based on a mapping technique, including the steps of:

(1) A request instruction is sent to the configuration server and the theoretical size of the preset mapping area returned by the configuration server, for example 5 MB, is acquired; the theoretical size of the preset mapping area is divided by the size of a preset memory page and the result is rounded down to obtain the actual size of the preset mapping area.

(2) And establishing a mapping relation between a preset mapping area and the cache file through an MMAP function, performing preset processing on target data in the cache file, and storing original data corresponding to the target data in a corresponding position of the preset mapping area.

(3) A target queue and a cache file are created, both of which are used to record all the data in the preset mapping area. Each piece of data corresponds to a MAP entry of the target queue, and the MAP records the index of the data, its starting offset in the preset mapping area, and the memory space it occupies. These steps constitute initialization.

(4) And if a data reading instruction is received, acquiring an index corresponding to the target data, and judging whether the preset mapping area has the target data or not according to the index corresponding to the target data.

(5) And if the preset mapping area has the target data, reading original data corresponding to the target data from the preset mapping area, and reading the target data according to the original data corresponding to the target data.

(6) If the target data does not exist in the preset mapping area and the residual space of the preset mapping area is not smaller than the memory space occupied by the target data, performing preset processing on the target data to obtain original data corresponding to the target data; acquiring a target initial offset according to the initial offset of the last data in the target queue and the size of the memory space occupied by the last data; and writing the index of the target data, the target data and the memory space occupied by the target data into the corresponding position of the preset mapping area according to the target initial offset.

(7) If the target data does not exist in the preset mapping area and the remaining space of the preset mapping area is smaller than the memory space occupied by the target data, a plurality of data in the target queue is cleared to obtain the cleared target queue; then, starting from offset 0 of the preset mapping area, all the original data in the cleared target queue is rewritten, and step (6) is repeated until all the original data in the cleared target queue has been written into the preset mapping area.

(8) If the H5 page needs to display the target data, Native acquires the target data by the file reading method based on the mapping technology, converts the acquired target data into a base64 character string, and transmits the base64 character string to the H5 page through the JSBridge for display.

To sum up, the embodiment of the present invention provides a data reading method based on a mapping technique, which has the following beneficial effects:

(1) By establishing the preset mapping area and the correspondence between the preset mapping area and the data in the cache file, data can be looked up directly in the preset mapping area when it needs to be read; the preprocessing performed when the data was first stored in memory is not repeated, which saves data reading time and improves software performance.

(2) Writing the target data into the target queue keeps the target queue consistent with the preset mapping area, and the records in the target queue shorten the time needed to search for the target data in the preset mapping area. The target data is also written into the cache file: the cache file resides in the file system and persists whether the software is open or closed, whereas the preset mapping area resides in memory and disappears when the software is closed. The next time the software is opened, the preset mapping area is re-established in memory during initialization and the data in the cache file is copied into it, so subsequent reads of the cache file can avoid the preprocessing operation.

(3) If the target data is to be displayed on an H5 page, it is delivered to the H5 page through the JSBridge. H5 and Native can thus share and use the cached data quickly, and H5 does not need to implement its own caching function.

Fig. 3 is a schematic structural diagram of a file reading system based on a mapping technique according to an embodiment of the present invention, and as shown in fig. 3, the system includes a receiving module 310, a determining module 320, a processing module 330, and a reading module 340, where:

the receiving module 310 is configured to, if a data reading instruction for reading target data is received, obtain an index corresponding to the target data;

the determining module 320 is configured to determine, according to the index corresponding to the target data, whether the target data exists in a preset mapping area, where the preset mapping area is an area in memory used for pre-storing original data obtained by performing preset processing on data in a preset cache file;

the processing module 330 is configured to, if the target data exists in the preset mapping area, read original data corresponding to the target data from the preset mapping area;

the reading module 340 is configured to perform inverse processing of the preset processing based on original data corresponding to the target data, so as to read the target data.

On the basis of the above embodiment, it is preferable to further include: a first judging module, an offset calculation module and a first mapping module, wherein:

the first judging module is used for carrying out the preset processing on the target data to acquire original data corresponding to the target data if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is not smaller than the memory space occupied by the target data;

the offset calculation module is used for calculating a target starting offset according to the starting offset of, and the memory space occupied by, the data arranged at the last position in a preset target queue, wherein the target queue is used for recording an index of each original data in the preset mapping area, the starting offset of each original data, and the memory space occupied by each original data;

the first mapping module is used for writing the target data into the corresponding position of the preset mapping area according to the target starting offset.

On the basis of the above embodiment, it is preferable to further include: an indexing module, wherein:

the index module is used for writing the index of the target data, the target starting offset and the memory space occupied by the target data into the target queue and the cache file.

On the basis of the above embodiment, it is preferable to further include: a second judging module and a second mapping module, wherein:

the second judging module is used for clearing a plurality of data in the target queue and acquiring the cleared target queue if the target data does not exist in the preset mapping area and the residual space of the preset mapping area is smaller than the memory space occupied by the target data;

and the second mapping module is used for rewriting all the original data in the cleared target queue at the position where the offset of the preset mapping area is 0, and repeating until all the original data in the cleared target queue are written into the preset mapping area.

On the basis of the foregoing embodiment, preferably, in the determining module, the total memory space occupied by the preset mapping area is obtained by:

sending a request instruction to a configuration server to obtain the theoretical size of a preset mapping area responded by the configuration server;

and obtaining the actual size of the preset mapping area according to the theoretical size of the preset mapping area and the size of the preset memory page.

On the basis of the above embodiment, it is preferable to further include: a conversion presentation module, wherein:

and the conversion display module is used for converting the target data into a base64 character string and transmitting the string to an H5 page through a JSBridge for display.

On the basis of the foregoing embodiment, preferably, in the determination module, the preset mapping area is used for storing original data in advance, and is implemented by the following manner:

establishing a mapping relation between the preset mapping area and the cache file through an MMAP function;

and performing the preset processing on the target data in the cache file, and storing the original data corresponding to the target data in the corresponding position of the preset mapping area.

This embodiment is a system embodiment corresponding to the file reading method embodiment based on the mapping technology, and an implementation process thereof is consistent with the method embodiment described above, and please refer to the method embodiment for details, which is not described herein again.

According to the file reading system based on the mapping technology, a preset mapping area is established, together with a correspondence between the preset mapping area and the data in the cache file. When data needs to be read, it can be looked up directly in the preset mapping area; the preprocessing performed when the data was first stored in memory is not repeated, which saves data reading time and improves software performance.

The modules in the above-described mapping-based file reading system may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.

In an embodiment, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention, where the computer device may be a server, and its internal structural diagram may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a computer storage medium and an internal memory. The computer storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the computer storage media. The database of the computer device is used for storing data generated or acquired during the execution of the file reading method based on the mapping technology. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a file reading method based on a mapping technique.

In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the steps of the mapping technology-based file reading method in the above embodiments are implemented. Alternatively, the processor, when executing the computer program, implements the functions of the modules/units in this embodiment of the mapping-based file reading system.

In an embodiment, a computer storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the mapping-technology-based file reading method in the above-described embodiments. Alternatively, the computer program realizes the functions of the modules/units in the embodiment of the file reading system based on the mapping technique described above when executed by the processor.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
