Data caching method and device

Document No.: 1543956  Publication date: 2020-01-17

Abstract: This invention, "Data caching method and device" (一种数据缓存方法及装置), was designed and created by Yang Fangfang, Miao Yu, He Kun and Ye Xiaohu on 2019-09-26. The invention provides a data caching method and device, which are used to solve the problem of low data caching efficiency in the prior art. The method comprises the following steps: acquiring a first reading cost indicating the time required to read a piece of data from a disk, and a second reading cost indicating the time required to read a piece of data from a preset memory; acquiring the first space occupation amount and the number of accesses of each of a plurality of disk files stored on the disk, where the first space occupation amount represents the size of the storage space occupied by the hot spot data in a disk file; determining the access cost of each disk file according to the first reading cost, the second reading cost, and the first space occupation amount and number of accesses of each disk file; and, according to the first space occupation amount and access cost of each disk file, determining the disk files to be cached among the plurality of disk files and caching the hot spot data in those files to the preset memory.

1. A method for caching data, comprising:

acquiring a first reading cost for indicating the time length required for reading a piece of data from a disk and a second reading cost for indicating the time length required for reading a piece of data from a preset memory;

acquiring a first space occupation amount and access times of each disk file in a plurality of disk files stored on the disk; the first space occupation amount is used for representing the size of a storage space occupied by hot spot data in a disk file, and the hot spot data is data of which the read times are more than or equal to a first preset threshold value in the disk file;

determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount and the access times of each disk file;

and determining a disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, and caching the hot spot data in the disk file to be cached to the preset memory.

2. The method of claim 1, wherein determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount of each disk file and the access times comprises:

determining a first access cost of the first disk file according to the first reading cost, the first space occupation amount and the access times of the first disk file; the first disk file is any one of the multiple disk files, and the first access cost is used for representing the time length required for reading the hotspot data in the first disk file from the disk for multiple times;

determining a second access cost of the first disk file according to the first reading cost, the first writing cost, the second reading cost, the first space occupation amount of the first disk file and the access times; the first write cost is used for indicating the time length required for writing a piece of data into the preset memory, and the second access cost is used for representing the time length required for reading the hot data in the first disk file from the preset memory for multiple times;

and determining the difference value between the first access cost and the second access cost as the access cost of the first disk file.

3. The method of claim 2,

the first access cost is determined by:

C_uc(T_i) = Hr × RSize(T_i) × m(T_i);

RSize(T_i) = Count(T_i) × Size(T_i);

wherein C_uc(T_i) represents the first access cost, T_i represents the first disk file, Hr represents the time required to read a piece of data from the disk, RSize(T_i) is the first space occupation amount of the first disk file, m(T_i) is the number of accesses to the first disk file, Count(T_i) is the number of pieces of hot spot data in the first disk file, and Size(T_i) is the average space occupied by each piece of data in the first disk file;

the second access cost is determined by:

C_c(T_i) = C_w(T_i) + m(T_i) × C_r(T_i);

C_w(T_i) = Hr × RSize(T_i) + CPU_w × RSize(T_i);

C_r(T_i) = CPU_r × RSize(T_i);

wherein C_c(T_i) represents the second access cost, C_w(T_i) represents the time required to read the hot spot data in the first disk file from the disk and write it into the preset memory, C_r(T_i) represents the time required to read the hot spot data in the first disk file from the preset memory, CPU_w represents the time required to write a piece of data into the preset memory, and CPU_r represents the time required to read a piece of data from the preset memory.

4. The method according to any one of claims 1 to 3, wherein the disk stores N disk files, each of which has a first space occupation amount smaller than the current available space size of the preset memory;

determining the disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, including:

determining K disk files in the N disk files as the disk files to be cached;

the sum of the occupied first space of the K disk files is smaller than the size of the current available space of the preset memory, and the access cost of any one disk file in the K disk files is larger than that of any one disk file in the N disk files except the K disk files.

5. The method according to any one of claims 1 to 3, wherein the determining the disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file comprises:

determining an optimal disk file set according to the first space occupation amount and the access cost of each of the plurality of disk files, wherein the optimal disk file set is the set whose sum of access costs is the largest among a plurality of candidate disk file sets, each candidate set consists of at least one of the plurality of disk files, and the sum of the first space occupation amounts of the disk files in the optimal disk file set is smaller than or equal to the current available space size of the preset memory;

and determining the disk files contained in the optimal disk file set as the disk files to be cached.

6. A data caching apparatus, comprising:

the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a first reading cost used for indicating the time length required for reading a piece of data from a disk and a second reading cost used for indicating the time length required for reading a piece of data from a preset memory; acquiring a first space occupation amount and access times of each disk file in a plurality of disk files stored on the disk; the first space occupation amount is used for representing the size of a storage space occupied by hot spot data in a disk file, and the hot spot data is data of which the read times are more than or equal to a first preset threshold value in the disk file;

the determining module is used for determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount and the access times of each disk file;

and the caching module is used for determining a disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, and caching the hot spot data in the disk file to be cached to the preset memory.

7. The apparatus of claim 6, wherein the determination module is specifically configured to:

determining a first access cost of the first disk file according to the first reading cost, the first space occupation amount and the access times of the first disk file; the first disk file is any one of the multiple disk files, and the first access cost is used for representing the time length required for reading the hotspot data in the first disk file from the disk for multiple times;

determining a second access cost of the first disk file according to the first reading cost, the first writing cost, the second reading cost, the first space occupation amount of the first disk file and the access times; the first write cost is used for indicating the time length required for writing a piece of data into the preset memory, and the second access cost is used for representing the time length required for reading the hot data in the first disk file from the preset memory for multiple times;

and determining the difference value between the first access cost and the second access cost as the access cost of the first disk file.

8. The apparatus of claim 7, wherein the determining module is specifically configured to determine the first access cost by:

C_uc(T_i) = Hr × RSize(T_i) × m(T_i);

RSize(T_i) = Count(T_i) × Size(T_i);

wherein C_uc(T_i) represents the first access cost, T_i represents the first disk file, Hr represents the time required to read a piece of data from the disk, RSize(T_i) is the first space occupation amount of the first disk file, m(T_i) is the number of accesses to the first disk file, Count(T_i) is the number of pieces of hot spot data in the first disk file, and Size(T_i) is the average space occupied by each piece of data in the first disk file;

the determining module is specifically configured to determine the second access cost by:

C_c(T_i) = C_w(T_i) + m(T_i) × C_r(T_i);

C_w(T_i) = Hr × RSize(T_i) + CPU_w × RSize(T_i);

C_r(T_i) = CPU_r × RSize(T_i);

wherein C_c(T_i) represents the second access cost, C_w(T_i) represents the time required to read the hot spot data in the first disk file from the disk and write it into the preset memory, C_r(T_i) represents the time required to read the hot spot data in the first disk file from the preset memory, CPU_w represents the time required to write a piece of data into the preset memory, and CPU_r represents the time required to read a piece of data from the preset memory.

9. The apparatus according to any one of claims 6 to 8, wherein the disk stores N disk files, each of which has a first space occupation amount smaller than the current available space size of the preset memory;

the cache module is specifically configured to:

determining K disk files in the N disk files as the disk files to be cached;

the sum of the occupied first space of the K disk files is smaller than the size of the current available space of the preset memory, and the access cost of any one disk file in the K disk files is larger than that of any one disk file in the N disk files except the K disk files.

10. The apparatus according to any one of claims 6 to 8, wherein the cache module is specifically configured to:

determining an optimal disk file set according to the first space occupation amount and the access cost of each of the plurality of disk files, wherein the optimal disk file set is the set whose sum of access costs is the largest among a plurality of candidate disk file sets, each candidate set consists of at least one of the plurality of disk files, and the sum of the first space occupation amounts of the disk files in the optimal disk file set is smaller than or equal to the current available space size of the preset memory;

and determining the disk files contained in the optimal disk file set as the disk files to be cached.

11. A data caching apparatus, comprising:

a memory and a processor;

a memory for storing program instructions;

a processor for calling the program instructions stored in the memory and executing the method of any one of claims 1 to 5 according to the obtained program.

12. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 5.

Technical Field

The present invention relates to the field of data processing, and in particular, to a data caching method and apparatus.

Background

A large number of systems or devices generally store data on a disk, but with the development of the internet era and the increasing data, the traditional way of accessing the disk to obtain data has difficulty in meeting the processing speed requirements of users for the systems or devices.

At present, to meet users' processing-speed requirements, developers typically extract some data from the disk manually and cache it in memory. However, this approach demands much of the developer, who must analyze the value of the data and know the size of the memory; it is error-prone and does not help improve the efficiency of data caching.

Disclosure of Invention

The invention provides a data caching method and device, which are used for solving the problem of low data caching efficiency in the prior art.

In a first aspect, an embodiment of the present invention provides a data caching method, including:

acquiring a first reading cost for indicating the time length required for reading a piece of data from a disk and a second reading cost for indicating the time length required for reading a piece of data from a preset memory;

acquiring a first space occupation amount and access times of each disk file in a plurality of disk files stored on the disk; the first space occupation amount is used for representing the size of a storage space occupied by hot spot data in a disk file, and the hot spot data is data of which the read times are more than or equal to a first preset threshold value in the disk file;

determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount and the access times of each disk file;

and determining a disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, and caching the hot spot data in the disk file to be cached to the preset memory.

In an optional implementation manner, the determining, according to the first reading cost, the second reading cost, the first space occupation amount of each disk file, and the number of accesses, the access cost of each disk file includes:

determining a first access cost of the first disk file according to the first reading cost, the first space occupation amount and the access times of the first disk file; the first disk file is any one of the multiple disk files, and the first access cost is used for representing the time length required for reading the hotspot data in the first disk file from the disk for multiple times;

determining a second access cost of the first disk file according to the first reading cost, the first writing cost, the second reading cost, the first space occupation amount of the first disk file and the access times; the first write cost is used for indicating the time length required for writing a piece of data into the preset memory, and the second access cost is used for representing the time length required for reading the hot data in the first disk file from the preset memory for multiple times;

and determining the difference value between the first access cost and the second access cost as the access cost of the first disk file.

In an alternative implementation form of the present invention,

the first access cost is determined by:

C_uc(T_i) = Hr × RSize(T_i) × m(T_i);

RSize(T_i) = Count(T_i) × Size(T_i);

wherein C_uc(T_i) represents the first access cost, T_i represents the first disk file, Hr represents the time required to read a piece of data from the disk, RSize(T_i) is the first space occupation amount of the first disk file, m(T_i) is the number of accesses to the first disk file, Count(T_i) is the number of pieces of hot spot data in the first disk file, and Size(T_i) is the average space occupied by each piece of data in the first disk file;

the second access cost is determined by:

C_c(T_i) = C_w(T_i) + m(T_i) × C_r(T_i);

C_w(T_i) = Hr × RSize(T_i) + CPU_w × RSize(T_i);

C_r(T_i) = CPU_r × RSize(T_i);

wherein C_c(T_i) represents the second access cost, C_w(T_i) represents the time required to read the hot spot data in the first disk file from the disk and write it into the preset memory, C_r(T_i) represents the time required to read the hot spot data in the first disk file from the preset memory, CPU_w represents the time required to write a piece of data into the preset memory, and CPU_r represents the time required to read a piece of data from the preset memory.
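The cost formulas above can be collected into a short sketch. The following is an illustrative Python rendering; the function and parameter names are assumptions for illustration, not taken from any implementation in this disclosure:

```python
# Illustrative sketch of the access-cost formulas; all names are assumptions.

def access_cost(hr, cpu_w, cpu_r, count, size, m):
    """Return the access cost of one disk file T_i.

    hr:    time to read one piece of data from disk (first reading cost, Hr)
    cpu_w: time to write one piece of data into the preset memory (CPU_w)
    cpu_r: time to read one piece of data from the preset memory (CPU_r)
    count: number of pieces of hot spot data in the file, Count(T_i)
    size:  average space occupied by one piece of data, Size(T_i)
    m:     number of accesses to the file, m(T_i)
    """
    rsize = count * size              # RSize(T_i) = Count(T_i) x Size(T_i)
    c_uc = hr * rsize * m             # first access cost: always read from disk
    c_w = hr * rsize + cpu_w * rsize  # one-time cost: read from disk, write to memory
    c_r = cpu_r * rsize               # per-access cost of reading from memory
    c_c = c_w + m * c_r               # second access cost: read via the cache
    return c_uc - c_c                 # difference = access cost of the file
```

A positive result means that serving the hot spot data from memory over m(T_i) accesses is cheaper than repeatedly reading it from disk, so the file is a candidate for caching.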

In an optional implementation manner, the disk stores N disk files, each of which has a first space occupation amount smaller than the current available space size of the preset memory;

determining the disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, including:

determining K disk files in the N disk files as the disk files to be cached;

the sum of the first space occupation amounts of the K disk files is smaller than the current available space size of the preset memory, and the access cost of any one of the K disk files is larger than the access cost of any one of the N disk files other than the K disk files.
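The selection rule above behaves like a greedy pass: take files in decreasing order of access cost while their hot spot data still fits in the available memory. A minimal sketch, assuming `files` is a list of (name, first space occupation amount, access cost) tuples; the names and representation are illustrative:

```python
# Greedy selection matching the rule above; names are assumptions.

def select_greedy(files, available):
    """files: list of (name, rsize, cost); available: free memory size.

    Returns the names of the K disk files to cache.
    """
    selected, used = [], 0
    # Consider only files whose hot-spot data fits on its own (the N files),
    # highest access cost first.
    for name, rsize, cost in sorted(
            (f for f in files if f[1] < available),
            key=lambda f: f[2], reverse=True):
        if used + rsize < available:
            selected.append(name)
            used += rsize
    return selected
```

Any file kept by this pass has a higher access cost than every file it skipped, which is exactly the K-out-of-N condition stated above.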

In an optional implementation manner, the determining, according to the first space occupation amount and the access cost of each disk file, a disk file to be cached in the plurality of disk files includes:

determining an optimal disk file set according to the first space occupation amount and the access cost of each of the plurality of disk files, wherein the optimal disk file set is the set whose sum of access costs is the largest among a plurality of candidate disk file sets, each candidate set consists of at least one of the plurality of disk files, and the sum of the first space occupation amounts of the disk files in the optimal disk file set is smaller than or equal to the current available space size of the preset memory;

and determining the disk files contained in the optimal disk file set as the disk files to be cached.
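The optimal-set criterion above is exactly the 0/1 knapsack problem: maximize the sum of access costs subject to the total first space occupation fitting in the available memory. A minimal dynamic-programming sketch, assuming integer sizes; the names are illustrative, not from this disclosure:

```python
# 0/1 knapsack over disk files: maximize total access cost within memory capacity.

def select_optimal(files, capacity):
    """files: list of (name, rsize, cost) with integer rsize; capacity: free memory."""
    # best[s] = (max total access cost, chosen file names) within size budget s
    best = [(0, [])] * (capacity + 1)
    for name, rsize, cost in files:
        # Iterate budgets backwards so each file is used at most once.
        for s in range(capacity, rsize - 1, -1):
            cand = (best[s - rsize][0] + cost, best[s - rsize][1] + [name])
            if cand[0] > best[s][0]:
                best[s] = cand
    return best[capacity][1]
```

Compared with the greedy variant of the previous implementation, this exhaustive formulation can trade a cheap large file for two smaller ones whose combined access cost is higher, at the price of O(N × capacity) work.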

In a second aspect, an embodiment of the present invention provides a data caching apparatus, including:

the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a first reading cost used for indicating the time length required for reading a piece of data from a disk and a second reading cost used for indicating the time length required for reading a piece of data from a preset memory; acquiring a first space occupation amount and access times of each disk file in a plurality of disk files stored on the disk; the first space occupation amount is used for representing the size of a storage space occupied by hot spot data in a disk file, and the hot spot data is data of which the read times are more than or equal to a first preset threshold value in the disk file;

the determining module is used for determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount and the access times of each disk file;

and the caching module is used for determining a disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, and caching the hot spot data in the disk file to be cached to the preset memory.

In an optional implementation manner, the determining module is specifically configured to:

determining a first access cost of the first disk file according to the first reading cost, the first space occupation amount and the access times of the first disk file; the first disk file is any one of the multiple disk files, and the first access cost is used for representing the time length required for reading the hotspot data in the first disk file from the disk for multiple times;

determining a second access cost of the first disk file according to the first reading cost, the first writing cost, the second reading cost, the first space occupation amount of the first disk file and the access times; the first write cost is used for indicating the time length required for writing a piece of data into the preset memory, and the second access cost is used for representing the time length required for reading the hot data in the first disk file from the preset memory for multiple times;

and determining the difference value between the first access cost and the second access cost as the access cost of the first disk file.

In an optional implementation manner, the determining module is specifically configured to determine the first access cost by:

C_uc(T_i) = Hr × RSize(T_i) × m(T_i);

RSize(T_i) = Count(T_i) × Size(T_i);

wherein C_uc(T_i) represents the first access cost, T_i represents the first disk file, Hr represents the time required to read a piece of data from the disk, RSize(T_i) is the first space occupation amount of the first disk file, m(T_i) is the number of accesses to the first disk file, Count(T_i) is the number of pieces of hot spot data in the first disk file, and Size(T_i) is the average space occupied by each piece of data in the first disk file;

the determining module is specifically configured to determine the second access cost by:

C_c(T_i) = C_w(T_i) + m(T_i) × C_r(T_i);

C_w(T_i) = Hr × RSize(T_i) + CPU_w × RSize(T_i);

C_r(T_i) = CPU_r × RSize(T_i);

wherein C_c(T_i) represents the second access cost, C_w(T_i) represents the time required to read the hot spot data in the first disk file from the disk and write it into the preset memory, C_r(T_i) represents the time required to read the hot spot data in the first disk file from the preset memory, CPU_w represents the time required to write a piece of data into the preset memory, and CPU_r represents the time required to read a piece of data from the preset memory.

In an optional implementation manner, the disk stores N disk files, each of which has a first space occupation amount smaller than the current available space size of the preset memory;

the cache module is specifically configured to:

determining K disk files in the N disk files as the disk files to be cached;

the sum of the occupied first space of the K disk files is smaller than the size of the current available space of the preset memory, and the access cost of any one disk file in the K disk files is larger than that of any one disk file in the N disk files except the K disk files.

In an optional implementation manner, the cache module is specifically configured to:

determining an optimal disk file set according to the first space occupation amount and the access cost of each of the plurality of disk files, wherein the optimal disk file set is the set whose sum of access costs is the largest among a plurality of candidate disk file sets, each candidate set consists of at least one of the plurality of disk files, and the sum of the first space occupation amounts of the disk files in the optimal disk file set is smaller than or equal to the current available space size of the preset memory;

and determining the disk files contained in the optimal disk file set as the disk files to be cached.

In a third aspect, an embodiment of the present invention provides a data caching apparatus, including:

a memory and a processor;

a memory for storing program instructions;

and the processor is used for calling the program instructions stored in the memory and executing the method of any implementation mode of the first aspect according to the obtained program.

In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions, which, when executed on a computer, cause the computer to perform the above method.

In the embodiment of the invention, a first reading cost for indicating the time length required for reading a piece of data from a disk and a second reading cost for indicating the time length required for reading a piece of data from a preset memory are obtained, and the access times of each disk file in a plurality of disk files stored on the disk and a first space occupation amount corresponding to the hot spot data of each disk file are obtained; then determining the access cost of each disk file according to the first reading cost, the second reading cost, the first space occupation amount and the access times of each disk file; and determining a disk file to be cached in the plurality of disk files according to the first space occupation amount and the access cost of each disk file, and caching the hot spot data in the disk file to be cached to the preset memory. Therefore, when the hot spot data in the disk file to be cached is read, the hot spot data can be directly obtained from the memory without accessing the disk, the data reading efficiency can be effectively improved, and the data processing speed is increased.

Drawings

Fig. 1 is a schematic flowchart of a data caching method according to an embodiment of the present invention;

fig. 2 is a schematic flowchart of determining a disk file to be cached according to an embodiment of the present invention;

fig. 3 is a block diagram of a data caching apparatus according to an embodiment of the present invention;

fig. 4 is a schematic structural diagram of another data caching apparatus according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the present invention, "a plurality of" means two or more. "And/or" describes an association between objects and indicates three possible relationships; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it. In addition, although the terms "first", "second", etc. may be used to describe various data in embodiments of the present invention, the data should not be limited by these terms; the terms are only used to distinguish the data from each other.

The technical concept involved in the present invention will be described below before specific embodiments of the present invention are described.

(1) The Hadoop Distributed File System (HDFS) is a distributed file system suitable for running on commodity hardware; it stores file data on Hadoop disks in a distributed manner. HDFS is mostly used in big data processing platforms and is mainly responsible for storing large-scale data, supporting streaming reads and writes, and handling very large files.

(2) Spark is an efficient distributed computing framework capable of performing batch processing, streaming computation and interactive computation at the same time. It provides an open-source cluster computing environment similar to Hadoop, can run in parallel on HDFS as a complement to Hadoop, and is mainly responsible for operations such as data query and data processing using Structured Query Language (SQL). The Spark framework performs memory-based iterative computation with resilient distributed datasets (RDDs), which improves computation efficiency.

(3) Hive is data warehouse software that uses SQL-like statements to help read, write and manage large data sets stored on a distributed storage system such as HDFS.

Against the background of the big data era, the ecosystem around big data processing platforms keeps expanding. A large number of systems and devices store data on HDFS (that is, on disk). Users can query and read HDFS data through Hive with traditional SQL statements, but reading data from HDFS amounts to reading from disk, which is far slower than the processing speed of the CPU. As data volume grows explosively, this traditional query mode can no longer meet users' needs, especially for systems that require speed and focus on user experience. For example, as more and more people pay attention to network security, security-protection products have emerged in large numbers. When an attack event is found, tracing back from the known data acquires more, previously unknown data, which strengthens the protection of network security. To support such after-the-fact tracing, the original data source (Flow) in the network must be stored, which undoubtedly increases storage cost; the explosion of data thus poses a huge challenge to security-protection technology.

To ensure data integrity and meet users' demand for fast queries, Spark is widely used to query and process data in HDFS. Because Spark's memory-based computation is highly efficient, the Spark team developed the Spark SQL project, removing the earlier dependency on Hive when querying HDFS data. Spark SQL provides a caching mechanism for HDFS data: it converts HDFS data into an in-memory columnar format in advance and caches it in memory. For example, when the data source on HDFS is not columnar, Spark SQL reads the data from the source row by row and converts formats such as CSV and JSON into efficient columnar data such as ORC/Parquet before caching it in memory. A query issued through Spark SQL can then use in-memory columnar reads and fetch only the required columns, which effectively improves reading efficiency.
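As a toy illustration of the row-to-columnar conversion described above (not Spark SQL's actual code; the field names and sample data are invented for illustration), the following sketch turns row records, as read from a non-columnar source such as CSV or JSON, into per-column arrays so that a later query touches only the columns it needs:

```python
# Toy row-to-columnar conversion; field names and data are illustrative only.

def to_columnar(rows, schema):
    """rows: list of dicts; schema: ordered column names. Returns column arrays."""
    return {col: [row[col] for row in rows] for col in schema}

# Row records as they might be read from a non-columnar source.
rows = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "bytes": 120},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "bytes": 480},
]
columns = to_columnar(rows, ["src_ip", "dst_ip", "bytes"])

# A query such as SELECT sum(bytes) now reads only the "bytes" column,
# never touching src_ip or dst_ip.
total = sum(columns["bytes"])
```

This is why the columnar cache pays off: a column-projecting query scans one contiguous array instead of deserializing every full row.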

Specifically, when Spark processes HDFS data, it first performs a read operation on the HDFS data, for example using an SQL statement to read the required data from an HDFS data table. It then obtains the corresponding RDD, executes a mapPartitions operation on the RDD to convert the result set of the read HDFS data into array form, and finally executes the cache() method, thereby caching the read HDFS data in memory. When Spark receives a user's query request, it locates the column data corresponding to the request in memory, parses the fields in the column data with the ColumnAccessor module, assembles them into row-record format, and returns them to the user, for example by displaying the data in row-record format on a visual interface that interacts with the user.

By exploiting Spark SQL's efficient in-memory columnar access, data is cached in memory by pre-reading before the SQL is executed, which reduces network transmission overhead and disk I/O overhead and lowers the latency of interactive queries; columnar storage with compression encoding further reduces memory overhead and improves access speed. To obtain this high access efficiency when reading data through Spark SQL, the key problem is how to select the data worth caching within a limited memory space.

At present, Spark SQL's default caching strategy requires developers to decide for themselves which data needs to be cached: the data is pulled and cached through manual declarations. Letting developers decide what to cache can lead to low cache utilization when the value of the data is misjudged, or to anomalies in the computation. Developers must also be familiar with the business logic of the SQL queries and with the capacity of each data table in the database, to avoid caching invalid or low-value data. Manually extracting data from disk into the memory cache therefore demands much of developers, is error-prone, and does not help improve the efficiency of data caching and data reading.

Accordingly, embodiments of the present invention provide a data caching method and apparatus, so as to solve the problem of low data caching efficiency in the prior art. The method and the device are based on the same inventive concept, and because the principles of solving the problems of the method and the device are similar, the implementation of the device and the method can be mutually referred, and repeated parts are not repeated.

Referring to fig. 1, a schematic flow chart of a data caching method according to an embodiment of the present invention is shown. The method comprises the following steps:

step S101, a first reading cost for indicating a time length required to read a piece of data from a disk and a second reading cost for indicating a time length required to read a piece of data from a preset memory are obtained.

The first reading cost may be a parameter preset according to the size of the data stored on the disk; the second reading cost includes the time cost of reading data from memory plus the time cost for the CPU to assemble the read column data into row records.
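The second reading cost could, for example, be estimated by timing repeated reads of a cached piece of data. This is only one possible approach, sketched here as an assumption (as noted above, the costs may instead be preset parameters):

```python
# One possible (assumed, not mandated) way to estimate a per-piece read cost
# by timing many repeated reads and averaging.
import time

def estimate_read_cost(read_one, repeats=10000):
    """Average wall-clock time of one call to read_one(), in seconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        read_one()
    return (time.perf_counter() - start) / repeats

# Reading one piece of data from an in-memory cache (illustrative).
cache = {"piece": b"x" * 64}
cost = estimate_read_cost(lambda: cache["piece"])  # seconds per in-memory read
```

The same harness could time a disk read to estimate the first reading cost, although averaging over many samples is more important there because of seek-time variance.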
