Spark-oriented cache replacement method and system based on data perception

Document No.: 856855    Publication date: 2021-04-02

Reading note: This technology, "Spark-oriented cache replacement method and system based on data perception" (一种面向Spark的基于数据感知的缓存替换方法及系统), was designed and created by 黄涛, 钟华, 魏峻, 李慧, 郑莹莹, 唐震, 许利杰 and 王伟 on 2020-12-22. Its main content is as follows: The invention discloses a Spark-oriented cache replacement method based on data perception, belonging to the technical field of software. By analyzing the application data dependencies and historical execution information of the Spark framework, the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced are obtained; a weight model is established based on these factors, and the weight of each data block is calculated. The data block weight values are sorted in ascending order, and data blocks with smaller weight values that do not belong to the same RDD as the data block to be cached are selected for cache replacement. Aiming at the diversity of present application load characteristics and the continuously changing memory demands of applications, the invention dynamically senses user load characteristics, uses weight values calculated from historical execution information to identify the most suitable data for cache replacement, and makes replacement decisions in real time in combination with the current memory resource situation, thereby optimizing the cache management mechanism of the Spark framework.

1. A Spark-oriented cache replacement method based on data perception is characterized by comprising the following steps:

analyzing the application data dependencies and historical execution information of the Spark framework, and acquiring the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

calculating the weight of each data block in the memory according to the number of times it is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

for a new data block to be cached, selecting data blocks in the memory that do not belong to the same resilient distributed dataset (RDD) as the new data block for cache replacement, wherein the replacement comprises the following steps:

selecting, in ascending order of the weight values of the data blocks in the memory, the data block with the smallest non-zero weight value and releasing it; if the released memory space is smaller than the memory space occupied by the new data block, continuing to release the next data block in the memory until the released memory space is greater than or equal to the memory space occupied by the new data block, then caching the new data block in the memory in place of the released data blocks;

and if the released memory space is still smaller than the memory space occupied by the new data block after all the data blocks in the memory have been released in sequence, giving up caching the new data block and returning all the original data blocks to the memory.

2. The method of claim 1, wherein the number of times a data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced are obtained by an instrumentation method.

3. The method of claim 1, wherein the weight of a data block in the memory is calculated by the following formula:

Weight_i = (cost_i × ref_i × pastmod_i) / size_i

where Weight_i represents the weight of the i-th data block, cost_i its computation time, ref_i the number of times it is depended on, size_i the size of the memory space it occupies, and pastmod_i the number of times it is referenced.

4. The method of claim 1, wherein, regarding computation time, data blocks that take longer to compute are cached in the memory.

5. The method of claim 1, wherein, regarding the number of times a data block is depended on: if the data block is depended on by only one job's computation, it is not cached; if the data block is depended on by two or more different jobs' computations, it is cached.

6. The method of claim 1, wherein the weight value of the data block in the memory is maintained by a weight table, and the initial value of the weight of the data block in the weight table is 0.

7. The method of claim 6, wherein the weight values are sorted from small to large in the weight table.

8. The method of claim 6, wherein after a new data block is cached in the memory, its weight value is calculated and the corresponding value in the weight table is updated; after a data block in the memory is released, its value in the weight table is set to 0; and after a released data block is returned to the memory, its weight is restored.

9. The method as claimed in claim 1, wherein when a data block in the memory is released, it is first taken out of the memory and temporarily stored in a waiting-area list, and if the released memory space is greater than or equal to the memory space occupied by the new data block, the data block is removed from the waiting-area list.

10. A Spark-oriented cache replacement system based on data perception, comprising:

the analyzer is used for analyzing the application data dependencies and historical execution information of the Spark framework, and acquiring the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

the controller is used for constructing a weight model of the data blocks according to the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced, and calculating the weights of the data blocks in the memory through the weight model;

the decision maker is used for selecting, as replacement objects, data blocks in the memory that do not belong to the same resilient distributed dataset (RDD) as the new data block to be cached, in ascending order of their weight values, wherein deciding the replacement comprises the following steps: selecting, in ascending order of weight, the data block in the memory with the smallest non-zero weight value and deciding to release it; if the released memory space is smaller than the memory space occupied by the new data block, deciding to continue releasing the next data block until the released memory space is greater than or equal to the memory space occupied by the new data block, then deciding to cache the new data block in the memory in place of the released data blocks; and if the released memory space is still smaller than the memory space occupied by the new data block after all the data blocks in the memory have been released in sequence, deciding to give up caching the new data block and deciding to return all the original data blocks to the memory;

and the executor is used for releasing or returning data blocks in the memory according to the decision result of the decision maker, and performing the cache replacement operation for the new data block to be cached.

Technical Field

The invention relates to a data-perception-based weight-model cache replacement method and system oriented to the Spark framework, and belongs to the technical field of software.

Background

With the explosive growth of data and the increasing complexity of services, demands on data processing keep rising. Compared with earlier general-purpose parallel distributed computing frameworks (for example, Hadoop), the memory-based distributed computing framework Spark is very efficient at iterative computation, interactive data queries and parallel computation in fields such as machine learning and graph computation. In particular, memory-based big data processing platforms often speed up applications by caching useful data in memory for reuse. In CPU-intensive application scenarios, too many created objects easily fill up the memory and cause a GC (garbage collection) problem: the system searches for objects that are no longer used and reclaims them, which degrades program execution performance. Meanwhile, when cached partition data fills the memory, the system invokes cache replacement to make replacement decisions on partition data: old data are evicted to cache new data. Keeping valuable data cached in memory and evicting unnecessary data in time to free memory space is one of the important means of improving application execution performance. Cache replacement strategies are rich and diverse, predicting the access pattern of future data from different kinds of historical information (time or frequency). LRU (Least Recently Used) evicts the cached data blocks that have gone unaccessed the longest; it is a common cache replacement algorithm, widely used in system design and applied in various computing frameworks and platforms. The core idea of the LRU algorithm in the Spark computing framework is that when the cache space is full and a cache miss occurs, the least recently used data are flushed from the cache space to increase the available space for caching new data. LRU considers only the time factor of in-memory data access and evicts data that have not been accessed for a long time. Judging the value of cached data may require considering more factors than the single factor of access time.
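
For reference, the LRU idea can be captured in a few lines. The sketch below is purely illustrative (the class is our own, not Spark's internal implementation) and uses Scala's LinkedHashMap to track access order:

```scala
import scala.collection.mutable

// Minimal LRU cache sketch: on overflow, evict the entry that has gone
// unaccessed the longest. Illustrative only.
class LruCache[K, V](capacity: Int) {
  private val entries = mutable.LinkedHashMap.empty[K, V]

  def get(key: K): Option[V] = entries.remove(key).map { v =>
    entries.put(key, v) // re-insert to mark as most recently used
    v
  }

  def put(key: K, value: V): Unit = {
    entries.remove(key)
    if (entries.size >= capacity)
      entries.remove(entries.head._1) // head is the least recently used
    entries.put(key, value)
  }
}
```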

Spark provides an abstraction called the resilient distributed dataset (RDD) and records the dependencies between different RDDs through a lineage graph. A user's application logic is represented as a series of RDD transformations of the data. The RDD dependencies of the user application, together with other historical execution information, can be obtained dynamically. Other conventional cache replacement strategies are similarly one-dimensional: LFU (Least Frequently Used) evicts the data blocks accessed least frequently, considering only the single factor of access frequency when judging the value of cached data; the LRFU algorithm (D. Lee et al., "LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies," IEEE Transactions on Computers, vol. 50, no. 12, pp. 1352-1361, 2001) takes the computation cost and size of a data block into account, but not the number of times the data block is used. Cache replacement strategies based on a single dimension (time, frequency or number of uses) are difficult to adapt to diverse workload scenarios. From this analysis, the benefit of the cache data chosen for replacement by mechanisms such as LFU and LRFU is not obvious.

The load characteristics of big data applications are often dynamic. For complex applications, and especially under limited memory resources, the LRU cache replacement policy integrated in the Spark framework does not consider factors such as partition data computation cost, data dependencies and partition dependency counts, so it is difficult for it to keep reasonable data persisted in memory for reuse by subsequent job computations.

When a user program is submitted to the cluster, application execution manifests as processing data on each node, and many distributed applications repeatedly perform the same operation steps on different data. Combined with Spark's programming characteristics, reasonable data to keep in memory can be evaluated, based on the lineage graph generated before execution, to obtain better performance. Therefore, given the dynamically changing characteristics of application loads and the particular programming characteristics of Spark, designing an efficient cache replacement method and system for the Spark distributed computing framework is both important and technically feasible.

Disclosure of Invention

To overcome the defects of the prior art, the invention provides a Spark-oriented cache replacement method and system based on data perception. A weight model is established from influence factors such as a data block's computation time, the size of the memory space it occupies, the number of times it is depended on and the number of times it is used, and the caching importance of the data block is measured by its weight value: the larger the weight value, the more worthwhile the block is to cache, and data blocks with smaller weight values are more likely to be replaced.

In order to achieve the purpose, the technical scheme of the invention is as follows:

a Spark-oriented cache replacement method based on data perception comprises the following steps:

analyzing the application data dependencies and historical execution information of the Spark framework, and acquiring the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

calculating the weight of each data block in the memory according to the number of times it is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

for a new data block to be cached, selecting data blocks in the memory that do not belong to the same resilient distributed dataset (RDD) as the new data block for cache replacement, wherein the replacement comprises the following steps:

selecting, in ascending order of the weight values of the data blocks in the memory, the data block with the smallest non-zero weight value and releasing it; if the released memory space is smaller than the memory space occupied by the new data block, continuing to release the next data block in the memory until the released memory space is greater than or equal to the memory space occupied by the new data block, then caching the new data block in the memory in place of the released data blocks;

and if the released memory space is still smaller than the memory space occupied by the new data block after all the data blocks in the memory have been released in sequence, giving up caching the new data block and returning all the original data blocks to the memory.

Further, the number of times a data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced are obtained from the directed acyclic graph of the Spark framework.

Furthermore, the number of times a data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced are obtained by an instrumentation method.

Further, with regard to computation time, data blocks that take longer to compute are cached in the memory.

Further, regarding the number of times a data block is depended on: if the data block is depended on by only one job's computation, it is not cached; if it is depended on by two or more different jobs' computations, it is cached.

Furthermore, a weight table is used to maintain the weight value of the data block in the memory, and the initial value of the weight of the data block in the weight table is 0.

Further, after a new data block is cached in the memory, its weight value is calculated and the corresponding value in the weight table is updated; after a data block in the memory is released, its value in the weight table is set to 0; and after a released data block is returned to the memory, its weight is restored.

Further, the weight values in the weight table are sorted from small to large.

Further, when a plurality of data blocks in the memory are released, the data blocks may belong to the same RDD or to different RDDs.

Further, when a data block in the memory is released, it is first taken out of the memory and temporarily stored in the waiting-area list, and if the released memory space is greater than or equal to the memory space occupied by the new data block, the data block is removed from the waiting-area list.

A Spark-oriented cache replacement system based on data perception, comprising:

the analyzer is used for analyzing the application data dependencies and historical execution information of the Spark framework, and acquiring the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced;

the controller is used for constructing a weight model of the data blocks according to the number of times each data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced, and calculating the weights of the data blocks in the memory through the weight model;

the decision maker is used for selecting, as replacement objects, data blocks in the memory that do not belong to the same resilient distributed dataset (RDD) as the new data block to be cached, in ascending order of their weight values, wherein deciding the replacement comprises the following steps: selecting, in ascending order of weight, the data block in the memory with the smallest non-zero weight value and deciding to release it; if the released memory space is smaller than the memory space occupied by the new data block, deciding to continue releasing the next data block until the released memory space is greater than or equal to the memory space occupied by the new data block, then deciding to cache the new data block in the memory in place of the released data blocks; and if the released memory space is still smaller than the memory space occupied by the new data block after all the data blocks in the memory have been released in sequence, deciding to give up caching the new data block and deciding to return all the original data blocks to the memory;

and the executor is used for releasing or returning data blocks in the memory according to the decision result of the decision maker, and performing the cache replacement operation for the new data block to be cached.

Compared with the prior art, the invention has the following advantages: aiming at the diversity of present application load characteristics, and even the continuously changing memory resource demands of different stages of the same application, the invention dynamically senses user load characteristics and uses weight values calculated from historical execution information to identify the most suitable data for cache replacement. The method and system can effectively improve the utilization of memory resources: during application execution on a memory-based big data framework platform, the more valuable partition data in the memory are identified and, combined with the programming characteristics of the Spark framework, a more efficient cache replacement method and system based on data perception are provided.

Drawings

To more clearly illustrate the working of the embodiments of the present invention and the technical solutions of the prior art, the accompanying drawings are briefly described below:

FIG. 1 is a technical roadmap for a cache replacement implementation of the present invention;

FIG. 2 is a flow chart of cache replacement according to the present invention;

fig. 3 is a schematic diagram of cache replacement according to an embodiment of the present invention.

Detailed description of the invention

The technical scheme of the invention is further explained below in conjunction with the accompanying drawings.

This embodiment provides a Spark-oriented cache replacement method and system based on data perception (LPW, Least Partition Weight), as shown in FIGS. 1-2, comprising the following steps:

step 1: the analyzer analyzes the application data dependency relationship and the historical execution information; in a distributed computing framework, data are distributed on each node, and data processing is converted into a task set of a Directed Acyclic Graph (Directed Acyclic Graph). The DAG describes the dependency relationship of data, and meanwhile, the historical execution information of the data block calculation is many, including the number of times the data block is depended on, the size of the memory space occupied by the data block, the calculation time of the data block, the number of times the data block is referred to, and the like. And acquiring data dependency and historical execution information by a instrumentation method.

Step 2: the controller establishes a weight model according to factors such as the number of times each cached data block is depended on, the size of the memory space it occupies, its computation time and the number of times it is referenced, and calculates the weights of the data blocks;

to identify more reasonable and valuable data blocks and achieve the best replacement goals, Weight is usediTo represent the importance degree of the data block, the smaller the value is, the lower the importance degree is; when the memory is insufficient, the data block is possibly replaced from the memory preferentially as a more reasonable replacement object in the replacement decision. And constructing a weight model of the data block based on various factors influencing the calculation of the data block and the like. The formula is as follows:

Weight_i = (cost_i × ref_i × pastmod_i) / size_i

Weight_i represents the weight of the i-th data block; the larger the weight, the more valuable the data block is during application execution, and the more it should be kept in the memory when new data need to be cached. cost_i is the computation time of the i-th data block, i.e., the time taken to compute the tasks that produce the data block. ref_i is the number of times the i-th data block is depended on, i.e., the number of times it is depended on by job computations. size_i is the size of the memory space occupied by the i-th data block. pastmod_i is the number of times the i-th data block has been referenced, i.e., the number of references in completed jobs.
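
A minimal sketch of this weight model in Scala, assuming the multiplicative form shown above (the function name, argument names and unit choices are our own, not part of the Spark API or the patent text):

```scala
// Sketch of the weight model: weight grows with computation time, dependency
// count and past reference count, and shrinks with occupied memory size.
// Note: under this pure form, a block never referenced in completed jobs
// (pastmod = 0) gets weight 0; the source does not specify that corner case.
def weight(costMs: Long, ref: Long, sizeBytes: Long, pastmod: Long): Double =
  if (sizeBytes <= 0) 0.0
  else costMs.toDouble * ref * pastmod / sizeBytes
```

For example, a block that took one hour to compute (3,600,000 ms), is depended on by 3 jobs, occupies 180 MB and was referenced twice would score weight(3600000L, 3, 180L * 1024 * 1024, 2) ≈ 0.114, while a block taking one second with the same other factors would score only about 0.00003.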

1) Computation time of a data block

If computing data block P_i takes 1 hour while computing data block P_j takes 1 second, caching P_i in the memory obviously improves program execution efficiency more. Over the whole process of a user submitting an application and executing jobs, data blocks whose computation takes a long time need to be cached to improve performance. cost_i denotes the computation time of the i-th data block, that is, the time taken to compute the tasks that produce the data block. The larger the value, the higher the weight of the data block to be cached.

2) Number of times a data block is depended on

If a data block r_i is depended on by only one job and no subsequent job's computation depends on it, it does not need to be cached. If a data block is depended on by two different jobs' computations, i.e., r_i ∈ τ_p and r_i ∈ τ_q, then r_i needs to be cached; likewise, an r_i depended on by multiple jobs' computations needs to be cached. Over the whole process of a user submitting an application and executing jobs, data blocks with higher job dependency counts need to be cached to improve performance. ref_i denotes the number of times the i-th data block is depended on by job computations. The larger the value, the higher the weight of the data block to be cached.

3) Size of memory space occupied by data block

The space occupied by a data block to be cached should not be too large: overly large data may occupy a large amount of storage memory and waste memory resources, and may even occupy execution memory, leaving insufficient computing memory and making the application inefficient. size_i denotes the size of the memory space occupied by the i-th data block. The larger the value, the lower the weight of the data block to be cached.

4) Number of times data block is referenced

pastmod_i denotes the number of times the i-th data block was referenced in completed jobs, on the premise that a data block used in the past is likely to be used again later. The larger the value, the higher the weight of the data block to be cached.

Step 3: the decision maker quantitatively evaluates the cache value of each data block according to its calculated weight value. The larger the weight value, the higher the necessity of caching the data block, i.e., of keeping it in the memory; the smaller the weight value, the lower the necessity of caching it, and when memory is insufficient it is preferentially replaced out of the memory. The weight values are sorted in ascending order.

Step 4: the executor performs the cache replacement operation on the data blocks according to the weight values.

Further, the specific implementation of deciding whether to swap data blocks cached in the memory out is as follows: a Weight table and a waiting-area list waitingList are constructed, and the weight of each data block is maintained with an initial value of 0. When a new data block P_jk needs to be cached, it is first judged whether the memory has enough space.

(1) If there is enough space, P_jk is directly put into the memory and its weight value in the weight table (i.e., Weight(P_jk)) is updated. If there is not enough space, a cache replacement decision is started, with the following detailed steps:

(2) The weight table is searched, in ascending order of weight values, for a data block P_qw that does not belong to the same RDD as P_jk (i.e., Weight(P_qw) ≠ 0). The value of this data block is set to 0, and it is taken out and added to the list waitingList so that it is released to make room for the new data block P_jk. If releasing P_qw frees enough space, P_qw is deleted from waitingList and removed from memory at the same time, and the weight of P_jk is updated in the weight table.

(3) If, after all the data blocks not belonging to the same RDD as P_jk have been added to waitingList, there is still not enough memory to cache P_jk once they are released, caching P_jk is given up and the values of the weight table are restored. Note that when a data block is actually replaced, it is removed from the weight table directly (see the sketch below).
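
A minimal Scala sketch of this procedure follows, under assumed names (Block, LpwCache, weightTable and waitingList mirror the description above, not Spark's internal APIs):

```scala
import scala.collection.mutable

// Sketch of the weight-table / waiting-list replacement procedure.
case class Block(rddId: Int, id: String, size: Long)

class LpwCache(val totalMem: Long) {
  val cached      = mutable.LinkedHashSet.empty[Block]
  val weightTable = mutable.Map.empty[Block, Double].withDefaultValue(0.0)

  def freeMem: Long = totalMem - cached.iterator.map(_.size).sum

  def tryCache(p: Block, pWeight: Double): Boolean =
    if (freeMem >= p.size) {
      cached += p; weightTable(p) = pWeight; true
    } else {
      // Candidates: blocks of other RDDs with non-zero weight, lowest weight first.
      val candidates = cached.toSeq
        .filter(b => b.rddId != p.rddId && weightTable(b) != 0.0)
        .sortBy(weightTable)

      val waitingList = mutable.ListBuffer.empty[(Block, Double)]
      for (victim <- candidates if freeMem < p.size) {
        waitingList += ((victim, weightTable(victim)))
        weightTable(victim) = 0.0 // mark as released in the weight table
        cached -= victim          // tentatively take it out of memory
      }

      if (freeMem >= p.size) {
        waitingList.foreach { case (b, _) => weightTable -= b } // evict for good
        cached += p; weightTable(p) = pWeight
        true
      } else {
        // Give up: return the blocks to memory and restore their weights.
        waitingList.foreach { case (b, w) => cached += b; weightTable(b) = w }
        false
      }
    }
}
```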

The cache replacement of the present invention is verified with the example shown in FIG. 3.

Apache Spark is a memory-based fast computing engine designed specifically for large-scale data processing, and the cache replacement policy in the Spark framework is LRU. As shown in FIG. 3, the initial value of the weight table is 0. When the remaining memory of a node is larger than the size of the data block to be cached, the data block is directly cached in the memory; when the remaining memory of the node is not enough to cache the new data block, the cache replacement process is as follows. In FIG. 3(a), P13 represents the new data to be cached, and the memory space occupied by this data block is S_P13 = 180M; the node memory size is Total_mem = 500M, and the sequence P = {P11, P12, P21, P22, P23} is already cached in the memory, so the remaining memory is Free_mem = Total_mem − S_R1 − S_R2 = 150M. By calculation, Free_mem < S_P13, so there is not enough memory to store P13. The partition data weight table is traversed, and data block P22, which does not belong to the same RDD as P13, is selected, put into waitingList, and its weight value is set to 0. After P22 is evicted, the remaining memory is Free_mem = Total_mem − S_R1 − S_P21 − S_P23 = 200M, which is sufficient to cache P13. Under the technical scheme of the invention, P22 is removed from the memory and P13 is cached, giving the new cache sequence P = {P11, P12, P13, P21, P23} in the memory, with node remaining memory Free_mem = 20M.

As shown in FIG. 3(b), P13 represents the new data to be cached, and the memory space occupied by this data block is S_P13 = 400M; the node memory size is Total_mem = 500M, and the sequence P = {P11, P12, P21, P22, P23} is already cached in the memory, so the remaining memory is Free_mem = Total_mem − S_R1 − S_R2 = 150M. By calculation, Free_mem < S_P13, so there is not enough memory to store P13. The partition data weight table is traversed, and P22, the data block with the smallest non-zero weight value that does not belong to the same RDD as P13, is put into waitingList and its weight value is set to 0. After P22 is evicted, the remaining memory is Free_mem = Total_mem − S_R1 − S_P21 − S_P23 = 200M, which is still not enough to cache P13. The weight table is traversed further, and P23 and P21, with the smallest non-zero weights, are removed in turn, leaving remaining memory Free_mem = Total_mem − S_R1 = 350M. By calculation, Free_mem < S_P13 even after all the partition data have been released, so there is not enough memory to cache P13; caching P13 is therefore given up, and the values in the weight table are restored.
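
Plugging the numbers of FIG. 3 into the sketch above reproduces both outcomes. The individual block sizes are our own assumptions chosen only to match the stated totals (S_R1 = 150M, S_R2 = 200M, S_P22 = 50M), the weight values are arbitrary apart from their ordering (P22 < P23 < P21), and we assume P13 belongs to RDD 1, so that only RDD 2's blocks are eviction candidates, consistent with the blocks evicted in the figure:

```scala
def freshCache(): LpwCache = {
  val c = new LpwCache(totalMem = 500) // sizes in MB
  Seq((Block(1, "P11", 100), 5.0), (Block(1, "P12", 50), 4.0),
      (Block(2, "P21", 80), 3.0), (Block(2, "P22", 50), 1.0),
      (Block(2, "P23", 70), 2.0)).foreach { case (b, w) => c.tryCache(b, w) }
  c
}

// Case (a): 180M block of RDD 1 -> P22 evicted, P13 cached, 20M left free.
val a = freshCache()
assert(a.tryCache(Block(1, "P13", 180), 6.0) && a.freeMem == 20)

// Case (b): 400M block -> P22, P23, P21 tried (at most 350M free), then restored.
val b = freshCache()
assert(!b.tryCache(Block(1, "P13", 400), 6.0) && b.freeMem == 150)
```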

In the above embodiments, some techniques well known to those skilled in the art may not be described in detail. In addition, the data blocks are also called partition data blocks or partition data; these terms are used interchangeably with no difference in meaning.

The above description of specific embodiments of the invention is intended to be illustrative; the described embodiments are only part of the possible embodiments and do not represent all of them. The scope of protection of the invention is set forth in the claims. Those skilled in the art may make variations and modifications without departing from the scope of the invention.
