Cache replacement method based on maximized cache gain under Spark platform

Document No.: 1378461  Published: 2020-08-14

Reading note: this technique, a cache replacement method based on maximized cache gain under the Spark platform, was designed and created by Li Chunlin and Zhang Mengying on 2020-03-25. Its main content is as follows: The invention discloses a cache replacement method based on maximized cache gain under the Spark platform. The method first analyzes the dependencies among the various operations in the directed acyclic graph and proposes a cache gain model for measuring cache benefit, with the goal of maximizing the cache gain. Then, when the job arrival rate is known, a rounding method is used to obtain an offline optimal approximate solution to this knapsack-constrained maximization problem. Finally, when the job arrival rate is unknown, a projected gradient ascent method is used to obtain the probability with which each cache item should be placed in the cache, yielding an online adaptive cache replacement strategy that maximizes the cache gain. The method makes full use of system resources, shortens the execution time of Spark applications, and improves the cache hit rate.

1. A cache replacement method based on maximized cache gain under a Spark platform is characterized by comprising the following steps:

1) analyzing the directed acyclic graph (DAG) specific to Spark jobs and formulating a cache gain objective function that accounts for the job arrival rate, the cache capacity, and the recomputation cost of cache items;

2) when the job arrival rate is known, obtaining an optimal approximate solution of the cache gain function by a rounding method, wherein the cache gain function is converted into a relaxed function by computing the marginal probability that each node is cached, and the converted function is then approximated by an approximation function;

3) performing a rounding procedure on the optimal solution of the approximation function by finding two fractional variables at a time, finally generating an approximate integer solution, and proving that the cache gain of the approximate integer solution converges within a provable approximation factor of the optimal cache gain;

4) when the job arrival rate is unknown, dividing time into periods of equal length, and computing the probability that each node is cached and each node's late recomputation cost, thereby generating an unbiased estimate of the sub-gradient of the approximation function with respect to the state vector;

5) collecting access information for each cache item over a period of time, adaptively adjusting the probability that a cache item is placed in the cache as jobs execute, and adaptively adjusting the state vector at the end of each measurement period according to the unbiased estimate;

6) computing the variation mean of the probability that each cache item should be placed in the cache over each time period, and rounding each variation mean into an integer solution by probabilistic rounding;

7) making a cache decision according to the ranking of the variation means of the state vector, and generating a new cache placement strategy satisfying the knapsack constraint by establishing a node weight replacement model that evicts low-ranked nodes and inserts new high-ranked nodes.

2. The cache replacement method under the Spark platform according to claim 1, wherein the expected total workload of the Spark application in step 1) takes the following form:

where G denotes a Spark job and G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; each job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0, where λ_G is the job arrival rate; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; and c_v denotes the time taken to regenerate node v;

the cache gain function takes the following form:

where G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; and x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise. According to the job processing flow of the Spark parallel computing framework, if node v is cached, then when its output is reused in subsequent processing, neither node v nor any of its predecessor nodes needs to be recomputed. The maximization of this function is constrained by the size of the cache capacity;

x_u indicates whether the child node u is cached: x_u = 1 if node u is cached and x_u = 0 otherwise;

succ(v) denotes the set of successor (child) nodes of node v; in this formula the computation is carried out for each u that is a child of v.

3. The cache replacement method under the Spark platform according to claim 2, wherein the specific steps of obtaining the optimal approximate solution of the cache gain function by the rounding method in step 3) comprise:

3.1) computing the marginal probability y_v that node v is cached, as follows:

y_v = P_ψ[x_v = 1] = E_ψ[x_v]

where ψ denotes the corresponding joint probability distribution, P_ψ[·] denotes the probability that node v is cached or not cached, and E_ψ[·] denotes the corresponding expectation; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise;

3.2) to construct a convex relaxation of the cache gain function F(x), converting the cache gain function into a function F(Y) as follows:

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise; and E_ψ[·] denotes the expectation that node v is cached or not cached;

3.3) the objective function constraint now becomes s.t. y_v ∈ [0,1]; since this relaxation is not concave, the converted function F(Y) is approximated by the function L(Y):

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; and in this case y_v, y_u ∈ [0,1];

3.4) performing a rounding procedure on the optimal solution Y** of the approximation function L(Y): given a fractional solution Y, two fractional variables y_v and y_v' are found, and the solution is varied between these two fractional variables;

3.5) setting at least one fractional variable to 0 or 1, while ensuring that the cache gain at the new point Y' is not less than the cache gain at Y;

3.6) repeating steps 3.4) and 3.5) until no fractional variable remains, finally yielding an approximate integer solution x'.

4. The cache replacement method under the Spark platform according to claim 3, wherein the unbiased estimate z_v of the sub-gradient of the approximation function L(Y) with respect to the state vector p in step 4) is computed by the following steps:

4.1) time is divided into periods of equal length T > 0, during which access statistics for the different nodes are collected; C_G' denotes the historical access records of the nodes, and C_G denotes the node access records of the currently running job;

4.2) computing, from the access statistics of the different nodes, the cached probability p_v ∈ [0,1] of each node v ∈ V (each RDD), forming the vector p ∈ [0,1]^|V|;

4.3) at the end of each period of length T, adjusting the state vector p = [p_v]_{v∈V} ∈ [0,1]^|V| over the nodes;

4.4) computing the late recomputation cost t_v of a node using the node access information collected over the period, as follows:

where the indicator function 1(A) equals 1 when the inequality A holds and 0 otherwise; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; and x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise;

4.5) from the recomputation costs t_v, deriving the unbiased estimate of the sub-gradient of the approximation function L(Y) with respect to the state vector p, computed as follows:

where T_v denotes the set of recomputation costs t_v computed in the above manner at node v ∈ V over the duration T.
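A minimal sketch of this estimator, assuming z_v is the time-normalized sum of the recomputation-cost samples T_v collected during the measurement period (the normalization by T is an assumed estimator form, not spelled out in the claim):

```python
def subgradient_estimate(samples, T):
    """Estimate z_v of the sub-gradient of L(Y) w.r.t. the state vector p:
    for each node v, sum the recomputation-cost samples t_v collected in
    the set T_v over a measurement period of length T and normalize by T."""
    return {v: sum(t_list) / T for v, t_list in samples.items()}
```

For example, two samples of cost 2.0 and 4.0 over a period of length T = 2.0 yield z_v = 3.0 for that node.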

5. The cache replacement method under the Spark platform according to claim 4, wherein the adaptive adjustment of the state vector p at the end of the kth measurement period in step 5) is computed as: p^(k+1) ← Φ_Θ(p^(k) + γ^(k) · z(p^(k)))

where γ^(k) > 0 is a gain factor, p^(k) is the state of the state vector p in the kth measurement period, and Φ_Θ is the projection onto the relaxed constraint set Θ, a convex polytope, which can be computed in polynomial time; p_v ∈ [0,1] denotes the probability that each node v ∈ V is cached, and s_v denotes the output data size of node v ∈ V in MB.

6. The method for cache replacement based on maximized cache gain under Spark platform as claimed in claim 5, wherein said step 6) specifically comprises the steps of:

6.1) computing the variation mean of the probability that each cache item should be placed in the cache over each time period;

6.2) at the end of each time period T, rounding the variation mean into an integer solution by probabilistic rounding;

6.3) making a cache decision according to the ranking of the variation means of the state vector p, namely establishing a node weight replacement model that evicts low-ranked nodes and inserts new high-ranked nodes, so as to generate a new cache placement strategy satisfying the knapsack constraint x^(k) ∈ {0,1}^|V|;

6.4) proving that the cache gain of the integer solution is guaranteed to lie within a 1 - 1/e factor of the cache gain of the offline optimal caching decision x*.

7. The cache replacement method under the Spark platform according to claim 6, wherein the variation mean over each time period of the probability that each cache item should be placed in the cache in step 6) is computed as follows:

where k denotes the kth measurement period, p^(l) denotes the state of the state vector p in the lth measurement period, and γ^(l) > 0 is a gain factor; the variation mean then lies in Θ and is a convex combination of the iterates in Θ.

Technical Field

The invention relates to the technical field of computer big data, in particular to a cache replacement method based on maximized cache gain under a Spark platform.

Background

With the rise of big data analytics and cloud computing, large-scale cluster-based data processing has become a common paradigm for many applications and services. Data-parallel computing frameworks such as Apache Spark are often used to perform such data processing at scale. These workloads run over large input data sets, and a single job may comprise hundreds of identical parallel subtasks. The time and resources required by the Spark computing framework to process such volumes of data are enormous. However, jobs executing in such a distributed environment often have significant computational overlap: different jobs processing the same data may involve common intermediate computations, which arises naturally in practice. Recent production traces show that 40% to 60% of jobs in Microsoft production clusters involve recomputation of intermediate results, while up to 78% of jobs in Cloudera clusters involve re-access of the same data.

Spark has recently become one of the most efficient memory-based distributed data processing platforms. Its resilient distributed dataset (RDD) is an abstract data structure built on distributed memory; an RDD may store data in distributed memory across multiple nodes of a cluster. The Spark parallel computing framework does not cache automatically: the developer submitting a job decides which intermediate results to cache, i.e., which RDDs to store in the RAM-based cache. When the RAM cache is full, RDDs are evicted using the common LRU eviction policy. A developer may also choose to store an evicted RDD in the Hadoop Distributed File System (HDFS), but writing to HDFS incurs additional overhead. RDDs cached in RAM can be retrieved and reused faster, but cache misses occur when the developer either did not explicitly cache an RDD, or cached it and it was later evicted. In either case Spark incurs significant computational overhead: if the requested RDD is stored neither in RAM nor in HDFS, Spark recomputes it from scratch. In general, a cache miss causes extra delay, whether reading from HDFS or fully recomputing the missing RDD. Devising an effective cache replacement strategy under the Spark platform has therefore become an urgent problem. Current research on cache replacement under the Spark platform commonly suffers from the following problems: first, only a single factor is considered when performing replacement; second, the caching outcome is uncertain when processing large-scale intensive applications; and third, an ill-chosen caching strategy can greatly reduce the execution efficiency of Spark applications.

Disclosure of Invention

The invention aims to provide a cache replacement method based on maximized cache gain under the Spark platform, which considers factors such as the total workload reduced by caching, the job arrival rate, the cache capacity, and the recomputation cost of cache items; it effectively reduces the average execution time of Spark applications while improving the system's memory utilization and cache hit rate.

In order to achieve the above object, the method for replacing a cache based on a maximized cache gain in a Spark platform according to the present invention comprises the following steps:

1) By analyzing the directed acyclic graph (DAG) specific to Spark jobs, a cache gain objective function is formulated, taking into account factors such as the job arrival rate λ_G, the cache capacity K, and the recomputation cost t_v of cache items;

2) when the job arrival rate is known, the optimal approximate solution of the cache gain function is obtained by a rounding method: by computing the marginal probability y_v that node v is cached, the cache gain function is converted into a function F(Y), thereby constructing a convex relaxation of the cache gain function F(x); since the resulting relaxation of the objective is not concave, the converted function F(Y) is approximated by the function L(Y);

3) by finding two fractional variables y_v and y_v', a rounding procedure is performed on the optimal solution Y** of the approximation function L(Y), finally generating an approximate integer solution x', and it is proved that the cache gain of the approximate integer solution converges within a 1 - 1/e factor of the optimal cache gain;

4) when the job arrival rate is unknown, time is divided into periods of equal length T > 0, and the probability p_v ∈ [0,1] that each node v ∈ V is cached and the late recomputation cost t_v of each node are computed, thereby generating an unbiased estimate of the sub-gradient of the approximation function L(Y) with respect to the state vector p;

5) access information for each cache item is collected over a period of time, the probability that a cache item is placed in the cache is adaptively adjusted as jobs execute, and the state vector p is adaptively adjusted at the end of each measurement period according to the unbiased estimate;

6) the variation mean of the probability that each cache item should be placed in the cache over each time period is computed, and each variation mean is rounded into an integer solution by probabilistic rounding;

7) a cache decision is made according to the ranking of the variation means of the state vector p: by establishing a node weight replacement model, low-ranked nodes are evicted and new high-ranked nodes are inserted, generating a new cache placement strategy satisfying the knapsack constraint x^(k) ∈ {0,1}^|V|.

Preferably, the expected total workload of the Spark application in step 1) takes the following form:

where G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; and c_v denotes the time taken to regenerate node v. The cache gain function takes the following form:

where G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; and c_v denotes the time taken to regenerate node v. Node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise. According to the job processing flow of the Spark parallel computing framework, if node v is cached, then when its output is reused in subsequent processing, neither node v nor any of its predecessor nodes needs to be recomputed. The maximization of this function is constrained by the size of the cache capacity. x_u indicates whether the child node u is cached: x_u = 1 if node u is cached and x_u = 0 otherwise; succ(v) denotes the set of successor (child) nodes of node v, and the computation in this formula is carried out for each u that is a child of v.

Preferably, the specific steps of obtaining the optimal approximate solution of the cache gain function by the rounding method in step 3) include:

3.1) computing the marginal probability y_v that node v is cached, as follows:

y_v = P_ψ[x_v = 1] = E_ψ[x_v]

where ψ denotes the corresponding joint probability distribution, P_ψ[·] denotes the probability that node v is cached or not cached, and E_ψ[·] denotes the corresponding expectation; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise.

3.2) to construct a convex relaxation of the cache gain function F(x), the cache gain function is converted into a function F(Y) as follows:

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise; and E_ψ[·] denotes the expectation that node v is cached or not cached.

3.3) the objective function constraint now becomes s.t. y_v ∈ [0,1]; since this multilinear relaxation is not a concave relaxation, the converted function F(Y) is approximated by the function L(Y):

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; and in this case y_v, y_u ∈ [0,1].

3.4) a rounding procedure is performed on the optimal solution Y** of the approximation function L(Y): given a fractional solution Y, two fractional variables y_v and y_v' are found, and the solution is varied between these two fractional variables.

3.5) at least one fractional variable is set to 0 or 1, while ensuring that the cache gain at the new point Y' is not less than the cache gain at Y.

3.6) steps 3.4) and 3.5) are repeated until no fractional variable remains, finally yielding the approximate integer solution x'.

Preferably, the unbiased estimate z_v of the sub-gradient of the approximation function L(Y) with respect to the state vector p in step 4) is computed by the following steps:

4.1) time is divided into periods of equal length T > 0, during which access statistics for the different nodes are collected; C_G' denotes the historical access records of the nodes, and C_G denotes the node access records of the currently running job.

4.2) the cached probability p_v ∈ [0,1] of each node v ∈ V (each RDD) is computed from the access statistics of the different nodes, forming the vector p ∈ [0,1]^|V|.

4.3) at the end of each period of length T, the state vector p = [p_v]_{v∈V} ∈ [0,1]^|V| over the nodes must be adjusted.

4.4) the late recomputation cost t_v of a node is computed using the node access information collected over the period, as follows:

t_v = Σ_{v∈V} c_v · 1(x_v + Σ_{u∈succ(v)} x_u ≤ 1)

where the indicator function 1(A) equals 1 when the inequality A holds and 0 otherwise. Node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V, and c_v denotes the time taken to regenerate node v. Node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise.

4.5) from the recomputation costs t_v, the unbiased estimate of the sub-gradient of the approximation function L(Y) with respect to the state vector p is obtained, computed as follows:

where T_v denotes the set of recomputation costs t_v computed in the above manner at node v ∈ V over the duration T.

Preferably, at the end of the kth measurement period in step 5), the adaptive adjustment of the state vector p is computed as follows:

p^(k+1) ← Φ_Θ(p^(k) + γ^(k) · z(p^(k)))

where γ^(k) > 0 is a gain factor, p^(k) is the state of the state vector p in the kth measurement period, and Φ_Θ is the projection onto the relaxed constraint set Θ, a convex polytope, so it can be computed in polynomial time. p_v ∈ [0,1] denotes the probability that each node v ∈ V is cached, and s_v denotes the output data size of node v ∈ V in MB.
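Taking Θ = {p ∈ [0,1]^|V| : Σ_v s_v·p_v ≤ K} (the relaxed knapsack set implied by the description), the projection Φ_Θ can be sketched via bisection on the KKT multiplier. This is a standard construction under that assumed form of Θ, not code from the patent:

```python
def project(p, s, K, iters=60):
    """Euclidean projection onto Theta = {q in [0,1]^|V| : sum_v s_v*q_v <= K}.
    By KKT, q_v = clip(p_v - theta*s_v, 0, 1) for some theta >= 0,
    found here by bisection on theta."""
    clip = lambda x: min(1.0, max(0.0, x))
    if sum(s[v] * clip(p[v]) for v in p) <= K:       # already feasible
        return {v: clip(p[v]) for v in p}
    lo, hi = 0.0, max(p[v] / s[v] for v in p)        # hi zeroes every coordinate
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(s[v] * clip(p[v] - mid * s[v]) for v in p) > K:
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    return {v: clip(p[v] - theta * s[v]) for v in p}

def ascent_step(p, z, s, K, gamma):
    """One projected gradient ascent step: p <- Phi_Theta(p + gamma * z(p))."""
    return project({v: p[v] + gamma * z[v] for v in p}, s, K)
```

For instance, projecting p = (1, 1) with unit sizes onto capacity K = 1 yields (0.5, 0.5), the nearest feasible point.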

Preferably, the step 6) specifically comprises the following steps:

6.1) computing the variation mean of the probability that each cache item should be placed in the cache over each time period;

6.2) at the end of each time period T, rounding the variation mean into an integer solution by probabilistic rounding;

6.3) making a cache decision according to the ranking of the variation means of the state vector p, namely establishing a node weight replacement model that evicts low-ranked nodes and inserts new high-ranked nodes, so as to generate a new cache placement strategy satisfying the knapsack constraint x^(k) ∈ {0,1}^|V|;

6.4) proving that the cache gain of the integer solution is guaranteed to lie within a 1 - 1/e factor of the cache gain of the offline optimal caching decision x*.
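Steps 6.2) and 6.3) above can be sketched as follows. Ranking candidates by their variation mean alone is an assumed stand-in for the node weight replacement model, whose exact weight formula the text does not give:

```python
import random

def place_by_rank(pbar, s, K, rng=random.Random(0)):
    """Probabilistically round each variation mean pbar_v, then rank
    candidates and greedily admit high-ranked nodes while the knapsack
    constraint sum_v s_v*x_v <= K holds; low-ranked nodes are left out
    (evicted). Ranking by pbar_v is an illustrative choice."""
    rounded = {v: (1 if rng.random() < pbar[v] else 0) for v in pbar}
    x, used = {v: 0 for v in pbar}, 0.0
    for v in sorted(pbar, key=pbar.get, reverse=True):   # high rank first
        if rounded[v] == 1 and used + s[v] <= K:
            x[v] = 1
            used += s[v]
    return x
```

With unit sizes and K = 1, only the single highest-ranked rounded-up node is admitted and the rest are evicted.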

Preferably, the variation mean over each time period of the probability that each cache item should be placed in the cache in step 6) is computed as follows:

where k denotes the kth measurement period, p^(l) denotes the state of the state vector p in the lth measurement period, and γ^(l) > 0 is a gain factor; the variation mean then lies in Θ and is a convex combination of the iterates in Θ.
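Assuming the variation mean is the γ-weighted average of the iterates p^(1), ..., p^(k) (a form consistent with the stated convex-combination property, but not given explicitly in the text), it can be sketched as:

```python
def variation_mean(p_states, gammas):
    """Gamma-weighted average of the state vectors over the first k
    measurement periods: pbar_v = sum_l gamma_l * p_l[v] / sum_l gamma_l.
    The weights gamma_l > 0 make the result a convex combination."""
    total = sum(gammas)
    keys = p_states[0].keys()
    return {v: sum(g * p[v] for g, p in zip(gammas, p_states)) / total
            for v in keys}
```

With equal gains, this reduces to the plain average of the iterates.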

Traditional cache replacement strategies under the Spark platform consider too few factors when performing replacement, making them inapplicable in some special scenarios. In addition, when Spark handles large-scale intensive applications, multiple caching strategies may be selected over the whole computation. If the user chooses the storage strategy for a computation, the outcome is full of uncertainty, since whether an intermediate result should be cached depends on the user's experience. Such uncertainty prevents Spark's high-performance advantage from being fully exploited.

The invention provides a cache replacement method based on maximized cache gain by exploiting the dependencies that exist among the operations in a Spark job's directed acyclic graph. The method first analyzes the dependencies of the various operations in the directed acyclic graph and proposes a cache gain model for measuring cache benefit, with the goal of maximizing the cache gain. Then, when the job arrival rate is known, a rounding method is used to obtain an offline optimal approximate solution to the knapsack-constrained maximization problem. Finally, when the job arrival rate is unknown, a projected gradient ascent method is used to obtain the probability with which each cache item should be placed in the cache, yielding an online adaptive cache replacement strategy that maximizes the cache gain. The method makes full use of system resources, shortens the execution time of Spark applications, and improves the cache hit rate.

Drawings

Fig. 1 is a flowchart of a cache replacement method based on maximizing cache gain when the arrival rate of jobs under the Spark platform is known according to the present invention.

Fig. 2 is a flowchart of a cache replacement method based on maximizing cache gain when the arrival rate of jobs under the Spark platform is unknown according to the present invention.

Detailed Description

The invention will now be described in further detail with reference to the drawings and specific embodiments, which are intended to aid understanding of the invention and not to limit it.

As shown in fig. 1, the cache replacement method based on maximized cache gain under the Spark platform provided by the invention builds on the default cache replacement method of the current Spark platform, combined with the characteristics of Spark task execution. The algorithm comprises the following steps:

1) The directed acyclic graph (DAG) specific to Spark jobs is analyzed and a cache gain objective function is formulated, taking into account factors such as the job arrival rate, the cache capacity, and the recomputation cost of cache items. The specific steps comprise:

1.1) computing the time c_v taken to regenerate node v, which covers the two cases of recomputation and reading from disk; the respective times c_v' and c_v* are computed as follows:

c_v' = endTime - startTime

c_v* = (SPM + DPM) · s_v

c_v = min(c_v', c_v*)

where the time c_v is the minimum of the recomputation time and the time spent reading from disk; startTime and endTime denote the start and end times of the operation that generated the node; SPM and DPM denote the time taken per MB of data to serialize and deserialize each node, respectively; and s_v denotes the size of the cache space occupied by the output data of node v.
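Step 1.1) can be sketched in plain Python; the argument names mirror the quantities defined above (startTime/endTime, SPM, DPM, s_v) and are illustrative stand-ins for values the Spark runtime would report:

```python
def regeneration_time(start_time, end_time, spm, dpm, s_v):
    """Time c_v to regenerate node v: the minimum of the recomputation
    time (endTime - startTime) and the time to read the output back
    from disk, (SPM + DPM) * s_v, with s_v in MB."""
    c_recompute = end_time - start_time   # c_v'
    c_disk = (spm + dpm) * s_v            # c_v*
    return min(c_recompute, c_disk)
```

For a node that took 5 s to compute but whose 100 MB output can be (de)serialized at 0.03 s/MB in total, reading from disk (3 s) is cheaper, so c_v = 3 s.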

1.2) if no node is cached, computing the total workload W(G(V,E)) of job G:

that is, W(G(V,E)) = Σ_{v∈V} c_v, where node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V, and c_v denotes the time taken to regenerate node v.

1.3) the expected total workload of the Spark application is computed as follows:

that is, Σ_{G∈G'} λ_G · W(G) = Σ_{G∈G'} λ_G Σ_{v∈V} c_v, where G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; and c_v denotes the time taken to regenerate node v.

1.4) by analyzing the DAG specific to Spark jobs, the objective function measuring the cache gain is formulated as follows:

s.t.: Σ_{v∈V} s_v · x_v ≤ K

where G' denotes the set of all possible jobs to be run on the Spark parallel computing framework; job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; and c_v denotes the time taken to regenerate node v. Node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise. s_v denotes the output data size of node v ∈ V in MB, and K is the given cache capacity.
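The objective can be evaluated with a small sketch. This encodes one plausible reading of the gain model described above (a node's regeneration cost c_v is spared whenever the node itself, or one of its successors, is cached); the function and argument names are illustrative, not the patent's implementation:

```python
def cache_gain(jobs, x, succ, c, K=None, s=None):
    """Evaluate the cache gain F(x) under a simplified reading of the
    model: caching node v, or any successor u in succ(v), spares the
    c_v recomputation when v's output is reused.
    jobs: {job_id: (lambda_G, nodes)}; x: {v: 0 or 1}."""
    if K is not None and s is not None:
        # knapsack constraint: sum_v s_v * x_v <= K
        assert sum(s[v] * x[v] for v in x) <= K, "knapsack constraint violated"
    gain = 0.0
    for lam, nodes in jobs.values():
        for v in nodes:
            if x.get(v, 0) == 1 or any(x.get(u, 0) == 1 for u in succ.get(v, ())):
                gain += lam * c[v]
    return gain
```

For a two-node chain v → u with only u cached, both c_v and c_u are counted as saved, since caching the child u also spares recomputing its predecessor v.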

2) When the job arrival rate is known, the optimal approximate solution of the cache gain function is obtained by the rounding method, with the following specific steps:

2.1) computing the marginal probability that node v is cached, as follows:

y_v = P_ψ[x_v = 1] = E_ψ[x_v]

where ψ denotes the corresponding joint probability distribution, P_ψ[·] denotes the probability that node v is cached or not cached, and E_ψ[·] denotes the corresponding expectation; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise.

2.2) to construct a convex relaxation of the cache gain function F(x), the cache gain function is converted into a function F(Y) as follows:

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; x_v ∈ {0,1} is the defined caching policy indicating whether node v is cached: x_v = 1 if node v is cached and x_v = 0 otherwise; and E_ψ[·] denotes the expectation that node v is cached or not cached.

2.3) the objective function constraint now becomes s.t.: y_v ∈ [0,1]; since this multilinear relaxation is not a concave relaxation, the converted function F(Y) is approximated by the function L(Y):

where job G ∈ G' arrives according to a stationary stochastic process at rate λ_G > 0; node v in the directed acyclic graph belongs to the set V, i.e., v ∈ V; c_v denotes the time taken to regenerate node v; node u is a successor, i.e., a child, of node v; and in this case y_v, y_u ∈ [0,1].

2.4) a rounding procedure is performed on the optimal solution Y** of the approximation function L(Y): given a fractional solution Y, two fractional variables y_v and y_v' are found, and the solution is varied between these two fractional variables.

2.5) Drive at least one fractional variable to 0 or 1, while ensuring that the cache gain at the new point Y′ is not less than the cache gain at Y.

2.6) Repeat steps 2.4) and 2.5) until no fractional variable remains, finally yielding an approximate integer solution x′.
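Steps 2.4)-2.6) can be sketched as a pipage-style rounding loop. The sketch below is illustrative, not the patent's exact procedure: it assumes each node v has a knapsack weight s_v and treats the cache gain F as a black-box function, shifting mass between two fractional variables along a direction that preserves the total knapsack weight and keeping whichever endpoint gives the larger F, as step 2.5) requires.

```python
def pipage_round(y, s, F, eps=1e-9):
    """Pipage-style rounding sketch for steps 2.4)-2.6).

    y: dict node -> fractional value in [0, 1]
    s: dict node -> knapsack weight (e.g., output size in MB)
    F: callable evaluating the cache gain of a (fractional) solution
    Returns an integer {0, 1} solution with the same knapsack weight.
    """
    y = dict(y)
    while True:
        frac = [v for v in y if eps < y[v] < 1 - eps]
        if not frac:
            break
        if len(frac) == 1:
            # A single fractional variable left: round it down so the
            # knapsack constraint sum(s[v] * y[v]) <= K stays satisfied.
            y[frac[0]] = 0.0
            continue
        v, u = frac[0], frac[1]
        # Moving delta in direction (+1/s[v], -1/s[u]) leaves the
        # knapsack weight sum(s[w] * y[w]) unchanged.
        d_up = min(s[v] * (1 - y[v]), s[u] * y[u])
        d_dn = min(s[v] * y[v], s[u] * (1 - y[u]))
        y_up = dict(y); y_up[v] += d_up / s[v]; y_up[u] -= d_up / s[u]
        y_dn = dict(y); y_dn[v] -= d_dn / s[v]; y_dn[u] += d_dn / s[u]
        # Step 2.5): keep the endpoint whose cache gain is not smaller.
        y = y_up if F(y_up) >= F(y_dn) else y_dn
    return {v: int(round(val)) for v, val in y.items()}
```

Each iteration makes at least one variable integral, so the loop terminates after at most |V| steps.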

3) Prove that the cache gain of the approximate integer solution obtained in step 2) converges within a 1 − 1/e factor of the optimal cache gain of the cache gain function, as shown by the following inequality:

F(x′) ≥ (1 − 1/e) · F(x*)

where x* is the optimal solution of the function F(x), Y* is the optimal solution of the function F(Y), Y** is the optimal solution of the function L(Y), and x′ is the approximate integer solution obtained in step 2).

4) When the job arrival rate is unknown, an unbiased estimate of the sub-gradient of the approximation function L(Y) must be generated, in the following steps:

4.1) Time is divided into periods of equal length T > 0, during which access statistics of the different nodes are collected. C_G′ denotes the historical access records of the nodes, and C_G denotes the node access records of the current job.

4.2) From the access statistics of the different nodes, compute for each node v ∈ V (each RDD) the probability p_v ∈ [0,1] that it is cached.

4.3) At the end of each period of length T, adjust the state vector p = [p_v]_{v∈V} ∈ [0,1]^{|V|}.

4.4) Using the node access information collected over the period, compute the recomputation cost t_v of a node as follows:

t_v = Σ_{v∈V} c_v · 1(x_v + Σ_{u∈succ(v)} x_u ≤ 1)

where the indicator function 1(A) equals 1 when the inequality A holds and 0 otherwise, the nodes of the directed acyclic graph form the set V (v ∈ V), c_v denotes the time taken to regenerate node v, node u is a successor (i.e., child) of node v, and x_v ∈ {0,1} is the defined caching policy (x_v = 1 if node v is cached, x_v = 0 otherwise).

4.5) From the recomputation costs t_v, obtain an unbiased estimate z of the sub-gradient of the approximation function L(Y) with respect to the state vector p, where T_v denotes the set of recomputation costs t_v, computed as above, collected at node v ∈ V over a period of duration T.
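The recomputation cost of step 4.4) can be evaluated directly from the DAG. The sketch below reads the printed indicator condition literally (a node contributes c_v when x_v plus the number of its cached successors is at most 1) and assumes the successor relation succ(v) is given as an adjacency mapping:

```python
def recomputation_cost(V, succ, c, x):
    """Total recomputation cost per the indicator formula of step 4.4):
    sum of c[v] over nodes v where x[v] + sum of x[u] for u in succ(v)
    is at most 1. The indicator is implemented exactly as printed in
    the text; c[v] is the time to regenerate node v, x is the 0/1
    caching policy.
    """
    total = 0.0
    for v in V:
        if x[v] + sum(x[u] for u in succ.get(v, [])) <= 1:
            total += c[v]
    return total
```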

5) Collect the access information of each cache item over a period of time and adaptively adjust, as jobs execute, the probability that each cache item should be placed in the cache. Based on the unbiased estimate, at the end of the k-th measurement period the state vector p is adjusted as follows:

p^(k+1) ← Φ_Θ(p^(k) + γ^(k) · z(p^(k)))

where γ^(k) > 0 is a gain factor, p^(k) is the state of the state vector p in the k-th measurement period, and Φ_Θ is the projection onto the relaxed constraint set Θ, a convex polytope, so it can be computed in polynomial time. p_v ∈ [0,1] denotes the probability that node v ∈ V is cached, and s_v denotes the size, in MB, of the output data of node v ∈ V.
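Step 5) can be sketched as one projected gradient ascent update. The excerpt does not spell out the set Θ, so the sketch below assumes Θ = {p ∈ [0,1]^|V| : Σ_v s_v·p_v ≤ K} for a cache capacity K, and computes the Euclidean projection Φ_Θ by bisection on the Lagrange multiplier of the capacity constraint:

```python
def project_onto_theta(q, s, K, iters=60):
    """Euclidean projection onto the assumed set
    Theta = {p in [0,1]^n : sum_v s[v]*p[v] <= K}.
    KKT conditions give p[v] = clip(q[v] - mu*s[v], 0, 1) for a
    multiplier mu >= 0 found by bisection (the capacity usage is
    nonincreasing in mu)."""
    clip = lambda z: min(1.0, max(0.0, z))
    def weight(mu):
        return sum(s[v] * clip(q[v] - mu * s[v]) for v in q)
    if weight(0.0) <= K:                      # capacity constraint slack
        return {v: clip(q[v]) for v in q}
    lo, hi = 0.0, max(q[v] / s[v] for v in q) + 1.0   # weight(hi) == 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if weight(mid) > K:
            lo = mid
        else:
            hi = mid
    return {v: clip(q[v] - hi * s[v]) for v in q}

def pga_step(p, z, s, K, gamma):
    """One update p^(k+1) <- Phi_Theta(p^(k) + gamma^(k) * z(p^(k)))."""
    q = {v: p[v] + gamma * z[v] for v in p}
    return project_onto_theta(q, s, K)
```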

6) Compute the variation average of the probability that each cache item should be placed in the cache over each period, and round each variation average to an integer solution by a probabilistic rounding method, in the following steps:

6.1) The variation average p̄^(k) of the probability that each cache item should be placed in the cache over each period is computed as:

p̄^(k) = (Σ_{l=1}^{k} γ^(l) · p^(l)) / (Σ_{l=1}^{k} γ^(l))

where k denotes the k-th measurement period, p^(l) is the state of the state vector p in the l-th measurement period, and γ^(l) > 0 is a gain factor. The variation average satisfies p̄^(k) ∈ Θ, since it is a convex combination of points in Θ.

6.2) At the end of each period T, the invention rounds the variation average p̄^(k) to an integer solution using a probabilistic rounding method.

6.3) Make a cache decision from the ranking of the variation averages p̄^(k) of the state vector p: by building a node weight replacement model, evict low-ranked nodes and insert new high-ranked nodes, thereby generating a cache placement policy x^(k) ∈ {0,1}^{|V|} that satisfies the knapsack constraint.

6.4) Prove that the cache gain of this integer solution is guaranteed to lie within a 1 − 1/e factor of the cache gain of the offline optimal cache decision x*.
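Steps 6.1) and 6.3) can be sketched as follows. The greedy capacity-filling placement below stands in for the node weight replacement model of step 6.3); note that a direct probabilistic rounding (caching v with probability p̄_v, as in step 6.2)) can violate the knapsack constraint, which is why a ranking-based placement step follows it.

```python
def variation_average(history):
    """p-bar^(k) = (sum_l gamma^(l) * p^(l)) / (sum_l gamma^(l)):
    a convex combination of past states, hence still inside Theta.
    history is a list of (gamma, p) pairs for periods l = 1..k."""
    total = sum(g for g, _ in history)
    nodes = history[0][1].keys()
    return {v: sum(g * p[v] for g, p in history) / total for v in nodes}

def place_by_rank(p_bar, s, K):
    """Step 6.3) sketch: rank nodes by p-bar_v and greedily admit the
    highest-ranked ones until the capacity K is exhausted; low-ranked
    nodes are left out (evicted)."""
    x, used = {v: 0 for v in p_bar}, 0.0
    for v in sorted(p_bar, key=p_bar.get, reverse=True):
        if used + s[v] <= K:
            x[v] = 1
            used += s[v]
    return x
```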

7) Each integer solution obtained in step 6) represents a new caching policy, and the caching policy that maximizes the cache gain at each moment is continuously reconstructed from the continuously adjusted probability values. The study procedure of the invention is detailed below:

the Spark parallel computing framework implements the caching function in its framework in a non-automatic manner, and developers submitting jobs decide intermediate computing results that need to be cached, i.e., decide which RDDs need to be stored in the RAM-based cache. But when processing jobs consisting of operations with complex dependencies, determining which intermediate results to cache is a challenge. Although many scholars research cache replacement strategies under the Spark platform at present, the cache replacement strategies are not applicable in some special scenes due to single consideration factors and uncertainty of cache result selection when large-scale intensive applications are processed. Therefore, before performing cache replacement operation on the Spark platform, it is necessary to provide a function model for measuring cache gain by analyzing the dependency of various operations in the directed acyclic graph DAG of Spark operation. In addition, the average execution time, the system memory utilization rate and the cache hit rate of the Spark application under various scenes are effectively ensured by maximizing the cache gain function.

Related parameter definitions in the cache replacement method

(1) Time c_v taken to regenerate node v: c_v is the minimum of the time to recompute the node from its parent nodes and the time to read it from disk; the times spent in these two cases are c_v′ and c_v* respectively. Here startTime and endTime denote the start and end times of the operation completed by the parent node, SPM and DPM denote the time spent serializing and deserializing each MB of a node's data, and s_v denotes the cache space occupied by the output data of node v. The time c_v is computed as follows:

c_v′ = endTime − startTime (1)

c_v* = (SPM + DPM) · s_v (2)

c_v = min(c_v′, (SPM + DPM) · s_v) (3)
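Formulas (1)-(3) translate directly into code. A minimal sketch, assuming times are in seconds, SPM/DPM in seconds per MB, and s_v in MB:

```python
def regeneration_time(start_time, end_time, spm, dpm, s_v):
    """c_v = min(c_v', c_v*) per formulas (1)-(3): the smaller of the
    parent-recomputation time and the disk (de)serialization time."""
    c_recompute = end_time - start_time   # (1) c_v'
    c_disk = (spm + dpm) * s_v            # (2) c_v*
    return min(c_recompute, c_disk)       # (3)
```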

(2) Caching policy x_v ∈ {0,1}: indicates whether node v is cached; x_v = 1 if node v is cached, and x_v = 0 if it is not.

(3) Variation average p̄^(k): the average variation of the probability that each cache item should be placed in the cache over each period, computed as:

p̄^(k) = (Σ_{l=1}^{k} γ^(l) · p^(l)) / (Σ_{l=1}^{k} γ^(l)) (4)

where k denotes the k-th measurement period, p^(l) is the state of the state vector p in the l-th measurement period, and γ^(l) > 0 is a gain factor. The variation average satisfies p̄^(k) ∈ Θ, since it is a convex combination of points in Θ.

(4) Unbiased estimate z_v of the sub-gradient of the approximation function L(Y) with respect to the state vector p:

t_v = Σ_{v∈V} c_v · 1(x_v + Σ_{u∈succ(v)} x_u ≤ 1) (5)

where the indicator function 1(A) equals 1 when the inequality A holds and 0 otherwise, the nodes of the directed acyclic graph form the set V (v ∈ V), c_v denotes the time taken to regenerate node v, node u is a successor (i.e., child) of node v, and x_v ∈ {0,1} is the defined caching policy (x_v = 1 if node v is cached, x_v = 0 otherwise). T_v denotes the set of costs t_v, computed as above, collected at node v ∈ V over a period of duration T.

(5) Replacement weight Q_v of each node v: the main idea of the proposed cache replacement method is to give priority to nodes with a high request rate (a high gradient component) and a small size s_v, for which caching the node reduces the overall workload by a large amount; that is, if node v has a higher replacement weight, the node should enter the cache.

where R_rate denotes the request rate of node v ∈ V, i.e., the partial derivative of the approximation function L(Y) with respect to x_v; x_v ∈ {0,1} is the defined caching policy (x_v = 1 if node v is cached, x_v = 0 otherwise); s_v is the size of the output data of node v; G′ denotes the set of all possible jobs to be run on the Spark parallel computing framework, and job G ∈ G′ arrives according to a stationary stochastic process at rate λ_G > 0. Δ(ω) denotes the increase in the total workload of executing all jobs when node v is not cached.
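The exact formulas (7)-(8) for Q_v are not reproduced in this excerpt. A hypothetical instantiation consistent with the description above is the gradient-to-size ratio Q_v = R_rate / s_v; the sketch below uses that assumption, together with a weight-based evict-and-insert pass:

```python
def replacement_weights(grad, s):
    """Hypothetical replacement weight Q_v = grad_v / s_v: prioritize a
    high gradient component (request rate) and a small size s_v. The
    patent's exact formulas (7)-(8) are not shown in this excerpt."""
    return {v: grad[v] / s[v] for v in grad}

def evict_and_insert(cache, candidates, Q, s, K):
    """Weight-based replacement sketch: pool the currently cached nodes
    with the new candidates, then keep the highest-Q nodes that fit in
    the capacity K (low-Q nodes are evicted)."""
    pool = sorted(set(cache) | set(candidates), key=Q.get, reverse=True)
    new_cache, used = [], 0.0
    for v in pool:
        if used + s[v] <= K:
            new_cache.append(v)
            used += s[v]
    return new_cache
```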

Based on the measured cache gain, and combining the rounding method with the probabilistic rounding method, a cache replacement method based on maximized cache gain in the Spark environment is proposed. The method first analyzes the dependencies among the various operations in the directed acyclic graph (DAG) of a Spark job and proposes a mathematical model based on the concept of cache gain, aiming to indicate accurately the total workload saved by a given caching policy, where the time spent regenerating each node is computed by formulas (1)-(3). Then, when the job arrival rate is known, a rounding method is used to obtain an offline optimal approximate solution of the submodular maximization problem under a knapsack constraint. When the job arrival rate is unknown, the variation average of the probability that each RDD should be placed in the cache is obtained by the projected gradient ascent method of formula (4). Finally, the variation averages of all nodes are ranked, an online adaptive cache replacement policy maximizing the cache gain is determined from the replacement weights of the nodes given by formulas (7)-(8), and it is proved that the cache gain of this policy lies within a 1 − 1/e factor of the cache gain of the optimal offline cache replacement policy.

Pseudo-code description of the scheduling method

From the pseudo-code description of the algorithm, line 3 updates the cache contents via the updateCache function of line 4 after each job is executed. The updateCache function of lines 34-42 jointly considers the historical and current cost scores of the nodes: it first updates the costs of all accessed nodes, thereby determining the variation average of the state vector p collected with decay rate β. Since each job in the experiments has 6 stages on average, the decay rate β is set to 0.6. Lines 6-22 iterate over the nodes of each job recursively, calling the auxiliary function estimateCost of lines 23-33 to compute and record the time cost of each node in the job; these costs are later used to determine the cache contents in the updateCache function. Lines 14-18 traverse the child nodes of each node, and line 20 begins accessing and computing a node once all of its child nodes are ready. Finally, in line 41 the updateCache function sorts the variation averages of the state vector p and applies the node weight replacement model to evict low-ranked nodes and insert new high-ranked nodes, thereby refreshing the entire cache space. Anything not described in detail in this specification is prior art known to those skilled in the art.
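How updateCache blends historical and current cost scores with decay rate β = 0.6 is not spelled out in this excerpt; one plausible reading is an exponential moving average, sketched here with the hypothetical helper name update_scores:

```python
def update_scores(hist, now, beta=0.6):
    """Hypothetical blend of historical and current node cost scores
    with decay rate beta (0.6, matching the ~6 stages per job noted in
    the text): score = beta * hist + (1 - beta) * now. Nodes missing
    from either record are treated as having score 0."""
    nodes = set(hist) | set(now)
    return {v: beta * hist.get(v, 0.0) + (1 - beta) * now.get(v, 0.0)
            for v in nodes}
```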
