Provisioning using prefetched data in a serverless computing environment

Publication date: 2020-07-10
Created: 2018-11-30 by Komei Shimamura, Amit Kumar Saha, and Debojyoti Dutta

Abstract

A method for data provisioning of a serverless computing cluster is provided. A plurality of user-defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating resource availability of a respective worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node sends a prefetch command to one or more of the eligible worker nodes, causing the eligible worker nodes to become provisioned worker nodes for the first UDF by storing prefetched first UDF data before the first UDF is allocated for execution.

1. A method for data provisioning, comprising:

receiving, at a serverless computing cluster, a plurality of User Defined Functions (UDFs) for execution on one or more worker nodes of the serverless computing cluster;

determining, via the serverless computing cluster, one or more data locations of first UDF data required for future execution of a first UDF of the plurality of UDFs;

receiving, at a master node of the serverless computing cluster, a plurality of worker node tickets, each worker node ticket indicating resource availability of a respective worker node;

analyzing, via the master node, the one or more data locations and the plurality of worker node tickets to determine eligible worker nodes capable of executing the first UDF;

sending, via the master node, a prefetch command to one or more of the eligible worker nodes, the prefetch command generated from the determined one or more data locations such that the one or more of the eligible worker nodes become provisioned worker nodes, wherein each provisioned worker node stores prefetched UDF data before the first UDF has been allocated for execution; and

performing UDF allocation among the provisioned worker nodes.

2. The method of claim 1, wherein:

the provisioned worker nodes each store the same prefetched UDF data; and

the UDF allocation includes selecting one of the provisioned worker nodes to execute the first UDF.

3. The method of claim 2, wherein the UDF allocation further comprises: selecting one or more worker nodes from a remainder of the provisioned worker nodes to execute one or more additional UDFs that are each determined to require the first UDF data.

4. The method of claim 3, wherein a number of provisioned worker nodes in the remainder is equal to a number of the additional UDFs determined to require the first UDF data.

5. The method of any preceding claim, wherein:

in response to determining that the first UDF can be processed in parallel, a plurality of prefetch commands are sent such that the provisioned worker nodes store different prefetched UDF data; and

the UDF allocation includes selecting two or more of the provisioned worker nodes to execute the first UDF such that a combination of the different prefetched UDF data stored by the selected two or more provisioned worker nodes contains all of the first UDF data.

6. The method of any preceding claim, wherein each worker node ticket is generated by a worker node in response to a change in resource availability or status at the worker node, or in response to expiration of a predefined ticket refresh period, which is reset when the worker node generates a new worker node ticket.

7. The method of any preceding claim, wherein the prefetch command is sent such that prefetched UDF data is stored for a plurality of UDFs before any of the plurality of UDFs has been allocated to a worker node for execution.

8. The method of any preceding claim, wherein a prefetch command is sent such that each of the one or more worker nodes is provisioned to store prefetched UDF data for at least one of the plurality of UDFs.

9. The method of any preceding claim, wherein the one or more data locations comprise Uniform Resource Identifiers (URIs) pointing to one or more of: the worker nodes of the serverless computing cluster; a dedicated storage node of the serverless computing cluster, the dedicated storage node being distinct from the worker nodes; and an external node external to the serverless computing cluster.

10. The method of any preceding claim, further comprising:

determining that each of the provisioned worker nodes is no longer an eligible worker node capable of executing the first UDF, the determination based at least in part on a worker node ticket received from the respective provisioned worker node; and

analyzing any remaining worker nodes to identify a new worker node capable of receiving the first UDF data from one or more of the provisioned worker nodes and subsequently executing the first UDF, wherein the new worker node is identified based on its proximity to the provisioned worker nodes such that transmission time of the first UDF data to the new worker node is minimized.

11. A computer-readable medium having instructions stored therein, which when executed by at least one processor of a serverless computing cluster, cause the at least one processor to perform operations comprising:

receiving a plurality of User Defined Functions (UDFs) for execution on one or more worker nodes of the serverless computing cluster;

determining one or more data locations of first UDF data required for future execution of a first UDF of the plurality of UDFs;

receiving, at a master node of the serverless computing cluster, a plurality of worker node tickets, each worker node ticket indicating resource availability of a respective worker node;

analyzing, via the master node, the one or more data locations and the plurality of worker node tickets to determine eligible worker nodes capable of executing the first UDF;

sending, via the master node, a prefetch command to one or more of the eligible worker nodes, the prefetch command generated from the determined one or more data locations such that the one or more of the eligible worker nodes become provisioned worker nodes, wherein each provisioned worker node stores prefetched UDF data before the first UDF has been allocated for execution; and

performing UDF allocation among the provisioned worker nodes.

12. The computer-readable medium of claim 11, wherein:

the provisioned worker nodes each store the same prefetched UDF data; and

the UDF allocation includes selecting one of the provisioned worker nodes to execute the first UDF.

13. The computer-readable medium of any of claims 11 to 12, wherein the instructions further cause the at least one processor to perform the UDF allocation by: selecting one or more worker nodes from a remainder of the provisioned worker nodes to execute one or more additional UDFs that are each determined to require the first UDF data.

14. The computer-readable medium of claim 13, wherein a number of provisioned worker nodes in the remainder is equal to a number of the additional UDFs determined to require the first UDF data.

15. The computer-readable medium of any of claims 11 to 14, wherein:

in response to determining that the first UDF can be processed in parallel, a plurality of prefetch commands are sent such that the provisioned worker nodes store different prefetched UDF data; and

the UDF allocation includes selecting two or more of the provisioned worker nodes to execute the first UDF such that a combination of the different prefetched UDF data stored by the selected two or more provisioned worker nodes contains all of the first UDF data.

16. The computer-readable medium of any of claims 11 to 15, wherein each worker node ticket is generated by a worker node in response to a change in resource availability or status at the worker node, or in response to expiration of a predefined ticket refresh period that is reset when the worker node generates a new worker node ticket.

17. The computer-readable medium of any of claims 11 to 16, wherein the prefetch command is sent such that prefetched UDF data is stored for a plurality of UDFs before any of the plurality of UDFs has been allocated to a worker node for execution.

18. The computer-readable medium of any of claims 11 to 17, wherein a prefetch command is sent such that each of the one or more worker nodes is provisioned to store prefetched UDF data for at least one of the plurality of UDFs.

19. The computer-readable medium of any of claims 11 to 18, wherein the one or more data locations comprise Uniform Resource Identifiers (URIs) that point to one or more of: the worker nodes of the serverless computing cluster; a dedicated storage node of the serverless computing cluster, the dedicated storage node being distinct from the worker nodes; and an external node external to the serverless computing cluster.

20. The computer-readable medium of any of claims 11 to 19, wherein the instructions further cause the at least one processor to perform operations comprising:

determining that each of the provisioned worker nodes is no longer an eligible worker node capable of executing the first UDF, the determination based at least in part on a worker node ticket received from the respective provisioned worker node; and

analyzing any remaining worker nodes to identify a new worker node capable of receiving the first UDF data from one or more of the provisioned worker nodes and subsequently executing the first UDF, wherein the new worker node is identified based on its proximity to the provisioned worker nodes such that transmission time of the first UDF data to the new worker node is minimized.

21. A computing system arranged to perform the method of any of claims 1 to 10.

Technical Field

The present technology relates generally to serverless computing environments, and more particularly to data provisioning and task provisioning in serverless computing environments.

Background

The concept of serverless computing is rapidly gaining popularity in the field of cloud computing. The name "serverless" is something of a misnomer, because servers are still required. Rather, the name stems from the fact that server management and capacity-planning decisions can be completely hidden or otherwise separated from the end users and consumers of the computing power. Advantageously, an end user can simply provide user-defined functions (UDFs) or other computing jobs to a serverless computing environment, at which point the necessary computing resources are dynamically allocated to execute those UDFs or computing jobs, without the user ever needing to manage (or even know about) any underlying hardware or software resources.

In a serverless computing environment, UDFs are often received from multiple end users. The order in which the UDFs are executed may depend on a number of factors, from aspects of the UDFs themselves (computational requirements, efficiency requirements, etc.) to aspects of the service provided by the operator of the serverless computing environment (maximum latency guarantees, payment priorities, etc.). Regardless of how the UDFs are ordered, they are typically held in one or more task queues prior to execution. A major drawback of serverless computing lies in the way tasks, or UDFs, are provisioned.

UDFs typically require a certain amount of data to execute. For example, a UDF may require a data set to analyze or operate on, or may require several libraries in order to run. In a serverless computing environment, a UDF will often not be scheduled to execute on the same node where the necessary data resides. Instead, one or more nodes are selected to execute the UDF, and the necessary data must first be moved to those nodes before the UDF can begin execution, potentially introducing a significant amount of delay into the overall UDF computation process. Improvements are therefore needed.

Drawings

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific examples thereof which are illustrated in the appended drawings. Understanding that these drawings depict only examples of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 depicts an example environment in which aspects of the present disclosure may operate;

FIG. 2 depicts an example architecture of a serverless computing environment in which aspects of the present disclosure are implemented;

FIG. 3 depicts a flow chart of a method of the present disclosure; and

FIGS. 4A and 4B illustrate schematic diagrams of example computing systems for use with example system embodiments.

Detailed Description

Summary

The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The accompanying drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject technology. It will be apparent, however, that the subject technology is not limited to the specific details set forth herein, and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.

The present technology includes methods and computer-readable media for provisioning serverless computing environments (alternatively referred to as serverless computing clusters) with prefetched data, thereby reducing the latency with which the serverless computing environment executes tasks and functions. In some embodiments, this data provisioning is provided as a new step that occurs before the existing step of task allocation, in which UDFs are assigned to worker nodes of the serverless computing environment.

According to the method, a plurality of user-defined functions (UDFs) are received for execution on worker nodes of a serverless computing cluster. For a first UDF, one or more data locations of the UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating resource availability of a respective worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node sends a prefetch command to one or more of the eligible worker nodes, causing the eligible worker nodes to become provisioned worker nodes for the first UDF by storing prefetched first UDF data before the first UDF is allocated for execution.

Detailed Description

FIG. 1 illustrates an example network environment 100 in which the present disclosure may operate. The environment 100 includes a plurality of user devices (shown here as user devices 102, 104, and 106), although it should be understood that any number of user devices may be provided. The user devices are coupled to one or more of the network 110 and the serverless computing environment 120 in a variety of ways. Each user device is capable of providing access to one or more users (shown here as user 1, user 2, and user 3). As will be understood by one of ordinary skill in the art, the user devices may be provided by one or more of: mobile phones, tablets, laptops, desktops, workstations, and various other computing devices (both mobile and stationary). Generally, it is contemplated that a user device includes an input means allowing a user of the device to input or otherwise define one or more user-defined functions (UDFs) to be executed (i.e., computed) by the serverless computing environment 120. In some embodiments, the user devices 102, 104, 106 can store one or more UDFs in memory, such that a user (e.g., user 1, user 2, or user 3) can simply select a predefined UDF for transmission to and execution on the serverless computing environment 120, or can select a predefined UDF for editing prior to transmission and execution.

As previously described, each user device is coupled to one (or both) of the network 110 and the serverless computing environment 120. For example, the user device 104 is coupled only to the network 110, with the network 110 acting as a relay for communications between the user device 104 and the serverless computing environment 120. The user device 106, on the other hand, is coupled only to the serverless computing environment 120, meaning that the user device 106 is communicatively isolated from the other user devices and from any other devices connected to the network 110. This direct coupling with the serverless computing environment 120 can provide greater security, at the expense of convenience in more general-purpose operations and communications. A third configuration is illustrated by the user device 102, which has direct communication links to both the network 110 and the serverless computing environment 120. In this way, the user device 102 and the user device 104 are communicatively coupled via the network 110, which may be beneficial when user 1 and user 2 are working on a common project or otherwise collaborating, while the user device 102 still maintains a direct communication link with the serverless computing environment 120 for situations where greater security may be required.

The aforementioned communication links may be wired or wireless, employing various communication techniques and standards known in the art, without departing from the scope of the present disclosure. In some embodiments, the serverless computing environment 120 can be publicly accessible via the internet, with its constituent hardware components arranged in a distributed or centralized manner. In some embodiments, one or more of the constituent hardware components of the serverless computing environment 120 may be co-located with one or more user devices. This may occur when the serverless computing environment 120 is not publicly accessible and is instead provided in an access-restricted manner (although co-location is not a requirement for implementing access restrictions). For example, an access-restricted serverless computing environment 120 may be implemented such that only co-located computing devices or users (e.g., employees of an approved business entity) may access it.

As shown, the architectural overview of the serverless computing environment 120 presented in FIG. 1 includes a REST API 122 (Representational State Transfer Application Programming Interface) implementing an interface layer between the serverless computing environment 120 and external networks and computing devices that wish to access or otherwise utilize the computing resources of the serverless computing environment. As shown, these computing resources are provided by a master node 124 (alternatively referred to herein as a scheduling node or an execution node), a plurality of worker nodes 126, and a plurality of storage nodes 128, although it should again be understood that the computing resources of a serverless computing environment may be implemented with various other components without departing from the scope of this disclosure.

Broadly, a plurality of storage nodes 128 are used to store raw or processed data and may be provided as computing elements (e.g., having computing power much greater than that required to manage storage functions) or storage elements (e.g., having computing power slightly greater than that required to manage storage functions). The plurality of worker nodes 126 retrieve data (raw or previously processed) from the plurality of storage nodes 128 and then process the retrieved data according to one or more UDFs being executed on the worker nodes. A given worker node is typically provided with a certain amount of local storage, but this storage is typically consumed primarily by storage of: data currently being processed by the working node, and/or final and intermediate results of processing and executing the UDF. In some embodiments, the functionality of the worker nodes 126 and the storage nodes 128 may be combined such that only a single type of node is provided in the serverless computing environment 120. The plurality of worker nodes 126 may be homogeneous or may differ in their hardware components and computing capabilities. Similarly, the plurality of storage nodes 128 may be homogeneous or may differ in their hardware components and computing capabilities. Finally, as will be appreciated by one of ordinary skill in the art, the worker nodes 126 and the storage nodes 128 may be arranged in any manner (distributed, centralized, or otherwise) known to be compatible with implementations of serverless computing environments (e.g., serverless computing environment 120).

The master node 124 is communicatively coupled between the REST API 122 and the plurality of worker nodes 126 and plurality of storage nodes 128, and serves to assign incoming UDFs to appropriate subsets of the worker nodes and storage nodes. For example, the master node 124 may analyze a given UDF to determine the computing power required or requested, and may then analyze the current state of the plurality of worker nodes 126 to determine which subsets of worker nodes (if any) are eligible to execute the UDF. To enable analysis of their current state, the worker nodes generate and send tickets, or update packets, to the master node 124. These tickets may include, but are not limited to, information such as: the current state/health of the node, the current utilization of the node, and the current storage availability of the node. A ticket may be generated in response to any change in the state of a given worker node, or any change in one of the parameters encoded within the ticket. Tickets may also be generated in response to the expiration of some predefined refresh period; for example, if a worker node has not generated a ticket within the last five minutes, the worker node may be triggered to generate a new ticket even absent a state change.
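As a minimal sketch of how such a ticket mechanism might be realized (the field names, sampled values, and five-minute refresh period are illustrative assumptions, not prescribed by this disclosure), consider:

```python
import time
from dataclasses import dataclass

@dataclass
class WorkerTicket:
    node_id: str
    healthy: bool            # current state/health of the node
    cpu_utilization: float   # current utilization, 0.0 to 1.0
    free_storage_bytes: int  # current storage availability

REFRESH_PERIOD_S = 300.0     # predefined refresh period (e.g., five minutes)

class TicketEmitter:
    """Runs on a worker node; emits tickets on state changes or refresh expiry."""

    def __init__(self, node_id, send_to_master):
        self.node_id = node_id
        self.send_to_master = send_to_master  # callback delivering a ticket to the master node
        self.last_state = None
        self.last_sent = 0.0

    def poll(self, healthy, cpu_utilization, free_storage_bytes):
        state = (healthy, round(cpu_utilization, 2), free_storage_bytes)
        now = time.time()
        if state != self.last_state or now - self.last_sent >= REFRESH_PERIOD_S:
            self.send_to_master(WorkerTicket(self.node_id, *state))
            self.last_state = state
            self.last_sent = now  # generating a ticket resets the refresh period

emitter = TicketEmitter("worker-126a", send_to_master=print)
emitter.poll(healthy=True, cpu_utilization=0.35, free_storage_bytes=50 * 2**30)
```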

As such, the master node 124 may broadly be considered the central executor and central information hub of the serverless computing environment 120, operable to monitor the status of the worker nodes and storage nodes, and further operable to analyze UDFs and assign them to task queues (in which the UDFs wait to execute). The master node 124 will assign a given UDF from the task queue to one or more worker nodes for execution. In embodiments where storage nodes are distinct from worker nodes, the data needed to execute a UDF will, by construction, not reside on the same node that is to process it. Even in embodiments with at least some combined worker/storage nodes, the data needed to execute a UDF is unlikely to be stored on the same node that is to process it, particularly in a serverless computing environment with a very large number of nodes, very large or distributed data sets, or both.

Thus, when it receives a newly allocated UDF, a worker node almost always must locate and retrieve the required data before it can begin executing the UDF. During this process the worker node is idle and the UDF goes unprocessed. Depending on various network characteristics of the serverless computing environment (e.g., bottlenecks, bandwidth issues, downed links, downed nodes, etc.), this delay may be even greater than that expected for locating and retrieving the data alone. Processing of UDFs in a serverless computing environment has thus been found to suffer latency beyond normal UDF execution latency because provisioning of the execution environment (e.g., spinning up certain Docker containers) does not begin until after a request or UDF execution task has been received at the worker node. Consequently, while the master node 124 may be used to optimize between various objectives (e.g., minimizing the average latency of UDFs in the task queue, minimizing the longest latency of a UDF in the task queue, maximizing utilization of the worker nodes, etc.), such optimization is inherently limited because it does not account for the latency incurred by data retrieval.

FIG. 2 illustrates an example architecture 200 of a serverless computing environment that implements the data provisioning solution of the present disclosure. The REST API 202 (or other listener layer) receives or retrieves various user-defined functions (UDFs), represented herein as tasks 204 ("tasks" and "UDFs" are used interchangeably herein). As previously described, these UDFs or tasks may be generated by users of one or more user computing devices and actively pushed to the REST API 202 or passively pulled by the REST API 202.

From the REST API 202, the tasks 204 are input to a task queue 210, where they are typically stored based on the order in which they are received (e.g., in a first-in-first-out (FIFO) queue), although it should be understood that the task queue 210 may implement other queuing disciplines. The task queue 210 is communicatively coupled to a master node 220, and the master node 220 receives a plurality of tickets 232 from a corresponding plurality of worker nodes 230 (represented here as respective nodes 230a, 230b, 230c). The tickets and nodes described here may be considered to have the same characteristics as the tickets and nodes previously described with respect to FIG. 1.

A conventional serverless computing environment would simply retrieve tasks from the task queue 210 and assign them to the worker nodes 230, at which point a given worker node would retrieve the necessary data from one or more of the plurality of storage nodes 240 and only then begin executing the task. To avoid the delays associated with such conventional environments, the illustrated architecture 200 provides an additional component: the data provisioning module 206.

The data provisioning module 206 may be communicatively coupled between the REST API 202 and the task queue 210 as shown, although alternative placements are possible (if desired). Through this coupling, data provisioning module 206 also receives a copy (or is able to check a copy) of each task 204 that is being input into task queue 210. In some embodiments, data provisioning module 206 may be configured to store an internal log of each task that it has checked or received, such that the internal log may be compared to the tasks contained in task queue 210, thereby ensuring that a consistent state is maintained between data provisioning module 206 and task queue 210.

From each received copy of a task, data provisioning module 206 examines and analyzes the task to extract information corresponding to the data that a worker node will need in order to perform the task. In some instances, this information may be extracted from metadata associated with the task or UDF. In some embodiments, this information may be extracted after data provisioning module 206 parses and analyzes the task or UDF itself. As mentioned before, UDFs are conventionally provisioned only to worker nodes, which are then responsible for performing all necessary data retrieval themselves, usually as an early step in executing the UDF. As such, in some embodiments, the data provisioning module 206 may analyze each UDF to retrieve the necessary information from the function calls that each UDF would otherwise make.
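A minimal sketch of this extraction step follows, assuming each submitted UDF carries a metadata mapping; the "data_locations" key and its shape are hypothetical stand-ins for whatever metadata format an implementation adopts:

```python
def characterize(udf_metadata):
    """Return a list of (data location, optional byte range) pairs needed by a UDF."""
    requirements = []
    for entry in udf_metadata.get("data_locations", []):
        uri = entry["uri"]                    # where the required data resides
        byte_range = entry.get("byte_range")  # None means fetch the entire object
        requirements.append((uri, tuple(byte_range) if byte_range else None))
    return requirements

metadata = {"data_locations": [
    {"uri": "http://storage-node-240a/datasets/logs.bin", "byte_range": [0, 1048575]},
    {"uri": "http://external.example.com/libs/mathlib.tar"},  # external data source
]}
print(characterize(metadata))
```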

Typically, the information specifies one or more locations of the data and portions of the data to retrieve from each location. For example, the location identifier may be provided in the form of a Uniform Resource Identifier (URI). The URI may point to a location internal to the serverless computing environment (i.e., a storage node of the plurality of storage nodes 240, or a worker node of the plurality of worker nodes 230), or may point to a location external to the serverless computing environment (i.e., at some node, server, computing device, database, etc., provided by a third party or unrelated to the functionality of the serverless computing environment). Further, based on various storage schemes (and their associated data replication policies), there may be instances where data needed to perform a given task is stored in multiple locations. In such instances, the location identifier may be provided for one or more qualified locations, either internal or external.

Such a location may be a repository of data to be processed by the UDF, or a library required for the UDF's execution; in both of these cases the entire content specified by the URI may be downloaded. Often, however, the URI contains more fragmented or varied data, only a portion of which is needed by the UDF. Thus, in addition to a data location/URI, a more general implementation may include one or more data portion identifiers for each URI. For example, each required data portion at a given URI may be specified by: start and end bytes, start byte and total bytes, total bytes and end byte, or some other manner of identifying a portion of data in a file system or at a given URI.
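Where the location is HTTP-addressable, a (URI, data portion) pair maps naturally onto a ranged read; the sketch below is one illustrative possibility and assumes the storage endpoint honors HTTP Range requests:

```python
import urllib.request

def fetch_portion(uri, start_byte, end_byte):
    """Retrieve only the identified portion of the data at the given URI."""
    request = urllib.request.Request(uri)
    # An HTTP Range header is inclusive of both endpoints.
    request.add_header("Range", f"bytes={start_byte}-{end_byte}")
    with urllib.request.urlopen(request) as response:
        return response.read()

# Example: fetch the first mebibyte of a data set from a storage node.
# data = fetch_portion("http://storage-node-240a/datasets/logs.bin", 0, 1048575)
```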

Thus, with such characterization information determined for each received task 204, as one or more (data location, data portion) pairs or bare (data location) entries, data provisioning module 206 may send the characterization information to master node 220. In some embodiments, data provisioning module 206 may block a task from entering task queue 210 until the characterization information for that task has been determined. In this way, the following situation can be avoided: a task enters the task queue and is selected by master node 220 for allocation before data provisioning module 206 has sent the task's characterization information to master node 220. For example, data provisioning module 206 may itself receive a task 204 from REST API 202, then send the task to task queue 210 while substantially simultaneously sending the corresponding characterization information to master node 220; REST API 202 may receive a task 204 and forward it only to data provisioning module 206, which then sends a "release" command to REST API 202 at substantially the same time the characterization information is sent to master node 220; or master node 220 may check whether it has received the corresponding characterization information for a task before allocating that task. A sketch of the first of these options appears below.
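This sketch of the first coordination option is illustrative only (the class and method names, including receive_characterization, are hypothetical): the module forwards characterization information to the master node before releasing the task into the queue, so the master can never dequeue a task it has not yet been told about.

```python
import queue

def characterize(metadata):  # stub standing in for the extraction sketch above
    return metadata.get("data_locations", [])

class DataProvisioningModule:
    def __init__(self, task_queue, master):
        self.task_queue = task_queue
        self.master = master
        self.internal_log = []  # comparable against the queue to verify consistent state

    def accept(self, task):
        info = characterize(task["metadata"])
        self.master.receive_characterization(task["id"], info)  # master learns first
        self.internal_log.append(task["id"])
        self.task_queue.put(task)  # task released only after the master is informed
```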

Regardless of the implementation, it is generally contemplated that master node 220 receives characterization information for a task before allocating one or more worker nodes to perform the task. In contrast to conventional systems that perform only a single step of task provisioning, master node 220 is operable in a new manner to perform a first step of data provisioning and a second step of task provisioning (i.e., assignment) based on the characterization information.

In the data provisioning step, one or more worker nodes are assigned to prefetch the data that will be needed to perform a task or UDF. In this manner, it is expected that one or more worker nodes will already have stored copies of the necessary data before master node 220 allocates a given task. Optimally, this data provisioning may be performed in a just-in-time (JIT) fashion, in which master node 220 selects a worker node for data provisioning and then allocates the task for execution just as the worker node completes its prefetch of the necessary data. In some embodiments, only idle nodes may be assigned to prefetch data. In some embodiments, an active node with a certain amount of spare capacity may be assigned to use that spare capacity to prefetch data. That is, a worker node may currently be using a first percentage of its computing power to perform one or more tasks, and may use some second percentage of its computing power to retrieve prefetch data in parallel with its ongoing computations.
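A minimal candidate filter in this spirit might look as follows, reusing the ticket fields from the earlier sketch; the spare-capacity threshold is a hypothetical tuning parameter:

```python
SPARE_CAPACITY_THRESHOLD = 0.25  # assumed minimum idle fraction for parallel prefetching

def can_prefetch(ticket):
    """A node qualifies if it is idle, or active with sufficient spare capacity."""
    if not ticket.healthy:
        return False
    spare = 1.0 - ticket.cpu_utilization
    return ticket.cpu_utilization == 0.0 or spare >= SPARE_CAPACITY_THRESHOLD
```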

Because the characterization information indicates the data and data locations that the worker nodes need to access before executing the assigned task or UDF, master node 220 may perform different degrees of analysis when selecting a worker node from the plurality of worker nodes 230 to prefetch the data. For example, to perform data provisioning for a given task, master node 220 may analyze any of the following criteria: whether a working node is currently available; whether a worker node will soon be available (e.g., expected to be available when a given task reaches the top of the task queue 210); whether a worker node has the computing power required for a given task; whether a worker node currently has at least a portion of the data needed for a given task; proximity of the working node to the nearest location of the task data (e.g., transmission delay, or number of hops); whether a task can be processed in parallel on a plurality of working nodes; and whether other future tasks will require the same data as the given task.

A simple approach may be to check only whether a worker node is available and eligible (i.e., has the computing power) to process the task, and then select from that pool one or more worker nodes to be assigned to prefetch the data needed to process the task or UDF. A moderately advanced method can build on the simple method by adding a check of whether a worker node currently stores any of the data needed to perform the task. A more advanced approach can in turn build on the moderately advanced one by having master node 220 further calculate the proximity between eligible worker nodes and the various sources of the data needed to perform the task.
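These three tiers compose naturally into a single ranking pass, sketched below under stated assumptions (the helper callbacks bytes_held and hop_count are hypothetical stand-ins for local-data accounting and for the proximity information discussed next):

```python
def rank_candidates(tickets, required_uris, needed_compute, bytes_held, hop_count):
    """Hard-filter on eligibility, then rank by locally held data and proximity.

    bytes_held(node_id, uri) -> bytes of that data already stored on the node
    hop_count(node_id, uri)  -> network distance from the node to the data source
    """
    eligible = [t for t in tickets
                if t.healthy and (1.0 - t.cpu_utilization) >= needed_compute]

    def score(ticket):
        local = sum(bytes_held(ticket.node_id, uri) for uri in required_uris)
        distance = sum(hop_count(ticket.node_id, uri) for uri in required_uris)
        return (-local, distance)  # prefer more data already local, then closer sources

    return sorted(eligible, key=score)
```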

To reduce the complexity of such proximity calculations, master node 220 may construct a network graph or spanning tree representing the interconnections between the various worker nodes, storage nodes, and other data sources or URIs, so that proximity calculations need not be performed from scratch and may instead amount to lookups. Alternatively, after performing a proximity calculation, master node 220 may use the newly calculated proximity information to build and store a similar network graph in memory, reducing the cost of future calculations.
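A minimal cache of this kind is sketched below: the cluster's interconnection graph is traversed once with breadth-first search, and later proximity queries become dictionary lookups (hop count is used as the distance metric here; a latency-weighted graph would work analogously):

```python
from collections import deque

def all_hop_counts(graph):
    """graph: node -> list of adjacent nodes. Returns source -> {node -> hops}."""
    hops = {}
    for source in graph:
        dist = {source: 0}
        frontier = deque([source])
        while frontier:
            node = frontier.popleft()
            for neighbor in graph.get(node, []):
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    frontier.append(neighbor)
        hops[source] = dist
    return hops

topology = {"worker-230a": ["storage-240a"],
            "worker-230b": ["storage-240a"],
            "storage-240a": ["worker-230a", "worker-230b"]}
HOPS = all_hop_counts(topology)             # computed once
print(HOPS["worker-230b"]["storage-240a"])  # later queries are lookups: prints 1
```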

Independent of the method used, master node 220 must also determine how many worker nodes should be selected to prefetch data when provisioning data for a given task. In some embodiments, the minimum number of nodes required to perform the given task may be provisioned. However, this entails the risk that, by the time master node 220 actually allocates the task for execution, a provisioned worker node is no longer available. For example, a worker node may have reduced computing power due to a hardware failure or due to increased computational load from its currently executing tasks. If either of these conditions occurs, additional worker nodes must be provisioned while the task waits, incurring the same latency issues typically encountered in conventional serverless computing environments. As such, it may be desirable to provision some number of worker nodes beyond that required by the task or UDF, with the extra nodes effectively acting as a buffer, or hedge, against any data-provisioned nodes becoming ineligible at the time of task allocation.

Some tasks or UDFs may be compatible with parallel execution on multiple worker nodes, where the determination of parallelism eligibility is provided by a user during creation of the UDF, determined by data provisioning module 206, or determined by master node 220. When parallelism eligibility has been determined, master node 220 may perform an additional balancing step in deciding how many worker nodes to allocate to the parallel task. Generally, the more nodes assigned to a parallel task, the faster the task completes. In the context of the present disclosure, however, there is an inflection point at which a parallel task would be processed in less time than a participating node requires to complete its data-provisioning prefetch for an upcoming task. Accordingly, in addition to any general limits on compute availability that arise when allocating worker nodes for parallel processing, an upper limit may be placed on the number of nodes allocated, such that the participating nodes will all complete their data-provisioning prefetches before completing their portions of the parallel task.
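Under two simplifying assumptions (the task's compute time divides evenly across k nodes, and each node's prefetch for the upcoming task takes roughly constant time), this inflection point reduces to the inequality task_time / k >= prefetch_time, as in the sketch below:

```python
def max_parallel_nodes(task_time_s, prefetch_time_s, available_nodes):
    """Largest node count whose per-node compute share still covers a prefetch."""
    bound = int(task_time_s // prefetch_time_s)  # largest k with task_time / k >= prefetch_time
    return max(1, min(available_nodes, bound))

# A 600 s task with 45 s prefetches should run on at most 13 nodes, even if 50
# nodes are available: 600 / 13 is about 46 s > 45 s, but 600 / 14 is about 43 s < 45 s.
print(max_parallel_nodes(task_time_s=600, prefetch_time_s=45, available_nodes=50))
```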

Parallel processing tasks may be further accommodated in the way prefetched data is distributed to worker nodes. A simple approach may provision each selected worker node with the entire set of data needed to perform the complete task. However, especially for highly parallel tasks distributed over many worker nodes, much of this prefetching is unnecessary, since each worker node in a parallel task often needs only a portion of the overall task data set. Thus, in some embodiments, master node 220 may provision a parallel task across multiple worker nodes such that each worker node prefetches only the portion of the task data needed for that worker node's share of the overall task execution. Logic similar to that previously described for buffering with backup provisioned nodes may also be applied. In some embodiments, each provisioned worker node may prefetch data sufficient to perform multiple roles in the parallel processing; in this way, master node 220 may check the current state of the worker nodes (based on received tickets 232) when the parallel task is to be assigned, and may then perform the final parallel task assignment among the provisioned worker nodes in the most efficient manner, based on this current state information and knowledge of which worker nodes can assume which roles.

In some instances, tasks eligible for parallel processing may be divided into multiple parallel subtasks that are not all executed in parallel. For example, one parallel task may be divided into 100 parallel subtasks but then processed on only 50 worker nodes. Here, master node 220 may perform data provisioning that exploits the knowledge that, after the first 50 subtasks complete, each of the 50 worker nodes will continue to be provisioned with the necessary prefetched data. For example, the master node may analyze the plurality of tickets 232 and conclude that the plurality of worker nodes 230 are currently congested but likely to become highly available in the near future, once a large currently executing task completes. Master node 220 may thus provision currently available worker nodes with prefetched data corresponding to smaller subtasks of the parallel task, and worker nodes that will soon become available with prefetched data corresponding to larger subtasks, as in the sketch that follows.
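The following sketch illustrates such an availability-aware split (the division into "available now" and "available soon" node sets is assumed to come from analysis of the tickets 232; all names are illustrative):

```python
def assign_subtasks(subtasks, nodes_available_now, nodes_available_soon):
    """subtasks: list of (subtask_id, data_size_bytes). Returns node -> subtask_id."""
    by_size = sorted(subtasks, key=lambda s: s[1])  # smallest data requirements first
    assignment = {}
    for node in nodes_available_now:      # available now: provision smaller subtasks
        if by_size:
            assignment[node] = by_size.pop(0)[0]
    for node in nodes_available_soon:     # available soon: provision larger subtasks
        if by_size:
            assignment[node] = by_size.pop()[0]
    return assignment

print(assign_subtasks([("s1", 10), ("s2", 500), ("s3", 40)],
                      ["worker-230a"], ["worker-230b"]))
# {'worker-230a': 's1', 'worker-230b': 's2'}
```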

In other embodiments, tasks or UDFs in the task queue 210 may be incompatible with parallel processing but may still have parallel data requirements. That is, task A, task B, and task C may all require the same data for their execution. This may occur when different UDFs test or implement different theories, hypotheses, algorithms, etc. against the same data set for purposes of comparing the UDFs. As with tasks that can be processed in parallel, tasks with parallel data requirements may be treated by master node 220 as a special case. Likewise, the determination of parallel data requirements may be input by a user when generating the UDFs, may be determined by data provisioning module 206, may be determined by master node 220, or any combination of the three. For example, data provisioning module 206 may scan the characterization information generated for each task 204 to identify any similar or overlapping (whole or partial) data requirements, and may update or flag the characterization information to reflect this fact. In practice, after provisioning one or more worker nodes to prefetch data for task A, master node 220 may then assign the same worker nodes to subsequently execute task B and/or task C once task A completes. Although such task allocation does not follow the conventional first-in-first-out discipline typically employed by queues, an overall efficiency gain may be achieved by minimizing the number of prefetches performed by individual worker nodes.
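One way the flagging step might work is to fingerprint each task's data requirements so that tasks with matching requirements group together (this sketch detects only exact matches; partial-overlap detection would need a richer comparison):

```python
from collections import defaultdict

def group_by_data_needs(tasks):
    """tasks: list of {'id': ..., 'data_locations': tuple of URIs}."""
    groups = defaultdict(list)
    for task in tasks:
        fingerprint = frozenset(task["data_locations"])  # order-insensitive identity
        groups[fingerprint].append(task["id"])
    return dict(groups)

tasks = [{"id": "A", "data_locations": ("uri-1", "uri-2")},
         {"id": "B", "data_locations": ("uri-2", "uri-1")},
         {"id": "C", "data_locations": ("uri-3",)}]
print(group_by_data_needs(tasks))  # A and B share a group; C stands alone
```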

This consideration applies generally to any of the special cases described above: although prefetching reduces latency compared to conventional approaches without prefetching, repeated or otherwise unnecessary prefetches can increase latency and reduce the benefits provided by the present disclosure. Thus, another factor on which master node 220 may base data allocation and task allocation is minimizing the number of times a given worker node refreshes its store of prefetched data. In parallel task processing, after completing a subtask, a worker node may be assigned another subtask of the same overall parallel task. In parallel-data-requirement task processing, after a first task completes, a second task from the same parallel-data-requirement group may be assigned to the worker node.

Fig. 3 depicts a flow chart 300 of an example method of the present disclosure. In a first step 302, one or more UDFs are received at a REST API or listener layer. For example, one or more UDFs may be received directly from one or more user devices, or may be received indirectly from storage over a network or otherwise.

After receiving one or more UDFs, each UDF is analyzed in step 304. In particular, each UDF is analyzed for characterizing information about the execution or provisioning required to enable execution of the UDF. For example, the characterization information may include the location of the data required for the UDF, and may also include an identification of one or more particular portions of the required data from that location. This characterization information may be parsed from the UDF (e.g., by analyzing function calls that the worker node will perform when initiating execution of the UDF), or may be derived by other analytical means.

In step 306 (which may be performed after steps 302 and 304), the master node or scheduler receives a plurality of worker node tickets from one or more worker nodes of the system. These worker node tickets indicate the resource availability of the respective worker nodes (i.e., the worker nodes that generated the tickets). From these worker node tickets, the master node or scheduler is operable to characterize the status of each worker node, and hence of the entire pool of worker nodes, in substantially real time. Thus, to maintain up-to-date information, step 306 may be performed continuously, i.e., concurrently with one or more of the remaining steps shown in flowchart 300, rather than only between steps 304 and 308 as illustrated. Worker node tickets may be generated at regular intervals (e.g., every minute), in response to detecting a change (e.g., a state change) at the worker node, or both. A worker node ticket may be generated in a push manner (transmitted unprompted by the worker node to the master node or scheduler) or in a pull manner (generated by the worker node in response to a command from the master node or scheduler).

In step 308, the characterization information is analyzed together with the worker node tickets to determine a set of eligible worker nodes that can execute the given UDF corresponding to the characterization information. In some embodiments, the analysis may be performed by a data provisioning module of the serverless computing environment, by a master node or scheduler of the serverless computing environment, or by some combination of the two. For example, the data provisioning module may be provided as a module within the same serverless computing environment on which the UDFs are executed, such that the data provisioning module comprises hardware elements, software elements, or some combination of both. For example, the data provisioning module may be provided in the serverless computing environment in a manner similar to the master node or scheduler. In some embodiments, the data provisioning module may be flexibly redistributed among the various hardware elements comprising the serverless computing environment. In some embodiments, the data provisioning module may be provided by a static set of hardware elements, which may be selected from the hardware elements of the serverless computing environment or provided separately from them. Wherever the analysis is performed, it generally examines the characterization information of the UDF, or the computational requirements indicated by separate analysis of the UDF, and then weighs these computational requirements against the computational capabilities of each worker node (as derived from the one or more worker node tickets received from that worker node). This step is thus operable to determine the set of worker nodes that have both the raw computing power and the available computing power to execute the UDF being analyzed.

In a next step 310, a subset is selected from the set of all worker nodes eligible to execute the given UDF. In particular, the subset is selected to contain only those eligible worker nodes that are best suited to be provisioned with prefetched data for the UDF. The prefetched data corresponds to the data needed to execute the UDF, as indicated by the characterization information obtained in step 304. In some embodiments, a minimum, a maximum, or both may be set such that the number of data-provisioned nodes meets desired parameters. The nodes to be provisioned with prefetched data may be selected according to criteria such as: the distance of a node from the source from which the data is prefetched (e.g., latency, number of hops, etc.), available storage capacity, estimated time to complete any current processing task, or other factors as desired.

In a next step 312, the master node or scheduler sends a prefetch command to the selected subset of worker nodes, causing each to become a provisioned worker node by retrieving and storing the prefetched UDF data before the UDF has been allocated for execution. In this case, the prefetch command may include a URI for the storage node and an indicator of the portion of data to be retrieved from the URI.
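One possible shape for such a command and its worker-side handling is sketched below; the field names are hypothetical, and the fetch callback could be the ranged-read sketch shown earlier:

```python
from dataclasses import dataclass

@dataclass
class PrefetchCommand:
    udf_id: str      # the UDF this data is being provisioned for
    uri: str         # where the data resides (storage node, worker node, or external)
    start_byte: int  # data portion indicator: inclusive start offset
    end_byte: int    # data portion indicator: inclusive end offset

class PrefetchStore:
    """Worker-side store holding prefetched data until the UDF is allocated."""

    def __init__(self):
        self.cache = {}  # (udf_id, uri) -> bytes

    def handle(self, command, fetch):
        # fetch(uri, start, end) -> bytes, e.g. fetch_portion from the earlier sketch
        data = fetch(command.uri, command.start_byte, command.end_byte)
        self.cache[(command.udf_id, command.uri)] = data
```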

In a next step 314, the master node or scheduler may then perform UDF allocation such that the UDF is executed by the serverless computing environment. UDF allocation may occur only among the nodes data-provisioned for that UDF, or may also include other worker nodes that have not been data-provisioned. UDF allocation may be performed or optimized against various parameters (e.g., completion time, desired degree of parallel computation, power consumption, etc.). Once one or more worker nodes have been selected, the master node or scheduler may cause the selected worker node(s) to begin executing the UDF. If a data-provisioned worker node is selected, the UDF can begin executing immediately, since all necessary data is already stored on the worker node. If a worker node that has not been data-provisioned is selected, the UDF pauses and does not begin execution until the worker node has located and retrieved the necessary data.
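The resulting allocation preference can be summarized in a few lines (illustrative only; a real allocator would weigh the optimization parameters above rather than simply taking the first candidate):

```python
def allocate(udf_id, provisioned_nodes, other_eligible_nodes):
    """Prefer provisioned nodes, which can begin executing immediately."""
    if provisioned_nodes:
        return provisioned_nodes[0], True       # data already local: start at once
    if other_eligible_nodes:
        return other_eligible_nodes[0], False   # must locate and retrieve data first
    raise RuntimeError(f"no eligible worker node for UDF {udf_id}")

node, immediate = allocate("udf-42", ["worker-230a"], ["worker-230b"])
print(node, immediate)  # worker-230a True
```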

FIGS. 4A and 4B illustrate an example computing system used as a control device in example system embodiments. More suitable embodiments will be apparent to those of ordinary skill in the art when practicing the present technology. One of ordinary skill in the art will also readily appreciate that other system embodiments are possible.

FIG. 4A illustrates a conventional system bus computing system architecture 400 in which components of the system are in electrical communication with each other using a bus 405. The example system 400 includes a processing unit (CPU or processor) 410 and a system bus 405 that couples various system components, including a system memory 415 such as read only memory (ROM) 420 and random access memory (RAM) 425, to the processor 410. The system 400 may include a cache 412 directly connected to the processor 410, in close proximity to the processor 410, or integrated as part of the processor 410, serving as high-speed memory. The system 400 may copy data from the memory 415 and/or the storage device 430 to the cache 412 for quick access by the processor 410. In this manner, the cache may provide a performance boost that avoids processor 410 delays while waiting for data. These and other modules may control or be configured to control the processor 410 to perform various actions. Other system memory 415 may also be available for use. The memory 415 may include multiple different types of memory with different performance characteristics. The processor 410 may include any general purpose processor and hardware or software modules (e.g., module 1 432, module 2 434, and module 3 436 stored in the storage device 430) configured to control the processor 410, as well as a special purpose processor in which software instructions are incorporated into the actual processor design. The processor 410 may essentially be a completely self-contained computing system containing multiple cores or processors, a bus, a memory controller, a cache, and so forth. A multi-core processor may be symmetric or asymmetric.

To enable user interaction with the computing device 400, an input device 445 may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth. An output device 435 may also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, a multimodal system may enable a user to provide multiple types of input to communicate with the computing device 400. A communication interface 440 may generally govern and manage the user input and system output. There is no restriction on operation with any particular hardware arrangement, so the basic features here may readily be substituted with improved hardware or firmware arrangements as they are developed.

The storage device 430 is a non-volatile memory and may be a hard disk or another type of computer-readable medium that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memory (RAM) 425, read only memory (ROM) 420, and hybrids thereof.

The storage device 430 may include software modules 432, 434, 436 for controlling the processor 410. Other hardware or software modules are contemplated. The storage device 430 may be connected to the system bus 405. In one aspect, a hardware module performing a particular function may include a software component stored in a computer-readable medium that performs the function in conjunction with the necessary hardware components (e.g., the processor 410, bus 405, display 435, and so forth).

FIG. 4B illustrates an example computer system 450 having a chipset architecture that may be used to perform the described methods and to generate and display a graphical user interface (GUI). Computer system 450 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology. System 450 may include a processor 455, representing any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform the identified computations. The processor 455 may communicate with a chipset 460, which may control input to and output from the processor 455. In this example, the chipset 460 outputs information to an output device 465 (e.g., a display) and may read information from, and write information to, a storage device 470, which may include, for example, magnetic media and solid state media. The chipset 460 may also read data from, and write data to, RAM 475. A bridge 480 may be provided for interfacing with various user interface components 485 and with the chipset 460. Such user interface components 485 may include a keyboard, a microphone, touch detection and processing circuitry, a pointing device such as a mouse, and so forth. In general, inputs to the system 450 can come from any of a variety of sources, machine-generated and/or human-generated.

The chipset 460 may also interface with one or more communication interfaces 490, which may have different physical interfaces. Such communication interfaces may include interfaces for wired and wireless local area networks, for broadband wireless networks, and for personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein may include receiving ordered data sets over a physical interface, or the data may be generated by the machine itself, via the processor 455, through analysis of data stored in the storage device 470 or the RAM 475. Further, the machine may receive inputs from a user via the user interface components 485 and perform appropriate functions, such as browsing functions, by interpreting these inputs using the processor 455.

It should be understood that example systems 400 and 450 may have more than one processor 410, or may be part of a group or cluster of computing devices networked together to provide greater processing power.

For clarity of explanation, in some cases the present techniques may be presented as including individual functional blocks including functional blocks that comprise: devices, device components, steps or routines in methods implemented in software or a combination of hardware and software.

In some embodiments, the computer-readable storage devices, media, and memories may comprise wired or wireless signals including bit streams and the like. However, when mentioned, non-transitory computer-readable storage media explicitly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

The methods according to the above description may be implemented using computer-executable instructions stored in, or otherwise available from, computer-readable media. Such instructions may include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of the computer resources used may be accessed over a network. The computer-executable instructions may be binary; intermediate format instructions, such as assembly language; firmware; or source code. Computer-readable media that can be used to store instructions, information used, and/or information created during the methods according to the above description include magnetic or optical disks, flash memory, USB devices equipped with non-volatile memory, networked storage devices, and the like.


Devices implementing methods according to these disclosures may include hardware, firmware, and/or software, and may take any of a variety of form factors. Such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rack-mounted devices, stand-alone devices, and the like. The functionality described herein may also be implemented in a peripheral device or add-on card. Such functions may also be implemented on circuit boards in different chips or in different processes executing in a single device.

The instructions, the medium for communicating the instructions, the computing resources for performing them, and other structures for supporting such computing resources are means for providing the functionality described in these disclosures.

Although a variety of information has been used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements, as one of ordinary skill would be able to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to those described features or acts. Such functionality may be distributed differently, or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Moreover, claim language reciting "at least one of" a set indicates that one member of the set, or multiple members of the set, satisfies the claim.
