Operation method, apparatus, device and storage medium of an inference service platform

Document No.: 1798225    Publication date: 2021-11-05

Reading note: this technology, "Operation method, apparatus, device and storage medium of an inference service platform" (推理服务平台的运行方法、装置、设备及存储介质), was created by 袁正雄, 钱正宇, 施恩, 胡鸣人, 李金麒, 褚振方, 李润青 and 黄悦 on 2021-08-04. Its main content is as follows: the disclosure provides an operation method, apparatus, device and storage medium of an inference service platform, relating to the field of artificial intelligence and in particular to the field of inference services for artificial intelligence models. The specific implementation scheme is: determining the inference tasks to be distributed for the inference service platform; determining the traffic weight of each inference service module, the traffic weight indicating the proportion of the total number of inference tasks that should be allocated to that inference service module; distributing, based on the traffic weight of each inference service module, a corresponding number of the inference tasks to be distributed to each inference service module; and executing the inference tasks by using the inference service modules. The method automatically allocates a corresponding number of inference tasks to each inference service module based only on the traffic weights, which greatly reduces the workload a user would otherwise incur in distributing tasks to the inference service modules and significantly improves the efficiency of the inference service.

1. An operation method of an inference service platform comprises the following steps:

determining inference tasks to be distributed for an inference service platform, wherein the inference service platform comprises at least two inference service modules, the inference service modules having different versions and each being used for executing the same type of inference service;

determining the traffic weight of each inference service module, wherein the traffic weight of the inference service module is used for indicating the proportion of the number of inference tasks needing to be distributed by the inference service module in the total number of the inference tasks;

distributing a corresponding number of reasoning tasks in the reasoning tasks to be distributed to each reasoning service module based on the flow weight of each reasoning service module;

and executing the reasoning task by using a reasoning service module.

2. The method of claim 1, wherein the determining inference tasks to be assigned for an inference service platform comprises:

when the number of tasks in a task queue of the reasoning service platform reaches a preset number, determining the reasoning tasks in the task queue as reasoning tasks to be distributed;

or, determining the inference task stored in the task queue of the inference service platform in the preset time period as the inference task to be allocated every time a preset time period passes.

3. The method of claim 1, wherein said determining a traffic weight for each of said inference service modules comprises:

determining a service scene corresponding to the reasoning service platform;

and determining the flow weight of each inference service module according to the type of the service scene.

4. The method of claim 1, prior to said determining a traffic weight for each of said inference service modules, further comprising:

responding to the weight configuration operation aiming at each inference service module, and configuring the traffic weight of each inference service module;

and associating and recording the identification information of each inference service module and the corresponding flow weight.

5. The method according to any one of claims 1 to 4, wherein the inference service module comprises a plurality of inference service modules, different ones of which are respectively used for executing different subtasks in the inference service;

wherein the executing the inference task by utilizing an inference service module comprises the following steps:

for each of the plurality of inferencing service modules:

receiving data to be processed of a corresponding subtask in the inference service based on the inference service module, wherein the data to be processed is a processing result generated by a previous inference service module adjacent to the inference service module or original data of a first subtask in the inference service;

and calculating a processing result corresponding to the data to be processed based on the reasoning service module, and sending the processing result to the next reasoning service module adjacent to the reasoning service module.

6. The method of claim 5, wherein said sending the processing result to a next inference service module adjacent to the inference service module comprises:

determining a service address of a next inference service module adjacent to the inference service module from the inference service module, wherein the service address of each inference service module of the plurality of inference service modules is stored in other inference service modules in advance;

and sending the processing result to the next inference service module through the service address.

7. The method of claim 6, each of said inference service modules having a first identification indicating an order of arrangement of the corresponding said inference service module;

the service address of each inference service module is generated based on the corresponding first identification of the inference service module.

8. An operating apparatus of an inference service platform, comprising:

a task determination module, configured to determine inference tasks to be distributed for an inference service platform, wherein the inference service platform comprises at least two inference service modules which have different versions and are each used for executing the same type of inference service;

the weight determining module is used for determining the traffic weight of each inference service module, and the traffic weight of each inference service module is used for indicating the proportion of the inference task quantity required to be distributed by the inference service module in the inference task total quantity;

the task allocation module is used for allocating a corresponding number of reasoning tasks in the reasoning tasks to be allocated to each reasoning service module based on the flow weight of each reasoning service module;

and the task execution module is used for executing the reasoning task by utilizing the reasoning service module.

9. The apparatus of claim 8, wherein the task determination module, when configured to determine the inference task to be assigned for the inference service platform, is further configured to:

when the number of tasks in a task queue of the reasoning service platform reaches a preset number, determining the reasoning tasks in the task queue as reasoning tasks to be distributed;

or, determining the inference task stored in the task queue of the inference service platform in the preset time period as the inference task to be allocated every time a preset time period passes.

10. The apparatus of claim 8, wherein the weight determination module, when configured to determine the traffic weight for each inference service module, is further configured to:

determining a service scene corresponding to the reasoning service platform;

and determining the flow weight of each inference service module according to the type of the service scene.

11. The apparatus of claim 8, further comprising a weight configuration module to:

responding to the weight configuration operation aiming at each inference service module, and configuring the traffic weight of each inference service module;

and associating and recording the identification information of each inference service module and the corresponding flow weight.

12. The apparatus according to any one of claims 8 to 11, wherein the inference service module comprises a plurality of inference service modules, different ones of which are respectively used for executing different subtasks in the inference service;

the task execution module is used for executing the reasoning task by utilizing the reasoning service module, and is also used for:

for each inference service module in the plurality of inference service modules, receiving data to be processed of a corresponding subtask in the inference service based on the inference service module, wherein the data to be processed is a processing result generated by a previous inference service module adjacent to the inference service module or original data of a first subtask in the inference service;

and calculating a processing result corresponding to the data to be processed based on the reasoning service module, and sending the processing result to the next reasoning service module adjacent to the reasoning service module.

13. The apparatus of claim 12, wherein the task execution module, when configured to send the processing result to a next inference service module adjacent to the inference service module, is further configured to:

determining a service address of a next inference service module adjacent to the inference service module from the inference service module, wherein the service address of each inference service module of the plurality of inference service modules is stored in other inference service modules in advance;

and sending the processing result to the next inference service module through the service address.

14. The apparatus of claim 13, each of the inference service modules having a first identification indicating an order of arrangement of the corresponding inference service module;

the service address of each inference service module is generated based on the corresponding first identification of the inference service module.

15. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Technical Field

The present disclosure relates to the field of artificial intelligence, and in particular, to the field of inference services for artificial intelligence models, and more particularly, to a method, an apparatus, a device, and a storage medium for operating an inference service platform.

Background

As artificial intelligence technology is put into practice across various industries, complex and diverse business application scenarios pose challenges for artificial intelligence inference services, and businesses continuously require the effect of artificial intelligence models to be improved, so the models in the production environment go through version iterations more and more frequently. In the related art, multiple versions of the same artificial intelligence model are deployed in multiple independent inference service modules, and a user (e.g., a model developer) has to distribute the inference tasks to the inference service modules of different versions by himself, which involves a heavy workload and low efficiency.

Disclosure of Invention

The disclosure provides an operation method, an operation device, equipment and a storage medium of an inference service platform.

According to an aspect of the present disclosure, there is provided an operation method of an inference service platform, including:

determining inference tasks to be distributed for an inference service platform, wherein the inference service platform comprises at least two inference service modules, the inference service modules having different versions and each being used for executing the same type of inference service;

determining the traffic weight of each inference service module, wherein the traffic weight of the inference service module is used for indicating the proportion of the number of inference tasks needing to be distributed by the inference service module in the total number of the inference tasks;

distributing a corresponding number of reasoning tasks in the reasoning tasks to be distributed to each reasoning service module based on the flow weight of each reasoning service module;

and executing the reasoning task by using a reasoning service module.

According to another aspect of the present disclosure, there is provided an operating apparatus of an inference service platform, including:

a task determination module, configured to determine inference tasks to be distributed for an inference service platform, wherein the inference service platform comprises at least two inference service modules which have different versions and are each used for executing the same type of inference service;

the weight determining module is used for determining the traffic weight of each inference service module, and the traffic weight of each inference service module is used for indicating the proportion of the inference task quantity required to be distributed by the inference service module in the inference task total quantity;

the task allocation module is used for allocating a corresponding number of reasoning tasks in the reasoning tasks to be allocated to each reasoning service module based on the flow weight of each reasoning service module;

and the task execution module is used for executing the reasoning task by utilizing the reasoning service module.


According to another aspect of the present disclosure, there is provided an electronic device including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method of operating the inference service platform described above.

According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of operating the inference service platform described above.

According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of operation of the inference service platform described above.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

The technical scheme provided by the disclosure has the following beneficial effects:

according to the scheme provided by the embodiment of the disclosure, inference service modules of different versions are deployed in the same inference service platform, and a corresponding number of inference tasks can be automatically allocated to each inference service module based on the flow weight, so that the workload of a user for allocating tasks to the inference service modules is greatly reduced, and the work efficiency of the inference service is remarkably improved.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

fig. 1 is a schematic architecture diagram illustrating an operation method of an inference service platform according to an embodiment of the present disclosure;

FIG. 2 is a flow chart diagram illustrating a method for operating an inference service platform according to an embodiment of the present disclosure;

FIG. 3 is a schematic flow chart illustrating an inference service module performing inference tasks according to an embodiment of the present disclosure;

fig. 4 is a schematic structural diagram of an operating apparatus of an inference service platform provided in an embodiment of the present disclosure;

fig. 5 shows a second schematic structural diagram of an operating apparatus of an inference service platform provided in the embodiment of the present disclosure;

FIG. 6 illustrates a schematic block diagram of an example electronic device that can be used to implement a method of operation of an inference service platform of embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

The embodiment of the disclosure provides an operation method, an operation device, an operation apparatus, and a storage medium of an inference service platform, which aim to solve at least one of the above technical problems of the related art.

The embodiment of the disclosure provides an inference service platform. Fig. 1 shows a schematic architecture diagram for the operation method of the inference service platform provided by the embodiment of the disclosure. As shown in fig. 1, the inference service platform adopts a layered structure comprising a service layer, a version layer, a module layer and an inference instance layer, and the association and synchronization between the layers are controlled by corresponding modules.

The service layer is used for defining the overall meta-information, the overall specification, the real-time state, and the like of the inference service platform.

Specifically, the overall meta-information may include the following information:

service name: the unique identity of the inference service platform;

service address: the address through which the inference service platform is invoked for external access;

service creation time: the creation time of the inference service platform, represented by a timestamp;

service specification version: the number of times the versions of the inference service modules in the inference service platform have been modified;

space to which the service belongs: in a multi-tenant scenario, each tenant corresponds to one space.

The service specification may include the following information:

the routing type is as follows: defining a matching mode of route forwarding, including absolute matching, prefix matching and regular matching;

routing address: defining a back-end address corresponding to the reasoning service platform;

version meta information: meta-information of the inference service module will be discussed in detail at the version level.

The real-time status may include the following information:

service state: indicates whether the inference service platform is in a runnable state;

time of last service invocation: the time at which the inference service platform was last called, represented by a timestamp;

service call count: the number of calls since the inference service platform was created;

service QPS: the number of requests per second currently processed by the inference service platform.

The version layer under the service layer contains one or more versions of the inference service module. The version layer defines the meta-information of each version, which includes the name of that version of the inference service module, its traffic weight, and the like. A routing control module sits between the service layer and the version layer and supports setting inference-task distribution strategies for the inference service modules of multiple versions.

The meta-information of each version of the inference service module may include the following information:

version name: the unique identifier of this version of the inference service module under the inference service platform;

version creation time: the creation time of this version of the inference service module, represented by a timestamp;

traffic weight of the inference service module: the proportion of traffic forwarded to this inference service module, an integer ranging from 0 to 100; the traffic weights of all versions of the inference service module sum to 100, so the traffic weight indicates the proportion of the total number of inference tasks that should be allocated to this module;

meta-information of the inference service modules: module-level meta-information, discussed in detail at the module layer.

The module layer under the version layer comprises the plurality of inference service modules contained in each version of the inference service module. The module layer defines the meta-information of each of these inference service modules, including the name of the inference service module, the number of inference instances it contains, and the like.

Different inference service modules within the same version of the inference service module are respectively used for executing different subtasks of the inference service. For example, if the inference service is to recognize the tracking number on a bill, the inference service module may include two inference service modules: one identifies the target area of the bill that contains the tracking number, and the other recognizes the tracking number within that target area.

The meta-information of each inference service module may include the following information:

module name: the unique identifier of the inference service module within one version of the inference service module;

number of inference instances: the number of inference instances contained in the inference service module; all inference instances of one inference service module are identical;

first identifier of the inference service module: used for indicating the arrangement order of the corresponding inference service module; the service address of each inference service module is generated based on its first identifier;

specification configuration information of the inference instances: discussed in detail at the inference instance layer.

The inference instance layer comprises the plurality of inference instances in each inference service module. The inference instance layer defines the specification configuration information required to start the inference instances, including deployment package addresses, start commands, open ports, interface protocols, the computing resources used, and the like. An inference instance management module associated with the inference instance layer controls how the inference instance processes are started, their liveness-detection mechanism, and so on.

In the embodiment of the disclosure, optionally, all inference instances within one inference service module are identical and are used for executing the same subtask; each inference instance packages a trained model together with the running code of the model. For example, the inference service module used for identifying the target area of the bill that contains the tracking number may comprise a plurality of inference instances, each packaging a trained model and its running code, the model being used to identify that target area.

The specification configuration information of an inference instance may include the following information:

inference instance name: the unique identifier of the inference instance within the inference service module;

deployment package address: the address of the deployment package used to launch the inference instance;

start command: the command used to start the inference instance;

environment variables: the environment variables required to start the inference instance;

open port: the port on which the inference instance listens, used for route-forwarded access;

interface protocol: the interface protocol used by the inference instance, such as HTTP or TCP;

liveness-detection mode and configuration: the interface or script configuration used to probe whether the inference instance is alive;

computing resources: the computing resources required to start the inference instance, including memory, number of CPU cores, GPU computing power, GPU memory, and the like;

storage resources: the external storage required to start the inference instance, including object storage, file storage, and the like;

other specification parameters: including a preparation script run before the inference instance is started, a teardown script run before the inference instance is destroyed, and the like.
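To make the layered meta-information above easier to follow, the following Python sketch models the service / version / module / instance hierarchy described in this section. It is only an illustration: all class names, field names and defaults are hypothetical and do not come from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class InferenceInstanceSpec:
    """Specification configuration needed to start one inference instance."""
    name: str                          # unique within its inference service module
    deploy_package_address: str        # address of the deployment package
    start_command: str
    env_vars: Dict[str, str] = field(default_factory=dict)
    open_port: int = 8080              # port the instance listens on for routed traffic
    interface_protocol: str = "HTTP"   # e.g. HTTP or TCP
    liveness_probe: str = ""           # interface or script used for liveness detection
    compute_resources: Dict[str, str] = field(default_factory=dict)  # memory, CPU cores, GPU, ...


@dataclass
class InferenceServiceModule:
    """Module-layer meta-information: one module executes one subtask."""
    name: str                          # unique within one version of the inference service module
    order_id: int                      # "first identifier": position in the module chain
    instance_count: int
    instance_spec: Optional[InferenceInstanceSpec] = None


@dataclass
class InferenceServiceVersion:
    """Version-layer meta-information of one version of the inference service module."""
    version_name: str                  # unique under the inference service platform
    created_at: float                  # creation timestamp
    traffic_weight: int                # integer in [0, 100]; weights of all versions sum to 100
    modules: List[InferenceServiceModule] = field(default_factory=list)


@dataclass
class InferenceServicePlatform:
    """Service-layer meta-information."""
    service_name: str
    service_address: str
    created_at: float
    versions: List[InferenceServiceVersion] = field(default_factory=list)

    def weights_valid(self) -> bool:
        # The traffic weights of all versions must sum to 100.
        return sum(v.traffic_weight for v in self.versions) == 100
```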

Fig. 2 shows a flowchart of an operation method of an inference service platform provided by an embodiment of the present disclosure, and as shown in fig. 2, the method mainly includes the following steps:

s110: and determining the inference task to be distributed aiming at the inference service platform.

In the embodiment of the disclosure, the inference service platform includes at least two inference service modules; the inference service modules have different versions and are used for executing the same type of inference service. For example, the platform may comprise two inference service modules, one of the initial version and the other of the latest version, and the inference service executed by both is "recognizing the tracking number on a bill".

In the embodiment of the disclosure, the inference task sent to the inference service platform may be cached in a task queue, and a corresponding trigger condition may be set to determine the inference task to be allocated from the task queue.

Optionally, when the number of tasks in the task queue of the inference service platform reaches a preset number, determining the inference task in the task queue as the inference task to be allocated. The specific numerical value of the preset number may be determined according to actual requirements, for example, the preset number is 100, and when the number of tasks in the task queue of the inference service platform is 100, the 100 inference tasks are determined as the inference tasks to be allocated.

Optionally, each time a preset time period passes, determining the inference task stored in the task queue of the inference service platform in the preset time period as the inference task to be allocated. The specific numerical value of the preset time period may be determined according to actual requirements, for example, the preset time period is 100 seconds, and each 100 seconds passes, the inference task stored in the task queue of the inference service platform in the 100 seconds is determined as the inference task to be allocated.
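As a rough illustration of the two trigger conditions just described, the sketch below buffers incoming inference tasks and releases a batch either when a preset count is reached or when a preset time period has elapsed. The class name and the threshold values are assumptions for illustration only.

```python
import time
from collections import deque
from typing import List


class PendingTaskQueue:
    """Buffers incoming inference tasks and releases them in batches."""

    def __init__(self, preset_count: int = 100, preset_period_s: float = 100.0):
        self._queue = deque()
        self._preset_count = preset_count
        self._preset_period_s = preset_period_s
        self._last_release = time.monotonic()

    def put(self, task) -> None:
        # Cache an inference task sent to the inference service platform.
        self._queue.append(task)

    def tasks_to_allocate(self) -> List:
        """Return the batch of tasks to be allocated, or an empty list
        if neither trigger condition is met yet."""
        count_triggered = len(self._queue) >= self._preset_count
        time_triggered = time.monotonic() - self._last_release >= self._preset_period_s
        if not (count_triggered or time_triggered):
            return []
        batch = list(self._queue)
        self._queue.clear()
        self._last_release = time.monotonic()
        return batch
```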

S120: and determining the flow weight of each inference service module.

In the embodiment of the disclosure, the traffic weight of the inference service module is used to indicate the proportion of the number of inference tasks that the inference service module needs to be allocated in the total number of inference tasks. Optionally, the traffic weight of each inference service module is any integer between 0 and 100, and the sum of the traffic weights of all versions of inference service modules is 100. For example, the service platform includes two inference service modules, where the traffic weight of one inference service module is 40, and the traffic weight of the other inference service module is 60, and the proportions of the number of inference tasks to be allocated by the two inference service modules in the total number of inference tasks are 0.4 and 0.6, respectively. Optionally, the traffic weight may be configured in advance by a user, or generated randomly, or determined based on parameters (for example, specification configuration information or other specification parameters) of the inference service module, which is not specifically limited in this disclosure.

Preferably, in the embodiment of the present disclosure, the traffic weight of each inference service module may be configured in response to the weight configuration operation for each inference service module; and associating and recording the identification information of each inference service module and the corresponding flow weight. Specifically, the embodiment of the disclosure allows a user to configure the traffic weight of each inference service module, for example, the user may configure the traffic weights of two inference service modules in the inference service platform as 40 and 60, respectively, and associate and record the traffic weight configured for each inference service module by the user and identification information of the inference service module, where the identification information may be a unique identification of the inference service module under the inference service platform. The user can configure the flow weight of each inference service module according to actual needs, so that the inference service platform can quickly meet the real-time requirements of the user.

In the embodiment of the disclosure, a service scene corresponding to the inference service platform is determined, and the traffic weight of each inference service module is determined according to the type of the service scene. The types of the service scenes comprise an effect verification scene of the reasoning service module and a version updating scene of the reasoning service module, and the flow weight of the reasoning service module corresponding to each type of service scene is configured in advance.

Optionally, the user may configure the traffic weights of the inference service modules in the inference service platform separately for each type of service scenario. For example, taking an inference service platform that includes two inference service modules: for the effect verification scenario of the inference service modules, the user may configure the traffic weights of the two modules as 80 and 20, respectively, with the original version receiving a weight of 80 and the new version receiving a weight of 20; for the version update scenario of the inference service module, the user may configure the traffic weights of the two modules as 50 and 50, respectively.
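A minimal sketch of how traffic weights might be preconfigured per service-scenario type and recorded against module identifiers, mirroring the example weights in this paragraph; the scenario keys and module identifiers are hypothetical.

```python
from typing import Dict

# Hypothetical preconfigured weights per scenario type: module identifier -> traffic weight.
SCENARIO_WEIGHTS: Dict[str, Dict[str, int]] = {
    "effect_verification": {"module_v1": 80, "module_v2": 20},
    "version_update": {"module_v1": 50, "module_v2": 50},
}


def configure_weights(scenario_type: str) -> Dict[str, int]:
    """Look up the preconfigured traffic weights for the given scenario type
    and check that they sum to 100, as required for traffic weights."""
    weights = SCENARIO_WEIGHTS[scenario_type]
    if sum(weights.values()) != 100:
        raise ValueError("traffic weights of all module versions must sum to 100")
    return weights


# Example: weights used when verifying the effect of a new module version.
print(configure_weights("effect_verification"))  # {'module_v1': 80, 'module_v2': 20}
```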

S130: and distributing a corresponding number of reasoning tasks in the reasoning tasks to be distributed to each reasoning service module based on the flow weight of each reasoning service module.

As described above, the traffic weight of the inference service module is used to indicate the proportion of the number of inference tasks that the inference service module needs to be allocated to the total number of inference tasks. Optionally, the traffic weight of each inference service module is any integer between 0 and 100, and the sum of the traffic weights of all versions of inference service modules is 100. For example, the service platform includes two inference service modules, where the traffic weight of one inference service module is 40, and the traffic weight of the other inference service module is 60, and the proportions of the number of inference tasks to be allocated by the two inference service modules in the total number of inference tasks are 0.4 and 0.6, respectively. When the total amount of the inference tasks to be allocated is 200, the 200 inference tasks need to be divided into two groups according to a ratio of 2:3 (i.e. 0.4:0.6), wherein one group includes 80 inference tasks, the other group includes 120 inference tasks, the 80 inference tasks are allocated to the inference service module with the flow weight of 40, and the 120 inference tasks are allocated to the inference service module with the flow weight of 60.
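The worked example above (weights 40 and 60 splitting 200 tasks into 80 and 120) could be implemented roughly as follows. This is only a sketch; how remainders from integer division are handled is an assumption, since the disclosure does not specify a rounding strategy.

```python
from typing import Dict, List


def distribute_tasks(tasks: List, weights: Dict[str, int]) -> Dict[str, List]:
    """Split the pending tasks among module identifiers in proportion to their
    traffic weights (which sum to 100). Tasks left over by integer division
    are absorbed by the last module."""
    total = len(tasks)
    allocation: Dict[str, List] = {}
    start = 0
    module_ids = list(weights)
    for i, module_id in enumerate(module_ids):
        if i < len(module_ids) - 1:
            count = total * weights[module_id] // 100
        else:
            count = total - start  # last module absorbs any remainder
        allocation[module_id] = tasks[start:start + count]
        start += count
    return allocation


tasks = [f"task-{i}" for i in range(200)]
split = distribute_tasks(tasks, {"module_v1": 40, "module_v2": 60})
print(len(split["module_v1"]), len(split["module_v2"]))  # 80 120
```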

S140: and executing the reasoning task by using the reasoning service module.

In the embodiment of the disclosure, the inference service module includes a plurality of inference service modules, and different inference service modules among them are respectively used for executing different subtasks of the inference service. Accordingly, when the inference service module is used to execute an inference task, each of the inference service modules it contains executes its own subtask of the inference service.

Assuming that the inference service module is used for executing the inference service of recognizing the tracking number on a bill, the inference service module may include two inference service modules: one is used for identifying the target area of the bill that contains the tracking number, and the other is then used for recognizing the tracking number within that target area.

According to the operation method of the reasoning service platform, the reasoning service modules of different versions are deployed in the same reasoning service platform, and a corresponding number of reasoning tasks can be automatically allocated to each reasoning service module based on the flow weight, so that the workload of a user for allocating the tasks to the reasoning service modules is greatly reduced, and the work efficiency of the reasoning service is remarkably improved.

In addition, the inventors of the present disclosure found that, in the related art, for an inference service module that contains a plurality of different inference service modules, the modules need to be orchestrated based on a DAG (directed acyclic graph): a central scheduling module is arranged in the platform and is used to chain the subtasks of the modules together. This results in a high retrofitting cost for adapting interfaces to the central scheduling module, high system complexity, and an increased single-point-of-failure risk. For these reasons, the present disclosure provides an implementation in which the inference service module executes the inference task as follows. Fig. 3 shows a schematic flow diagram of the inference service module executing the inference task; as shown in fig. 3, the flow mainly includes the following steps:

s210: and receiving the data to be processed of the corresponding subtask in the inference service based on each inference service module in the plurality of inference service modules.

The data to be processed is either the processing result generated by the previous adjacent inference service module or the original data of the first subtask of the inference service. It can be understood that one inference task may include a plurality of subtasks that need to be completed in sequence; the inference service module includes a plurality of inference service modules, and different ones of them execute different subtasks of the inference service. For the inference service module executing the first subtask, the data to be processed it receives is the original data of the first subtask of the inference service; for an inference service module executing any other subtask, the data to be processed it receives is the processing result generated by the previous adjacent inference service module. For example, the data to be processed received by the inference service module executing the second subtask is the processing result generated by the inference service module executing the first subtask.

S220: and calculating a processing result corresponding to the data to be processed based on the reasoning service module, and sending the processing result to the next reasoning service module adjacent to the reasoning service module.

It can be understood that, for the inference service module executing the first subtask, after it calculates the processing result corresponding to its data to be processed, it sends that result to the inference service module executing the second subtask, and so on. In the embodiment of the disclosure, the inference service module executing the first subtask may also serve as the module through which data enters and leaves the inference service module: for the inference service module executing the last subtask, after it calculates the final processing result corresponding to its data to be processed, it sends that final result back to the inference service module executing the first subtask, which then outputs the final result externally.

Optionally, each inference service module has a first identifier indicating the arrangement order of the corresponding inference service module. In the embodiment of the present disclosure, the service address of each inference service module may be generated based on its first identifier, which ensures the uniqueness of each service address; data can then be sent to an inference service module through its service address.

In the embodiment of the disclosure, the service address of each of the plurality of inference service modules may be stored in advance in the other inference service modules. For example, suppose one inference service module includes five inference service modules: inference service module a, inference service module b, inference service module c, inference service module d, and inference service module e. Taking inference service module a as an example, the service addresses of inference service modules b, c, d, and e can be stored as environment variables in inference service module a; the service addresses stored by the other inference service modules are handled in the same way and are not described again here.

In the embodiment of the present disclosure, for an inference service module executing a subtask, a service address of a next inference service module adjacent to the inference service module may be determined from the inference service module, and then a processing result corresponding to the to-be-processed data of the subtask obtained through calculation is sent to the next inference service module through the service address. For example, for the inference service module for executing the first subtask, a service address of the inference service module for executing the second subtask is determined from the inference service module, and after a processing result corresponding to the data to be processed of the first subtask is obtained through calculation, the processing result may be sent to the inference service module for executing the second subtask through the service address.
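The forwarding step just described might look roughly like the following sketch, in which each inference service module reads the service address of the next adjacent module from an environment variable keyed by its first identifier and posts its processing result over HTTP. The environment-variable naming scheme, the `/process` endpoint, and the use of the `requests` library are assumptions for illustration, not details from the disclosure.

```python
import os

import requests  # assumed HTTP client; any equivalent would do


def next_module_address(my_order_id: int) -> str:
    """Service addresses of the other modules are stored in advance as
    environment variables, assumed here to be keyed by the successor's
    first identifier (arrangement order)."""
    return os.environ[f"INFERENCE_MODULE_{my_order_id + 1}_ADDR"]


def handle_subtask(data_to_process: dict, my_order_id: int, is_last: bool) -> dict:
    # Placeholder for the model inference performed by this module's instances.
    result = {"processed_by": my_order_id, "payload": data_to_process}

    if is_last:
        # The last module sends its final result back to the first module
        # (order 1 is assumed to be first), which outputs it externally.
        target = os.environ["INFERENCE_MODULE_1_ADDR"]
    else:
        target = next_module_address(my_order_id)

    # Forward the processing result to the adjacent next inference service module.
    requests.post(f"{target}/process", json=result, timeout=10)
    return result
```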

According to the embodiment of the disclosure, each inference service module can send its processing result directly to the next adjacent inference service module; that is, the inference service modules communicate with each other directly to realize model orchestration. A central scheduling module can therefore be omitted, which effectively avoids the high interface-adaptation retrofitting cost, the high system complexity, and the single-point-of-failure risk of model orchestration schemes based on a central scheduling module.

Based on the same principle as the operation method of the inference service platform, fig. 4 shows one of the schematic structural diagrams of the operation device of the inference service platform provided by the embodiment of the present disclosure, and fig. 5 shows the second of the schematic structural diagrams of the operation device of the inference service platform provided by the embodiment of the present disclosure. As shown in fig. 4, the running device 30 of the inference service platform includes a task determination module 310, a weight determination module 320, a task assignment module 330, and a task execution module 340.

The task determination module 310 is configured to determine an inference task to be allocated for an inference service platform, where the inference service platform includes at least two inference service modules, and each inference service module has a different version and is configured to execute the same type of inference service.

The weight determining module 320 is configured to determine a traffic weight of each inference service module, where the traffic weight of the inference service module is used to indicate a ratio of the number of inference tasks that the inference service module needs to be allocated to the total number of inference tasks.

The task allocation module 330 is configured to allocate a corresponding number of inference tasks in the inference tasks to be allocated to each inference service module based on the traffic weight of each inference service module.

The task execution module 340 is used for executing the inference task by using the inference service module.

According to the operation device of the reasoning service platform, the reasoning service modules of different versions are deployed in the same reasoning service platform, and a corresponding number of reasoning tasks can be automatically allocated to each reasoning service module based on the flow weight, so that the workload of a user for allocating the tasks to the reasoning service modules is greatly reduced, and the work efficiency of the reasoning service is remarkably improved.

In this embodiment, the task determining module 310, when configured to determine the inference task to be assigned for the inference service platform, is further configured to:

determining inference tasks in a task queue as inference tasks to be distributed when the number of tasks in the task queue of the inference service platform reaches a preset number;

or, determining the inference task stored in the task queue of the inference service platform in the preset time period as the inference task to be allocated every time a preset time period passes.

In this embodiment, the weight determining module 320, when configured to determine the traffic weight of each inference service module, is further configured to:

determining a service scene corresponding to the reasoning service platform, wherein the type of the service scene comprises an effect verification scene of the reasoning service module and a version updating scene of the reasoning service module;

and determining the flow weight of each inference service module according to the type of the service scene, wherein the flow weight of each inference service module corresponding to each type of service scene is configured in advance.

In this embodiment, as shown in fig. 5, the running device 30 of the inference service platform further includes a weight configuration module 350, where the weight configuration module 350 is configured to:

configuring a traffic weight of each inference service module in response to a weight configuration operation for each inference service module;

and associate and record the identification information of each inference service module with its corresponding traffic weight.

In the embodiment of the present disclosure, the inference service module includes a plurality of inference service modules, and different inference service modules in the plurality of inference service modules are respectively used for executing different subtasks in the inference service; the task execution module 340, when configured to execute the inference task using the inference service module, is further configured to:

for each inference service module in the plurality of inference service modules, receiving data to be processed of a corresponding subtask in the inference service based on the inference service module, wherein the data to be processed is a processing result generated by a previous inference service module adjacent to the inference service module or original data of a first subtask in the inference service;

and calculating a processing result corresponding to the data to be processed based on the reasoning service module, and sending the processing result to the next reasoning service module adjacent to the reasoning service module.

In this embodiment, the task execution module 340, when configured to send the processing result to the next inference service module adjacent to the inference service module, is further configured to:

determining a service address of a next inference service module adjacent to the inference service module from the inference service module, wherein the service address of each inference service module of the plurality of inference service modules is stored in other inference service modules in advance;

and sending the processing result to the next inference service module through the service address.

In the embodiment of the disclosure, each inference service module is provided with a first identifier, and the first identifier is used for indicating the arrangement order of the corresponding inference service module;

the service address of each inference service module is generated based on the corresponding first identification of the inference service module.

It can be understood that each module of the operating device of the inference service platform in the embodiment of the present disclosure has a function of implementing the corresponding step of the operating method of the inference service platform. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above. The modules can be software and/or hardware, and each module can be implemented independently or by integrating a plurality of modules. For the functional description of each module of the operation device of the inference service platform, reference may be made to the corresponding description of the operation method of the inference service platform, which is not described herein again.

It should be noted that, in the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good morals.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 6 illustrates a schematic block diagram of an example electronic device that can be used to implement a method of operation of an inference service platform of embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 6, the apparatus 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The calculation unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.

A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the various methods and processes described above, such as the method of operation of the inference service platform. For example, in some embodiments, the method of operation of the inference service platform may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When loaded into RAM 503 and executed by the computing unit 501, the computer program may perform one or more steps of the method of operation of the inference service platform described above. Alternatively, in other embodiments, the computing unit 501 may be configured by any other suitable means (e.g., by means of firmware) to perform the method of operation of the inference service platform.

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
