Information processing apparatus, information processing method, and program

Document No.: 1895040    Publication date: 2021-11-26

Description: The present technology, "Information processing apparatus, information processing method, and program", was designed and created by 何信莹 on 2020-03-26. The present disclosure relates to an information processing apparatus, an information processing method, and a program that enable a multitasking structure to be easily constructed. By combining existing single-task structures, a prototype structure for the multitask processing structure that controls the operation of a target device is constructed, and the multitask structure is completed by sharing, coupling, or decoupling similar processing modules and blocks containing a plurality of processing modules between the single-task structures, or by sharing parameters between similar processing modules and blocks, for optimization. The present disclosure may be applied to multitask programming using neural networks.

1. An information processing apparatus comprising:

an optimization unit that optimizes a multitask structure that controls an operation of a target device based on the structure of the multitask structure.

2. The information processing apparatus according to claim 1,

wherein the multitask structure comprises a plurality of existing single-task structures.

3. The information processing apparatus according to claim 2,

wherein each of the single-task structures comprises a processing module or a block comprising a plurality of processing modules, and

the optimization unit optimizes the multitask structure using the processing modules or the blocks having a similarity between the single-task structures.

4. The information processing apparatus according to claim 3,

wherein the optimization unit optimizes the multitask structure by using the processing modules or the blocks whose similarity between the single-task structures indicates a degree of similarity higher than a predetermined value.

5. The information processing apparatus according to claim 4,

wherein the optimization unit optimizes the multitask structure by using the processing modules or the blocks having an input-output similarity indicating a high degree of similarity of input-output data between the single-task structures.

6. The information processing apparatus according to claim 4,

wherein the optimization unit optimizes the multitask structure by using the processing modules or the blocks having parameter similarity between the single-task structures indicating that the degree of similarity of the used parameters is higher than a predetermined value.

7. The information processing apparatus according to claim 3,

wherein the optimization unit optimizes the multitask structure by sharing the processing modules or the blocks having a similarity indicating that a degree of similarity is higher than a predetermined value between the single-task structures.

8. The information processing apparatus according to claim 7,

wherein the optimization unit optimizes the multitask structure by coupling or decoupling the processing modules or the blocks having a similarity higher than the predetermined value between the single-task structures to share the processing modules or the blocks.

9. The information processing apparatus according to claim 8,

wherein the optimization unit optimizes the multitask structure by coupling or decoupling the processing modules or the blocks having a similarity higher than the predetermined value between the single-task structures, according to performance and functions of hardware included in the target device, to share the processing modules or the blocks.

10. The information processing apparatus according to claim 3,

wherein the optimization unit optimizes the multitask structure by sharing a parameter used in the processing module or the block having a similarity indicating that a degree of similarity is higher than a predetermined value between the single task structures.

11. The information processing apparatus according to claim 1, further comprising:

a simulation unit that performs a simulation in a case where the multitask structure optimized by the optimization unit is operated by the target device;

a score calculation unit that calculates, by the simulation, a score indicating a processing accuracy of multitasking realized based on the multitask structure; and

a relearning unit that relearns the optimized multitask structure based on the score.

12. The information processing apparatus according to claim 11,

wherein the relearning unit updates a parameter of at least one of the processing module and a block containing a plurality of the processing modules included in the optimized multitask structure by relearning using learning data.

13. The information processing apparatus according to claim 12,

wherein the learning data is data according to performance and functions of a sensor and a camera provided in the target device.

14. The information processing apparatus according to claim 13,

wherein the processing module and the block perform predetermined processing based on at least one of a sensing result provided from the sensor, an image provided from the camera, and a processing result of the processing module or the block, and output the processing result.

15. The information processing apparatus according to claim 14,

wherein the multitask structure is a program using a neural network that controls an operation of the target device.

16. The information processing apparatus according to claim 15,

wherein the processing modules and the blocks are layers and blocks, respectively, in the program using the neural network.

17. The information processing apparatus according to claim 1,

wherein the information processing apparatus is the target device.

18. The information processing apparatus according to claim 1,

wherein the information processing apparatus is a cloud computer.

19. An information processing method comprising:

optimizing a multitask structure based on a structure of the multitask structure that controls an operation of a target device.

20. A program for causing a computer to function as an optimization unit that optimizes a multitask structure that controls an operation of a target device based on the structure of the multitask structure.

Technical Field

The present disclosure relates to an information processing apparatus, an information processing method, and a program, and more particularly, to an information processing apparatus, an information processing method, and a program that enable a multitask structure to be easily constructed.

Background

A process for simultaneously solving a plurality of tasks such as object recognition, person recognition, path planning, and motion planning is called multitasking.

A person, for example, recognizes surrounding objects and people while planning a path and controlling his or her own motion; it can therefore be said that a person is always operating while realizing multitasking.

In constructing a program having a multitask structure for causing a device such as a robot to realize multitasking, a configuration in which programs having the desired single-task structures are simply processed in parallel can be easily realized.

However, in a case where multitasking is implemented by simply processing a plurality of single tasks in parallel, the same or similar processing may be performed separately in each of the single tasks. Even if it would be sufficient to perform such processing only once, the parallel processing may repeat it unnecessarily, and the processing load may increase.

Therefore, in recent years, a technique for constructing a processing structure that realizes a single task using a neural network has been proposed (see non-patent document 1).

Reference list

Non-patent document

Non-patent document 1: Learning Transferable Architectures for Scalable Image Recognition

Disclosure of Invention

The problems to be solved by the invention are as follows:

However, the processing amount is already enormous in the single-task structure search of non-patent document 1, and when a similar structure search is applied to realize multitasking, it is expected that collection, labeling, learning, evaluation, and the like of learning data become even more difficult.

As described above, the man-hours for constructing a multitask structure are enormous, exceeding even the man-hours for constructing a single-task structure simply multiplied by the number of types of single tasks, and much time is required for construction.

The present disclosure has been made in view of such circumstances, and in particular, enables a multitask structure to be easily constructed.

Means for solving the problems

An information processing apparatus according to an aspect of the present disclosure is an information processing apparatus including an optimization unit that optimizes a multitask structure based on a structure of the multitask structure that controls an operation of a target device.

An information processing method and a program according to an aspect of the present disclosure correspond to an information processing apparatus.

In one aspect of the present disclosure, a multitask structure is optimized based on a structure of the multitask structure that controls an operation of a target device.

Drawings

Fig. 1 is a diagram for explaining a multitask structure realized by parallel processing of a single task structure.

Fig. 2 is a diagram for explaining an example of sharing parameters between processing modules that perform similar processing.

Fig. 3 is a diagram for explaining an example of sharing parameters between a processing module and a block that performs similar processing.

Fig. 4 is a diagram for explaining an example of sharing processing modules for performing similar processing between single task structures.

Fig. 5 is a diagram for explaining a configuration example of an information processing system of the present disclosure.

Fig. 6 is a flowchart for explaining the multitask execution processing of the information processing system of fig. 5.

Fig. 7 is a diagram for explaining a configuration example of the multitask structure generating unit in fig. 5.

Fig. 8 is a diagram for explaining a configuration example of the structure search unit of fig. 7.

Fig. 9 is a flowchart illustrating a prototype structure search process.

Fig. 10 is a flowchart for explaining the input-output similarity comparison structure search process in fig. 9.

Fig. 11 is a diagram for explaining an example of parallel processing of a plurality of processing modules.

Fig. 12 is a flowchart for explaining the parameter similarity comparison structure search processing in fig. 9.

Fig. 13 is a flowchart illustrating a prototype structure search process of application example 1.

Fig. 14 is a flowchart for explaining the multitask execution processing of application example 2.

Fig. 15 is a flowchart for explaining the multitask execution processing of application example 3.

Fig. 16 is a diagram for explaining a configuration example of a general-purpose personal computer.

Detailed Description

Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Note that in this specification and the drawings, constituent elements having substantially the same functional configuration are given the same reference numerals, and redundant description is omitted.

Hereinafter, embodiments for implementing the present technology will be described. The description will be given in the following order.

1. Summary of the disclosure

2. Embodiments of the present disclosure

3. Application example 1

4. Application example 2

5. Application example 3

6. Software execution examples

<1. summary of the present disclosure >

The present disclosure enables a multitasking structure to be easily constructed.

First, an overview of the present disclosure will be described.

As shown in fig. 1, a case will be described in which the target device is a moving object R, such as a robot, including three types of sensors J1 to J3: a sensor J1 including a human sensor, a microphone, an illumination sensor, a distance measurement sensor, and the like; a sensor J2 including a simultaneous localization and mapping (SLAM) camera; and a sensor J3 including a red-green-blue (RGB) camera, a time-of-flight (ToF) sensor, and the like.

Further, as shown in fig. 1, a case will be considered in which a multitask structure for simultaneously performing the three types of existing single tasks T1 to T3, namely object recognition, map construction, and path planning with motion control, is constructed by a neural network and provided to the target device, that is, the moving object R, so that multitasking is implemented by the moving object R.

Note that, hereinafter, the neural network is also simply referred to as NN.

In fig. 1, the processing structure of the single task T1 for realizing the object recognition processing includes, for example, processing modules M1 to M3 corresponding to the layers in the NN.

Here, the processing module M1 performs a predetermined process based on the sensing result of the sensor J3, and outputs the execution result to the processing module M2.

The processing module M2 performs predetermined processing based on the processing result of the processing module M1, and outputs the processing result to the processing module M3.

The processing module M3 performs predetermined processing based on the processing result of the processing module M2, and outputs the processing result as an object recognition result.

As described above, the processing structure of the single task T1 related to the object recognition includes the processing modules M1 to M3, and the object recognition based on the sensing result of the sensor J3 is realized by the single task T1.

Further, the processing structure of the single task T2 for implementing map construction includes, for example, processing modules M11 to M14 corresponding to the layers in the NN.

Here, the processing module M11 performs predetermined processing based on the sensing results of the sensors J2 and J3, and outputs the execution results to the processing modules M12 and M13.

The processing module M12 performs predetermined processing based on the processing result of the processing module M11, and outputs the processing result to the processing module M13.

The processing module M13 performs predetermined processing based on the processing results of the processing modules M11, M12, and outputs the processing results to the processing module M14.

The processing module M14 performs predetermined processing based on the processing result of the processing module M13, and outputs the processing result as a map construction result.

As described above, the processing structure of the single task T2 related to map construction includes the processing modules M11 to M14, and map construction is realized by the single task T2 based on the sensing results of the sensors J2, J3.

Similarly, the processing structure of the single task T3 for implementing path planning (movement path planning) and motion control includes, for example, processing modules M21 to M26 corresponding to layers in the NN.

Here, the processing module M21 performs predetermined processing based on the sensing results of the sensors J1 and J3, and outputs the execution results to the processing modules M22 and M25.

The processing module M22 performs predetermined processing based on the processing result of the processing module M21, and outputs the processing result to the processing module M23.

The processing module M23 performs predetermined processing based on the processing result of the processing module M22, and outputs the processing result to the processing module M24.

The processing module M25 performs predetermined processing based on the processing result of the processing module M21, and outputs the processing result to the processing module M26.

The processing module M26 performs predetermined processing based on the processing result of the processing module M25, and outputs the processing result to the processing module M24.

The processing module M24 performs predetermined processing based on the processing results of the processing modules M23, M26, and outputs the processing results as path planning and motion control results.

As described above, the processing structure of the single task T3 related to path planning and motion control includes the processing modules M21 to M26, and path planning and motion control are performed by the single task T3 based on the sensing results of the sensors J1, J3.
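To make the prototype structure concrete, the single-task structures of fig. 1 can be written down as small module graphs. The following is a minimal, hypothetical Python sketch (the class names and the dictionary representation are illustrative, not part of the disclosure); each node is a processing module, and its listed inputs are the sensors or upstream modules described above.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingModule:
    name: str                                      # e.g. "M1"
    inputs: list = field(default_factory=list)     # names of sensors or upstream modules

@dataclass
class SingleTaskStructure:
    task: str
    modules: dict                                  # module name -> ProcessingModule

# Single task T1: object recognition (fig. 1)
T1 = SingleTaskStructure("object_recognition", {
    "M1": ProcessingModule("M1", ["J3"]),
    "M2": ProcessingModule("M2", ["M1"]),
    "M3": ProcessingModule("M3", ["M2"]),          # outputs the object recognition result
})

# Single task T2: map construction (fig. 1)
T2 = SingleTaskStructure("map_construction", {
    "M11": ProcessingModule("M11", ["J2", "J3"]),
    "M12": ProcessingModule("M12", ["M11"]),
    "M13": ProcessingModule("M13", ["M11", "M12"]),
    "M14": ProcessingModule("M14", ["M13"]),       # outputs the map construction result
})

# Single task T3: path planning and motion control (fig. 1)
T3 = SingleTaskStructure("path_planning_motion_control", {
    "M21": ProcessingModule("M21", ["J1", "J3"]),
    "M22": ProcessingModule("M22", ["M21"]),
    "M23": ProcessingModule("M23", ["M22"]),
    "M25": ProcessingModule("M25", ["M21"]),
    "M26": ProcessingModule("M26", ["M25"]),
    "M24": ProcessingModule("M24", ["M23", "M26"]),  # outputs the path planning / motion control result
})

# A prototype multitask structure is, at first, just the three graphs side by side.
prototype = [T1, T2, T3]
```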

Parallel processing is performed on the single tasks T1 through T3 used by the existing hardware in fig. 1, thereby realizing multitasking.

However, since the single tasks T1 to T3 are used by different existing hardware, optimization between the included processing modules is not performed.

Therefore, for example, in the case where there are processing modules that perform similar processing or blocks containing a plurality of processing modules that perform similar processing among the processing modules M1 to M3, M11 to M14, and M21 to M26, there is a possibility that unnecessary overhead of memory copy occurs or processing time increases due to the repeated overlapping processing.

Therefore, in the present disclosure, a prototype structure of the multitask structure is constructed using existing single-task structures, the prototype structure is searched, the structures of the processing modules and of blocks of processing modules are analyzed, and the processing modules and blocks that perform overlapping processing are optimized by sharing the processing modules and blocks, sharing parameters (corresponding to hyperparameters used in the NN), or coupling or decoupling the processing modules and blocks, so that an efficient multitask structure can be easily constructed.

That is, more specifically, for example, in a case where the processing module M2 in the single task T1 and the processing module M12 in the single task T2 in fig. 1 are processing modules that perform similar or identical processing, as shown in fig. 2, the processing module M12 is regarded as the processing module M2, and the parameter HP1 to be used is shared by the processing modules M2 and M12 (M2).

Further, in a case where the processing module M3 in the single task T1 and the processing module M14 in the single task T2 of fig. 1 are processing modules that perform similar or identical processing, as shown in fig. 2, the processing module M14 is regarded as the processing module M3, and the parameter HP2 to be used is shared by the processing modules M3 and M14 (M3).

By sharing parameters in this way among similar processing modules, unnecessary overhead of memory copying and consumption of the memory itself can be reduced, and processing time can be reduced by reducing the frequency of memory use.
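As a concrete illustration of the parameter sharing in fig. 2, the following sketch uses PyTorch (an assumption; the disclosure does not prescribe a framework, and the layer type and sizes are hypothetical). Regarding M12 as M2 amounts to letting both tasks reference one layer object, so the parameter HP1 exists only once in memory and any relearning updates it for both tasks.

```python
import torch.nn as nn

# Hypothetical layer standing in for the processing module M2 of the single task T1;
# its weights play the role of the shared parameter HP1.
m2 = nn.Linear(128, 128)

# Before optimization, the single task T2 would hold its own copy for M12:
#   m12 = nn.Linear(128, 128)
# After the similarity check, M12 is regarded as M2, so both tasks reference the
# same module object and therefore the same parameters.
m12 = m2

assert m12.weight is m2.weight   # a single tensor: no memory copy, one set of updates
```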

Further, for example, in a case where the processing module M2 in the single task T1 and the block including the processing modules M11 to M13 in the single task T2 of fig. 1 are a processing module and a block that perform similar or identical processing, as shown in fig. 3, the processing modules M11 to M13 are coupled and regarded as a processing module M2', and the parameter HP11 to be used is shared.

Further, in a case where the processing module M3 in the single task T1 and the processing module M14 in the single task T2 of fig. 1 are processing modules that perform similar or identical processing, as shown in fig. 3, the processing module M14 is regarded as the processing module M3, and the parameter HP12 to be used is shared by the processing modules M3 and M14 (M3).

As described above, in the case where there is a processing module that implements processing similar to or the same as that of a block including a plurality of processing modules, the plurality of processing modules included in the block are coupled and regarded as one processing module, and furthermore, parameters are shared between the processing modules implementing similar processing.

Therefore, unnecessary overhead of memory copy and consumption of the memory itself can be reduced, and processing time can be reduced by reducing the use frequency of the memory.

Further, for example, in a case where the processing module M2 in the single task T1 and the processing module M12 in the single task T2 in fig. 1 are processing modules that perform the same processing, as shown in fig. 4, the processing module M2 is shared with the processing module M12, and the processing module M13 uses (receives as input) the output result of the processing module M2 as it is.

Further, in a case where the processing module M13 in the single task T2 and the processing module M22 in the single task T3 of fig. 1 are processing modules that perform the same processing, as shown in fig. 4, the processing module M13 is shared with the processing module M22, and the processing module M23 uses (receives as input) the output result of the processing module M13 as it is.

In the case where there are processing modules that realize the same processing as described above, the processing modules that realize the same processing are shared and used, and the processing results of the shared processing modules are used for the subsequent processing as they are.

Therefore, unnecessary overhead of memory copy and consumption of the memory itself can be reduced, and processing time can be reduced by sharing the processing module.
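The output reuse of fig. 4 can likewise be expressed as a small graph rewrite. In the hypothetical sketch below (plain Python, using the dictionary representation of the earlier sketch), the duplicated module M12 is removed and every module that consumed its output, here M13, is rewired to read the output of the shared module M2 as it is.

```python
def share_identical_module(graph: dict, keep: str, drop: str) -> dict:
    """Rewire a module graph so that every consumer of `drop` reads from `keep` instead.

    `graph` maps each module name to the list of sensor or module names it takes as input.
    """
    rewired = {}
    for name, inputs in graph.items():
        if name == drop:
            continue                                   # the duplicated module disappears
        rewired[name] = [keep if i == drop else i for i in inputs]
    return rewired

# Fig. 4: M12 duplicates M2, so M13 now takes the output of M2 as it is.
t2_graph = {"M11": ["J2", "J3"], "M12": ["M11"], "M13": ["M11", "M12"], "M14": ["M13"]}
t2_graph = share_identical_module(t2_graph, keep="M2", drop="M12")
# t2_graph == {"M11": ["J2", "J3"], "M13": ["M11", "M2"], "M14": ["M13"]}
```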

As described above, in the present disclosure, in constructing a multitasking structure, a prototype structure in which an existing single-task structure is subjected to parallel processing is formed, the same or similar processing modules are searched for among the processing modules included in the single-task structure in the prototype structure, and the searched same or similar processing modules share parameters, or share the processing modules, or the processing modules are coupled or decoupled.

That is, in the present disclosure, a prototype structure, which is a structure in which a plurality of single task structures are subjected to only parallel processing, is generated, and processing modules included in the plurality of single task structures in the prototype structure are optimized based on the search result of the prototype structure, so that a multitask structure can be easily constructed.

<2. embodiments of the present disclosure >

Next, a configuration example of the information processing system of the present disclosure will be described with reference to fig. 5.

The information processing system in fig. 5 includes a multitask structure generating unit 11 and a target device 12, and the multitask structure generating unit 11 includes a Personal Computer (PC).

The multitask structure generating unit 11 constructs a multitask structure including a Neural Network (NN) for implementing multitask to be performed by the target device 12 such as a robot or a moving object including various sensors, and the multitask structure generating unit 11 provides the multitask structure to the target device 12 and implements multitask.

More specifically, the multitask structure generating unit 11 acquires the individual single-task structures included in the multitasking to be realized by the target device 12, and constructs a prototype structure of the multitask structure in which they are simply processed in parallel. Then, the multitask structure generating unit 11 searches for the same or similar processing modules between the plurality of single tasks contained in the prototype structure.

Here, a processing module has a configuration corresponding to each layer, such as an input layer, a hidden layer, or an output layer, in the NN, and is a unit in which a program to be executed is described.

Then, the multitask structure generating unit 11 optimizes the same or similar processing modules or blocks between the single task structures included in the prototype structure by sharing parameters or processing modules between the plurality of single task structures or coupling or decoupling the processing modules, as described with reference to fig. 2 to 4, and reconstructs the structure into a multitask structure.

At this time, if necessary, the multitask structure generating unit 11 relearns the reconstructed multitask structure using the learning data and updates the parameters used in the processing modules included in the multitask structure.

The target device 12 is, for example, a device such as a robot or a moving object, which includes various sensors and cameras such as a human sensor, a microphone, an illumination sensor, a distance measurement sensor, a SLAM camera, an RGB camera, and a ToF sensor.

The target device 12 acquires NN as a multitask structure for implementing multitask supplied from the multitask structure generating unit 11, and realizes multitask based on detection results of various sensors and cameras.

< multitask execution processing >

Next, a multitask execution process for causing the target device 12 in the information processing system of fig. 5 to realize multitasking will be described with reference to the flowchart of fig. 6.

In step S11, the user requests learning data of one shot or several shots from a sensor or a camera having performance equivalent to that of various sensors or cameras provided in the target device 12.

In step S21, the sensor or camera having the performance equivalent to the performance of the various sensors or cameras provided in the target device 12 returns the learning data of one shot or several shots to the user.

Here, the learning data of one shot or several shots is learning data obtained by capturing once or a few times, as required for machine learning or the like.

Note that learning data of more than one shot or several shots may be used. However, by using one shot or several shots, it is possible to obtain parameters (coefficients) that realize multitasking which, although not highly accurate, can withstand actual use, while reducing the processing load and processing time related to learning.

In step S12, the user acquires learning data of one shot or several shots provided from sensors or cameras having performances equivalent to those of various sensors or cameras provided in the target device 12.

Note that the learning data of one shot or several shots may be directly requested from the target device 12.

In this case, in step S40, the target device 12 returns, to the user, learning data of one shot or several shots from the sensors, cameras, and the like provided in the target device 12 itself.

Note that here, a description will be given assuming that learning data for one shot or several shots is provided from a sensor or a camera equivalent to the target device 12 provided separately from the target device 12 by the processing of step S21.

In step S13, the user supplies the acquired learning data of one shot or several shots to the multitask structure generating unit 11 including the PC.

In step S31, the multitask structure generating unit 11 acquires learning data of one shot or several shots.

In step S32, the multitask structure generating unit 11 constructs a prototype structure of the multitask structure, which simply performs parallel processing on the existing single task structure, as the multitask structure for realizing multitask to be executed on the target device 12.

However, as described with reference to fig. 1, the prototype structure of the multitask structure constructed by the processing in step S32 contains overlapping processing modules and the like, and there is a possibility that the processing load is large and the processing time is long; thus, the prototype structure is not yet an optimized multitask structure.

In step S33, the multitask structure generating unit 11 performs the prototype structure search process described later with reference to the flowchart of fig. 9, searches for similar processing modules, and optimizes the multitask structure based on the prototype structure through processes such as parameter sharing, processing module sharing, and coupling or decoupling of processing modules, thereby achieving higher processing efficiency and completing the multitask structure.

In step S34, the multitask structure generating unit 11 supplies the completed multitask structure to the target device 12.

In step S41, the target device 12 acquires the multitask structure supplied from the multitask structure generating unit 11 and executes multitask.

Through the above-described processing, a multitask structure in an optimized state is constructed, so that optimal multitasking is realized by the target device 12.

As a result, the parameters of the processing modules and blocks, and the processing modules and blocks themselves, are shared, and the multitask structure is thereby optimized, so that memory overhead and processing load can be reduced and processing speed can be increased.

< Configuration example of the multitask structure generating unit >

Next, a configuration example of the multitask structure generating unit 11 including the PC will be described with reference to the block diagram of fig. 7.

More specifically, the multitask structure generating unit 11 includes a control unit 31, an input unit 32, an output unit 33, a storage unit 34, a communication unit 35, a drive 36 and a removable storage medium 37, which are electrically connected to each other via a bus 38.

The control unit 31 contains a processor and a memory, and controls the entire operation of the multitask structure generating unit 11.

Further, the control unit 31 includes a structure search unit 51 that performs a prototype structure search process as described below, and constructs a multitask structure for causing the target device 12 to perform multitasking using an existing single task structure.

Note that details of the structure search unit 51 will be described later with reference to fig. 8.

The input unit 32 includes a keyboard, operation buttons, and the like, receives an operation input by a user, and outputs the operation input to the control unit 31.

The output unit 33 includes a display unit that displays an image, for example, a display including a Liquid Crystal Display (LCD), an organic Electroluminescence (EL), or the like, and a sound output unit including a speaker that outputs a sound, and outputs an image and a sound as necessary.

The storage unit 34 is controlled by the control unit 31, includes a Hard Disk Drive (HDD), a Solid State Drive (SSD), a semiconductor memory, and the like, and writes or reads various data and programs.

Further, as described with reference to fig. 6, the storage unit 34 stores learning data of one shot or several shots captured by the sensors or cameras provided in the target device 12, or by sensors or cameras having performance equivalent to those provided in the target device 12, and also stores single-task structures (NNs) for existing hardware.

The communication unit 35 is controlled by the control unit 31, communicates with the target device 12 in a wired (or wireless (not shown)) manner via a communication network represented by a local area network (LAN) or the like, and transmits and receives the generated multitask structure (NN).

The drive 36 reads and writes data to a removable storage medium 37 such as a magnetic disk (including a flexible disk), an optical disk (including a compact disk-read only memory (CD-ROM) and a Digital Versatile Disk (DVD)), a magneto-optical disk (including a Mini Disk (MD)), or a semiconductor memory.

< Configuration example of the structure search unit >

Next, a configuration example of the structure search unit 51 will be described with reference to fig. 8.

The structure search unit 51 includes a single-task structure acquisition unit 71, a similarity comparison unit 72, a coupling and decoupling unit 73, a simulation unit 74, an accuracy check unit 75, a score calculation unit 76, a score determination unit 77, a learning determination unit 78, a learning unit 79, a multitask structure storage unit 80, and a multitask structure output unit 81.

The single-task structure acquisition unit 71 reads necessary existing single-task structures (NNs) from, for example, the storage unit 34 according to the multitasking to be executed by the target device 12, and constructs a prototype structure of the multitask structure.

That is, for example, as described with reference to fig. 1, in a case of constructing a multitask structure (NN) including the three types of tasks of object recognition, map construction, and path planning with motion control, the existing single tasks T1 to T3 in fig. 1 are read, and a prototype structure of the multitask structure as shown in fig. 1 is constructed.

In this case, each of the plurality of single-task structures included in the prototype structure of the multitask structure may contain processing modules or blocks that perform the same or similar processing; overlapping processing may be repeated, or wasteful processing such as repeatedly reading the same parameter may be included, so the prototype structure is not yet a complete (optimized) multitask structure.

The similarity comparison unit 72 compares the similarity of the processing modules or blocks containing a plurality of processing modules in the single-tasking processing structure included in the prototype structure of the multitasking structure, determines whether the similarity is higher than a predetermined threshold, and outputs the result to the coupling and decoupling unit 73 in the case where the similarity is higher than the predetermined threshold, in order to optimize the multitasking.

Here, there are two types of similarity: input/output similarity, which is the similarity of input/output data in units of processing modules or blocks; and a parameter similarity that is a similarity of parameters corresponding to the hyper-parameters in the NN used in units of processing modules or blocks.

The similarity comparison unit 72 determines whether there is a similarity for each of the input-output similarity and the parameter similarity for the processing module or block.
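The disclosure does not fix how these two similarities are computed. As one plausible sketch (the cosine measure, the probe inputs, and the threshold value below are illustrative assumptions, not the claimed method), the input-output similarity can be estimated by feeding the same probe data to both candidates and comparing their outputs, and the parameter similarity by comparing their flattened parameter vectors:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def input_output_similarity(module_a, module_b, probe_inputs) -> float:
    """Feed the same probe inputs to both modules and compare their outputs
    (assumes both modules produce outputs of the same shape for these inputs)."""
    return float(np.mean([cosine(module_a(x), module_b(x)) for x in probe_inputs]))

def parameter_similarity(params_a, params_b) -> float:
    """Compare the flattened parameter (coefficient) vectors of two modules or blocks."""
    va = np.concatenate([np.asarray(p).ravel() for p in params_a])
    vb = np.concatenate([np.asarray(p).ravel() for p in params_b])
    if va.shape != vb.shape:      # differently sized modules are simply treated as dissimilar
        return 0.0
    return cosine(va, vb)

SIMILARITY_THRESHOLD = 0.9        # the "predetermined threshold" of the text (assumed value)
```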

Note that in the case where the similarity is lower than the predetermined threshold and is not similar, the similarity comparison unit 72 cannot optimize multitasking, and thus stores the processing module or block as a completed processing module in the multitasking structure storage unit 80.

The coupling and decoupling unit 73 shares parameters among processing modules or blocks having a high input-output similarity or parameter similarity between the single-task NNs, or shares the processing modules or blocks themselves, and, if necessary, couples or decouples the processing modules or blocks to optimize the prototype multitask structure, thereby constructing a complete new multitask structure.

The coupling and decoupling unit 73 then outputs the optimized and newly constructed multitask structure to the simulation unit 74.

That is, as described with reference to fig. 2, in a case where the processing module M2 and the processing module M12 are similar to each other, the coupling and decoupling unit 73 sets a parameter to be shared by the processing module M2 and the processing module M12.

Further, as described with reference to fig. 3, when the input-output similarity and the parameter similarity between the processing module M2 and the block including the processing modules M11 to M13 are high, the coupling and decoupling unit 73 couples the processing modules M11 to M13 to construct a processing module M2', and sets a parameter to be shared with the processing module M2.

Further, as described with reference to fig. 4, in a case where the input-output similarity and the parameter similarity between the processing modules M2 and M12 are high, the coupling and decoupling unit 73 shares the processing module M2 in place of the processing module M12 and rearranges the connection configuration so that the output result of the processing module M2 is input to the processing module M13 at the subsequent stage.

The simulation unit 74 performs a simulation of a case where the target device 12 is operated by the newly constructed, optimized multitask structure (NN), and outputs the simulation processing result to the accuracy check unit 75.

The accuracy check unit 75 checks the processing accuracy of the newly constructed, optimized multitask structure from its simulation processing result, and outputs information on the checked processing accuracy to the score calculation unit 76.

The score calculation unit 76 calculates a score related to the processing accuracy of the newly constructed, optimized multitask structure based on the processing accuracy of the simulation processing result, and outputs the score to the score determination unit 77.

The score determination unit 77 compares the score related to the processing accuracy of the newly constructed, optimized multitask structure (NN) with a predetermined threshold to determine whether the newly constructed multitask structure is usable, and outputs the usable multitask structure to the learning determination unit 78 together with the score.

The learning determination unit 78 determines whether a newly constructed optimized multi-tasking structure considered to be an available multi-tasking structure needs to be relearned.

For example, in a case where the score relating to the processing accuracy is higher than the predetermined threshold for usability but the processing accuracy is still insufficient, relearning is considered necessary to improve the processing accuracy, and the determination result is output to the learning unit 79.

Further, for example, in a case where the score relating to the processing accuracy indicates sufficient processing accuracy, relearning to improve the processing accuracy is not necessary; therefore, the determination result is output to the learning unit 79, and the newly constructed, usable multitask structure is output to the multitask structure storage unit 80 in association with the score.

The learning unit 79 causes the newly constructed optimized multitask structure to be learned based on the learning data, updates parameters used by the various processing modules, and stores the updated parameters in the multitask structure storage unit 80.

As described above, in the multitask structure storage unit 80, similar processing modules or blocks having an input-output similarity or a parameter similarity higher than a predetermined threshold are shared, their parameters are shared, or they are reconfigured by coupling or decoupling as necessary, and the newly constructed, optimized multitask structure is stored together with its score.

In other words, in the multitask structure storage unit 80, a newly constructed optimized multitask structure in which the shared processing modules and blocks, the shared parameters, the connection style of the processing modules reconfigured by coupling or decoupling, and the like are variously different is stored together with the score.

The multitask structure output unit 81 outputs the optimized newly constructed multitask structure having the highest score among various newly constructed multitask structures stored in the multitask structure storage unit 80 to the target device 12, and causes the target device 12 to execute multitask.

< prototype Structure search processing >

Next, the prototype-structure search process will be described with reference to the flowchart of fig. 9.

In step S111, the structure search unit 51 performs an input-output similarity comparison structure search process, searches for processing modules or blocks having high input-output similarities between the single task structures included in the prototype structure of the multitask, and optimizes and constructs a new multitask structure based on the search result. Then, the structure search unit 51 stores the constructed multitask structure and the score as an evaluation of the processing accuracy.

Note that the input-output similarity comparison structure search process will be described in detail later with reference to the flowchart of fig. 10.

In step S112, the structure search unit 51 performs a parameter similarity comparison structure search process, searches for processing modules or blocks having high parameter similarity between the single task structures included in the multi-tasking prototype structure, and optimizes and constructs a new multi-tasking structure based on the search result. Then, the structure search unit 51 stores the constructed multitask structure and the score as an evaluation of the processing accuracy.

Note that the parameter similarity comparison structure search process will be described in detail later with reference to the flowchart of fig. 12.

Through the above processing, a multitask structure is newly constructed based on the input-output similarity and the parameter similarity, and the newly constructed multitask structure is stored together with the score.

Among the multitask structures constructed as described above, the structure having the highest score is selected, and multitasking is performed by the target device 12.

Note that although the example in which the parameter similarity comparison structure search process is executed after the input-output similarity comparison structure search process is executed has been described above, the order of the processes may be interchanged. In addition, in the above description, an example of performing the input-output similarity comparison structure search processing and the parameter similarity comparison structure search processing is described. However, only one of the input-output similarity comparison structure search process and the parameter similarity comparison structure search process may be performed.

< Input-output similarity comparison structure search process >

Next, the input-output similarity comparison structure search process will be described with reference to the flowchart of fig. 10.

In step S131, the single-task structure acquisition unit 71 reads the necessary existing single-task structures from, for example, the storage unit 34 according to the multitasking to be executed by the target device 12, and constructs a prototype structure of the multitask structure.

In step S132, the similarity comparison unit 72 sets, as a processing target combination, an unprocessed combination among the combinations of processing modules, or of blocks including a plurality of processing modules, in the single-task structures included in the prototype structure of the multitask structure.

In step S133, the similarity comparison unit 72 calculates the input-output similarity of the processing target combination of processing modules or of blocks containing a plurality of processing modules.

In step S134, the similarity comparison unit 72 determines whether the input-output similarity is higher than a predetermined threshold.

In step S134, in the case where the input-output similarity is higher than the predetermined threshold, the similarity comparison unit 72 outputs the input-output similarity to the coupling and decoupling unit 73 so as to optimize multitasking.

In step S135, the coupling and decoupling unit 73 couples or decouples the processing modules or blocks as necessary to share the processing modules or blocks having the high input-output similarity between the single-task structures, thereby constructing a new multitask structure.

The coupling and decoupling unit 73 then outputs the newly constructed multitask structure to the simulation unit 74.

That is, as described with reference to fig. 2 to 4, the coupling and decoupling unit 73 couples or decouples the processing modules or blocks as necessary in order to share the processing modules or blocks having high input-output similarity.

Further, for example, as shown in fig. 11, the processing modules may be coupled or decoupled such that data processed in a time series may be efficiently processed in parallel.

That is, fig. 11 shows a processing structure in which parallel processing of the data D1 to D4 (shown in different styles in the figure) is realized by four processing modules that perform the processes a to d.

With this processing structure, parallel processing in which each piece of data is processed over four clock cycles by the four processing modules can be realized.

In other words, in clock cycle 0, the pieces of data D1 to D4 are set as wait instructions.

Then, every clock cycle, the data D1 to D4 sequentially undergo the processes a to d in parallel.

That is, in clock cycle 1, which is the next timing, the data D2 to D4 are set as wait instructions, and the process a is performed on the data D1.

In clock cycle 2, the data D3 and D4 are set as wait instructions, the process b is performed on the data D1, the process a is performed on the data D2, and parallel processing is performed.

In clock cycle 3, the data D4 is set as a wait instruction, the process c is performed on the data D1, the process b is performed on the data D2, the process a is performed on the data D3, and parallel processing is performed.

In clock cycle 4, the process d is performed on the data D1, the process c is performed on the data D2, the process b is performed on the data D3, the process a is performed on the data D4, and parallel processing is performed.

In clock cycle 5, the data D1 is set as a completion instruction, the process d is performed on the data D2, the process c is performed on the data D3, the process b is performed on the data D4, and parallel processing is performed.

In clock cycle 6, the data D1 and D2 are set as completion instructions, the process d is performed on the data D3, the process c is performed on the data D4, and parallel processing is performed.

In clock cycle 7, the data D1 to D3 are set as completion instructions, the process d is performed on the data D4, and parallel processing is performed.

In clock cycle 8, the pieces of data D1 to D4 are set as completion instructions.

Here, for example, in a case where the process a is a read process, the process b is a decode process, the process c is an execution process, and the process d is a write-back process, parallel processing can be executed on the pieces of data D1 to D4 in clock cycles 0 to 8 based on the hardware configuration and the memory bandwidth of the target device 12.

Further, the processing modules may be coupled or decoupled according to the processing capability of the hardware. That is, in a case where the process a takes twice the processing time of the processes b to d, the processing module for the process a may be decoupled into two so that its processing time becomes equal to that of the processes b to d, and the processing times of all the processing modules may thus be equalized according to the performance or processing capability of the hardware.
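The clock-cycle schedule of fig. 11 can be reproduced with a short simulation. The sketch below is a simplified model (not the claimed implementation): the data D1 to D4 advance through the processes a to d one stage per clock cycle, and the printout lists which process handles which data in each cycle, matching the walkthrough above.

```python
PROCESSES = ["a", "b", "c", "d"]   # e.g. read, decode, execute, write back
DATA = ["D1", "D2", "D3", "D4"]

def pipeline_schedule(data, stages):
    """Return, for each clock cycle, the (stage, data) pairs executed in parallel."""
    total_cycles = len(data) + len(stages)      # work is carried in cycles 1 to 8
    schedule = []
    for cycle in range(total_cycles + 1):
        active = []
        for i, item in enumerate(data):
            stage_index = cycle - 1 - i         # item i enters the pipeline at cycle i + 1
            if 0 <= stage_index < len(stages):
                active.append((stages[stage_index], item))
        schedule.append((cycle, active))
    return schedule

for cycle, active in pipeline_schedule(DATA, PROCESSES):
    work = ", ".join(f"process {s} on {d}" for s, d in active) or "wait / complete"
    print(f"clock cycle {cycle}: {work}")
```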

In step S136, the simulation unit 74 performs a simulation of a case where the newly constructed multitask structure is operated by the target device 12 with its computing power and memory bandwidth, and outputs the simulation processing result to the accuracy check unit 75.

In step S137, the accuracy check unit 75 calculates the processing accuracy of the newly constructed multitask structure from the simulation processing result of the newly constructed multitask structure, and outputs information of the calculated processing accuracy to the score calculation unit 76.

In step S138, the score calculation unit 76 calculates a score related to the processing accuracy of the newly constructed multitask structure based on the processing accuracy of the simulation processing result, and outputs the score to the score determination unit 77.

In step S139, the score determining unit 77 determines whether the newly constructed multitask structure is optimized by comparing the score related to the processing accuracy of the newly constructed multitask structure with a predetermined threshold value.

In the case where the score relating to the processing accuracy is higher than the predetermined threshold value and the newly constructed multitask structure is considered to be optimized in step S139, the score determining unit 77 outputs the multitask structure to the learning determining unit 78 together with the score.

In step S140, the learning determination unit 78 determines whether newly constructed multitask structures considered as optimized multitask structures need to be relearned.

In step S140, for example, in a case where the score related to the processing accuracy of the newly constructed multitask structure is higher than the predetermined score, so that the structure is considered optimized, but the processing accuracy is still insufficient, the learning determination unit 78 considers that relearning is necessary in order to improve the processing accuracy, and outputs the newly constructed multitask structure to the learning unit 79.

In step S141, the learning unit 79 relearns the newly constructed multitask structure based on the learning data.

In step S142, the learning unit 79 reconfigures and updates, by the relearning, the parameters used by the respective processing modules.

Then, in step S143, the learning unit 79 stores the reconfigured multitask structure in the multitask structure storage unit 80 in association with the score.

In step S144, the similarity comparison unit 72 determines whether there is an unprocessed combination among the combinations of processing modules, or of blocks including a plurality of processing modules, in the single-task structures included in the prototype structure of the multitask structure, and in a case where there is an unprocessed combination, the process returns to step S132.

That is, the processes of steps S132 to S144 are repeated until the optimization process based on the input-output similarity has been performed on all combinations of processing modules, or of blocks including a plurality of processing modules, in the single-task structures included in the prototype structure of the multitask structure.

Then, in step S144, in a case where it is determined that the optimization process has been performed on all such combinations, the process ends.

Further, in a case where it is considered in step S134 that the input-output similarity is lower than the predetermined threshold and there is no similarity, or in a case where it is considered in step S139 that the score relating to the processing accuracy is lower than the predetermined threshold and the newly constructed multitask structure is not optimized, the processing proceeds to step S144. That is, in this case, the optimization processing is not substantially performed.

Further, in step S140, in the case where relearning is not required, the learning determination unit 78 stores the reconfigured multitask structure in the multitask structure storage unit 80 in association with the score.

Through the above-described processing, as described above, in the multitask structure storage unit 80, similar processing modules or blocks having an input-output similarity higher than a predetermined threshold are reconfigured by sharing the processing modules or blocks, sharing parameters, or coupling or decoupling as needed, and thereby the structure is stored as a newly constructed optimized multitask structure together with scores.

In other words, in the multitask structure storage unit 80, a newly constructed optimized multitask structure containing processing modules or blocks reconfigured by sharing the processing modules or blocks performing similar or identical processing, sharing parameters, or coupling or decoupling is stored together with the scores.

That is, when a new multitask structure is constructed, a prototype structure of the multitask structure is generated based on existing single-task structures. In addition, similar processing modules and blocks are searched for according to the input-output similarities of the processing modules and blocks between the single-task structures, parameters are shared, and the processing modules and blocks are coupled or decoupled for optimization, so that a new multitask structure is constructed and stored together with its score.
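Putting steps S131 to S144 together, the loop can be summarized in the following sketch. It is deliberately pseudocode-like: the callables passed in stand for the units of fig. 8 (similarity comparison unit 72, coupling and decoupling unit 73, simulation unit 74, score calculation unit 76, learning determination unit 78, and learning unit 79), and the threshold values are placeholders rather than values given by the disclosure.

```python
def input_output_similarity_search(combinations, similarity, optimize, simulate_and_score,
                                   needs_relearning, relearn,
                                   sim_threshold=0.9, score_threshold=0.8):
    """Sketch of the loop of steps S132 to S144 of fig. 10."""
    storage = []                                              # multitask structure storage unit 80
    for module_a, module_b in combinations:                   # S132: next unprocessed combination
        if similarity(module_a, module_b) <= sim_threshold:   # S133, S134
            continue                                          # not similar: nothing to optimize
        candidate = optimize(module_a, module_b)              # S135: share / couple / decouple
        score = simulate_and_score(candidate)                 # S136 to S138
        if score <= score_threshold:                          # S139: not regarded as optimized
            continue
        if needs_relearning(score):                           # S140
            candidate = relearn(candidate)                    # S141, S142
        storage.append((candidate, score))                    # S143: store with its score
    return storage                                            # the highest-scoring structure is output later
```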

< Parameter similarity comparison structure search process >

Next, the parameter similarity comparison structure search process will be described with reference to the flowchart of fig. 12. Note that the processing of steps S161, S162, and S166 to S174 in the flowchart of fig. 12 is similar to the processing of steps S131, S132, and S136 to S144 in the flowchart of fig. 10, and thus the description thereof is omitted.

That is, by the processing of steps S161 and S162, the necessary existing single-task structures are read from, for example, the storage unit 34 according to the multitasking to be executed by the target device 12, and a prototype structure of the multitask structure is constructed. Then, an unprocessed combination among the combinations of processing modules, or of blocks including a plurality of processing modules, in the single-task structures included in the prototype structure of the multitask structure is set as a processing target combination.

Then, in step S163, the similarity comparison unit 72 calculates the parameter similarity of the processing target combination of processing modules or of blocks containing a plurality of processing modules.

In step S164, the similarity comparison unit 72 determines whether the parameter similarity is higher than a predetermined threshold.

In step S164, in the case where the parameter similarity is higher than the predetermined threshold, the similarity comparison unit 72 outputs the parameter similarity to the coupling and decoupling unit 73 so as to optimize multitasking.

In step S165, the coupling and decoupling unit 73 couples or decouples the processing modules or blocks as necessary to share the parameters in the processing modules or blocks having high parameter similarity between the single task structures, thereby constructing a new multitask structure.

The coupling and decoupling unit 73 then outputs the newly constructed multitask structure to the simulation unit 74.

Then, through the processing of steps S166 to S174, as described above, the structure is optimized by the reconfiguration of coupling or decoupling so that the parameters are shared by similar processing modules or blocks having parameter similarity higher than the predetermined threshold, and a new multitask structure is constructed and stored in the multitask structure storage unit 80 together with the score.

In other words, in the multitask structure storage unit 80, the processing modules and blocks are reconfigured and optimized by coupling or decoupling so that the processing modules and blocks share parameters, and the newly constructed multitask structure is stored together with the scores.

The multitask structure output unit 81 outputs, to the target device 12, a newly constructed multitask structure having a score higher than a predetermined score among the newly constructed multitask structures stored in the multitask structure storage unit 80, that is, among the structures optimized by reconfiguring, coupling, or decoupling processing modules and blocks so that the processing modules and blocks or their parameters are shared, and causes the target device 12 to perform multitasking.

That is, when a new multitask structure is constructed, a prototype structure of the multitask structure is generated based on existing single-task structures. Similar processing modules and blocks are searched for according to the input-output similarity or parameter similarity of the processing modules and blocks between the single-task structures; parameters are shared, the processing modules and blocks are shared, and the processing modules and blocks are coupled or decoupled for optimization; and a new, optimized multitask structure is constructed and stored together with its score.

Then, through the processing in step S34 described with reference to fig. 6, the multitask structure output unit 81 outputs the NN having a score higher than the predetermined score among the optimized multitask structures to the target device 12. Thus, the target device 12 can implement multitasking using the NN with the optimized multitask structure.

Note that, in the above description, an example is described in which, when the optimized multitask structure is stored and output together with the score, a multitask structure higher than a predetermined score is output to the target device 12. However, only the multitasking structure with the high score may be stored, and the stored multitasking structure may be output to the target device 12.

Through the processes as described above, for example, in the case of implementing multitasking on a newly developed target device 12 with improved performance, the single task structures used in existing apparatuses can be reused: processing modules and blocks can be shared, coupled, and decoupled, parameters can be shared, or parameters can be reset by relearning with learning data, according to the performance and functions of the improved target device 12, so that an NN serving as a multitask structure can be developed easily.

Further, in the above description, an example has been described in which, according to the target multitask, first, a prototype structure of a multitask structure is configured by combining existing single task structures, and similar processing modules and blocks are shared, coupled, and decoupled based on input-output similarities and parameter similarities of the processing modules and blocks included in the single task structure, parameters are shared by the similar processing modules and blocks, or parameters are reset by relearning using learning data.

However, in the case where an existing multitask structure is reused for a new target device 12, the existing multitask structure can be regarded as the prototype structure of the multitask structure and optimized in a manner similar to that described above according to the performance and functions of the new target device 12, so that it can be reused for the new target device 12 and development costs can be reduced.

<3. application example 1>

< prototype Structure search processing with improved search efficiency >

In the above description, an example has been described in which a combination between all the processing modules of a single task structure contained in a prototype structure of a multitask structure or a combination between blocks containing a plurality of processing modules is set as a combination to be processed, and a combination having an input-output similarity or a parameter similarity higher than a predetermined threshold is searched for.

However, some of the single task structures contained in the prototype structure of the multitasking structure contain processing modules and blocks with low similarity to other single task structures.

Therefore, among the single task structures included in the prototype structure of the multitask structure, those whose processing modules or blocks have a low degree of similarity to the other single task structures can be excluded from the processing targets by statistical processing, and the prototype structure search processing can then be performed.

Through the above processing, a single task structure including a processing module or block with low input/output similarity or parameter similarity is excluded from the single task structures included in the prototype structure, so that the prototype structure search processing can be performed only on the single task structure including the processing module or block with high input/output similarity or parameter similarity, and the search efficiency of the prototype structure search processing can be improved.

Next, a prototype-structure search process with improved search efficiency will be described with reference to the flowchart of fig. 13. Note that the processing of steps S208, S209 in the flowchart of fig. 13 is similar to the processing of steps S111, S112 described with reference to the flowchart of fig. 9, and thus the description thereof will be omitted.

In step S201, the single task structure obtaining unit 71 reads the necessary existing single task structures from, for example, the storage unit 34 in accordance with the multitask to be executed by the target device 12, and constructs a prototype structure of the multitask structure.

In step S202, the similarity comparing unit 72 sets as a processing target combination an unprocessed combination among combinations between processing modules included in a single task structure included in the prototype structure of the multitask structure or among combinations between blocks including a plurality of processing modules.

In step S203, the similarity comparison unit 72 calculates the input-output similarity of a combination between the processing modules as the processing target combination or between blocks containing a plurality of processing modules.

In step S204, the similarity comparison unit 72 compares the input-output similarity with a predetermined threshold.

In step S205, the similarity comparison unit 72 stores the comparison result between the input-output similarity and a predetermined threshold.

In step S206, the similarity comparing unit 72 determines whether there is an unprocessed combination among combinations among processing modules included in a single task structure included in the prototype structure of the multitask structure or among blocks including a plurality of processing modules.

In step S206, if it is determined that an unprocessed combination exists among the combinations of the processing modules included in the single task structure included in the prototype structure of the multitask structure or among the blocks including a plurality of processing modules, the process returns to step S202.

That is, the input-output similarities of all combinations between the processing modules included in the single task structure or between the blocks including a plurality of processing modules included in the prototype structure of the multitask structure are compared with the threshold value, and in the case where it is determined that there is no unprocessed combination, the processing proceeds to step S207.

In step S207, the similarity comparison unit 72 statistically processes the comparison results between the predetermined threshold and the input-output similarities of the combinations between the processing modules, or between the blocks including a plurality of processing modules, included in the single task structures of the prototype structure of the multitask structure. Then, the similarity comparison unit 72 excludes, from the processing targets of the prototype structure search processing, single task structures whose processing modules or blocks have a low input-output similarity with the other single task structures.
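A minimal sketch of the statistical processing in step S207, assuming that a single task structure is excluded when the ratio of its above-threshold combinations falls below a cut-off, might look as follows; the ratio criterion and all names are assumptions for illustration.

```python
from collections import defaultdict

def exclude_low_similarity_structures(comparison_results, min_ratio=0.1):
    """comparison_results: iterable of (structure_id, other_structure_id, above_threshold)
    tuples accumulated in steps S203 to S205. A single task structure whose ratio of
    above-threshold combinations with the other structures falls below min_ratio is
    excluded from the subsequent search (steps S208 and S209)."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for sid, other_sid, above in comparison_results:
        for s in (sid, other_sid):
            totals[s] += 1
            if above:
                hits[s] += 1
    return {s for s in totals if hits[s] / totals[s] < min_ratio}
```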

Then, in steps S208 and S209, the input-output similarity comparison structure search processing and the parameter similarity comparison structure search processing are executed.

As a result of this processing, among the single task structures included in the prototype structure of the multitask structure, those whose processing modules or blocks including a plurality of processing modules have a low input-output similarity with the other single task structures are excluded, and the remaining structures are optimized by sharing parameters, sharing the processing modules or blocks, and coupling or decoupling them, so that the search efficiency of the prototype structure search processing can be improved.

Note that in the prototype structure search processing described with reference to the flowchart of fig. 13, an example has been described in which single task structures including processing modules and blocks having a low input-output similarity are excluded from the processing targets of the prototype structure search processing. However, single task structures whose processing modules or blocks have a low parameter similarity with another single task structure may also be excluded from the targets of the prototype structure search processing. Furthermore, single task structures with both low input-output similarity and low parameter similarity may be excluded from the targets of the prototype structure search processing.

<4. application example 2>

In the above, as described with reference to fig. 6, an example has been described in which the multitask structure generating unit 11, including a PC, uses learning data acquired from a sensor corresponding to the target device 12 or from the target device 12 itself, and supplies the multitask structure found by the prototype structure search processing to the target device 12 to implement multitasking.

However, as shown in the flowchart of fig. 14, the target device 12 may independently implement multitasking.

In this case, the target device 12 functions as the multitask structure generating unit 11.

That is, in step S231, the target device 12 acquires (reads) learning data of one shot or several shots of sensors having configurations corresponding to various sensors included in the target device 12.

In step S232, the target device 12 serving as the multitask structure generating unit 11 constructs a prototype structure of the multitask structure by combining the single task structures required for the multitask structure.

In step S233, the target device 12 serving as the multitask structure generating unit 11 executes the prototype structure search process that has been described with reference to the flowchart of fig. 9, searches for similar processing modules, and optimizes the multitask structure through processes such as parameter sharing, processing module sharing, and coupling or decoupling of the processing modules, thereby achieving higher processing efficiency and constructing a complete multitask structure.
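A minimal sketch of steps S231 to S233 running entirely on the target device 12 might look as follows, reusing the hypothetical build_multitask_structure helper from the earlier sketch; the device methods are likewise assumptions.

```python
def reconstruct_on_device(device, tasks, storage, score_threshold=0.8):
    """Sketch of the target device 12 acting as the multitask structure
    generating unit 11 (steps S231 to S233); all attribute and helper
    names are hypothetical."""
    device.learning_data = device.read_learning_data(shots=1)   # S231: one-shot or few-shot sensor data
    structure, score = build_multitask_structure(                # S232 and S233: prototype + search
        tasks, storage, target_device=device, score_threshold=score_threshold)
    device.deploy(structure)                                     # run multitasking with the new structure
    return structure, score
```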

Through the above-described processing, a multitask structure can be generated only by the target device 12 serving as the multitask structure generating unit 11.

Thus, for example, in the case where a multitask structure necessary for operation on earth is built in the target device 12 as an existing multitask structure, the target device 12 can reconstruct the multitask structure itself through the above-described processing.

Thus, for example, when the target device 12 recognizes that the target device 12 has reached the lunar surface, a multitask structure suitable for the lunar surface may be reconstructed by acquiring learning data necessary to operate on the lunar surface, relearning the multitask structure, and updating the parameters.

Thus, the target device 12 can repeatedly and autonomously reconstruct an optimal multitask structure in the current environment in real time while adapting to the current environment, and can realize multitasking while adapting to various environments.

<5. application example 3>

In the above, as described with reference to fig. 14, an example has been described in which the target device 12 serves as the multitask structure generating unit 11 and realizes multitasking with an optimized multitask structure by performing the prototype structure search process using learning data from its own sensors.

However, as shown in the flowchart of fig. 15, a cloud computer may be used as the multitask structure generating unit 11 to implement multitasking.

That is, in step S251, the target device 12 acquires learning data of one shot or several shots of sensors having configurations corresponding to various sensors included in the target device 12.

In step S252, the target device 12 transmits learning data of one shot or several shots of sensors having configurations corresponding to various sensors included in the target device 12 to the cloud computer 101.

In step S271, the cloud computer 101 serving as the multitask structure generating unit 11 acquires learning data of one shot or several shots.

In step S272, the cloud computer 101 serving as the multitask structure generating unit 11 combines the single task structures required for the multitask structure to construct a prototype structure of the multitask structure.

In step S273, the cloud computer 101 serving as the multitask structure generating unit 11 executes the prototype structure search process that has been described with reference to the flowchart of fig. 9, searches for similar processing modules, and optimizes the multitask structure through processes such as parameter sharing, processing module sharing, and coupling or decoupling of the processing modules, thereby achieving higher processing efficiency and constructing a complete multitask structure.

In step S274, the cloud computer 101 serving as the multitask structure generating unit 11 provides the complete multitask structure to the target device 12.

In step S253, the target device 12 acquires the multitask structure supplied from the cloud computer 101 serving as the multitask structure generating unit 11.

In step S254, the target device 12 executes multitasking based on the acquired multitask structure.

Through the above-described processing, the prototype structure search processing is performed by the cloud computer 101 to construct an optimized multitask structure, thereby enabling the target device 12 to perform multitasking efficiently.
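On the device side, the exchange in fig. 15 might be sketched as a simple request and response; the endpoint URL and the JSON payload format are assumptions for illustration and do not correspond to any actual service.

```python
import json
import urllib.request

def request_multitask_structure(learning_data, tasks,
                                url="https://example.com/multitask-structure"):
    """Send one-shot or few-shot learning data to the cloud computer 101
    (step S252) and receive the completed multitask structure (step S253).
    The URL and payload schema are hypothetical."""
    payload = json.dumps({"tasks": tasks, "learning_data": learning_data}).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))  # serialized multitask structure
```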

Therefore, by sharing the parameters of the processing modules and the processing modules themselves, the memory overhead and the processing load can be reduced, and the processing speed can be increased.

Further, the target device 12 may implement multitasking based on a multitasking structure by communicating with the cloud computer 101.

As a result, a multitasking structure containing optimized processing modules and blocks can be easily constructed.

<6. software execution example >

Fig. 16 illustrates a configuration example of a general-purpose computer. The personal computer has a built-in Central Processing Unit (CPU) 1001. An input and output interface 1005 is connected to the CPU 1001 via a bus 1004. A Read Only Memory (ROM) 1002 and a Random Access Memory (RAM) 1003 are connected to the bus 1004.

The input and output interface 1005 is connected to an input unit 1006 including an input device for inputting an operation command by a user such as a keyboard or a mouse, an output unit 1007 that outputs an image of a processing operation screen or a processing result to a display device, a storage unit 1008 including a hard disk drive or the like for storing a program and various data, and a communication unit 1009 including a Local Area Network (LAN) adapter or the like and performing communication processing via a network typified by the internet. Further, the input and output interface 1005 is connected to a drive 1010 that reads and writes data to a removable storage medium 1011 such as a magnetic disk (including a flexible disk), an optical disk (including a compact disk-read only memory (CD-ROM) and a Digital Versatile Disk (DVD)), a magneto-optical disk (including a Mini Disk (MD)), or a semiconductor memory.

The CPU 1001 reads out a program stored in the ROM 1002, or a program read from a removable storage medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory and installed in the storage unit 1008, and executes various processes in accordance with the program loaded from the storage unit 1008 into the RAM 1003. Further, the RAM 1003 also appropriately stores data and the like necessary for the CPU 1001 to execute the various processes.

In the computer configured as described above, for example, the CPU 1001 loads a program stored in the storage unit 1008 into the RAM 1003 via the input and output interface 1005 and the bus 1004 and executes the program, thereby executing the series of processes described above.

The program executed by the computer (CPU 1001) can be provided by being recorded on a removable storage medium 1011 as a package medium or the like, for example. Further, the program may be provided via a wired or wireless transmission medium such as a local area network, the internet, or digital satellite broadcasting.

In the computer, by mounting the removable storage medium 1011 to the drive 1010, a program can be installed in the storage unit 1008 via the input and output interface 1005. Further, the program may be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. In addition, the program may be installed in the ROM 1002 or the storage unit 1008 in advance.

Note that the program executed by the computer may be a program that is processed chronologically according to the order described in this specification, or may be a program that is processed in parallel or at necessary timing (such as when a call is made).

Note that the CPU 1001 in fig. 16 realizes the function of the control unit 31 in fig. 7.

Further, in this specification, a system refers to a set of a plurality of constituent elements (a device, a module (a component), and the like), and it does not matter whether all the constituent elements are in the same housing. Therefore, both a plurality of devices accommodated in separate housings and connected via a network and one device in which a plurality of modules are accommodated in one housing are systems.

Note that the embodiments of the present disclosure are not limited to the above-described embodiments, and various modifications are possible without departing from the gist of the present disclosure.

For example, in the present disclosure, a configuration of cloud computing may be adopted in which one function is shared by a plurality of apparatuses via a network and is cooperatively processed.

Further, each step described in the above-described flowcharts may be executed by one apparatus or shared by a plurality of apparatuses.

Further, in the case where a plurality of processes are included in one step, the plurality of processes included in the one step may be executed by one apparatus or shared and executed by a plurality of apparatuses.

Note that the present disclosure may adopt the following configuration.

<1> an information processing apparatus comprising:

an optimization unit that optimizes the multitask structure based on a structure of the multitask structure that controls an operation of the target device.

<2> the information processing apparatus according to <1>,

wherein the multitasking structure comprises a plurality of existing single task structures.

<3> the information processing apparatus according to <2>,

wherein each single task structure comprises a processing module or a block comprising a plurality of processing modules, an

The optimization unit optimizes the multi-tasking structure using processing modules or blocks having similarities between the single tasking structures.

<4> the information processing apparatus according to <3>,

wherein the optimization unit optimizes the multitasking structure by using processing modules or blocks having a similarity between single-tasking structures indicating that the similarity is higher than a predetermined value.

<5> the information processing apparatus according to <4>,

wherein the optimization unit optimizes the multitask structure by using a processing module or block having an input-output similarity indicating that the similarity of input-output data is high between single-task structures.

<6> the information processing apparatus according to <4>,

wherein the optimization unit optimizes the multitask structure by using processing modules or blocks between the single task structures having a parameter similarity indicating that a similarity of the used parameters is higher than a predetermined value.

<7> the information processing apparatus according to <3>,

wherein the optimization unit optimizes the multitask structure by sharing processing modules or blocks between the single task structures having a similarity indicating that the similarity is higher than a predetermined value.

<8> the information processing apparatus according to <7>,

wherein the optimization unit optimizes the multitask structure by coupling or decoupling the processing modules or blocks having a similarity higher than a predetermined value between the single task structures to share the processing modules or blocks.

<9> the information processing apparatus according to <8>,

wherein the optimization unit optimizes the multitasking structure by coupling or decoupling the processing modules or blocks having a similarity higher than a predetermined value between the single tasking structures to share the processing modules or blocks according to the performance and function of hardware included in the target device.

<10> the information processing apparatus according to <3>,

wherein the optimization unit optimizes the multitask structure by sharing a parameter used in the processing module or block having a similarity indicating that the similarity is higher than a predetermined value between the single task structures.

<11> the information processing apparatus according to any one of <1> to <10>, further comprising:

a simulation unit that performs a simulation in a case where the multitask structure optimized by the optimization unit is operated by the target device;

a score calculating unit that calculates a score indicating a processing accuracy of multitasking realized by the simulation based on the multitask structure; and

a relearning unit to relearn the optimized multitasking structure based on the score.

<12> the information processing apparatus according to <11>,

wherein the relearning unit updates at least one of the processing module included in the optimized multitask structure and the parameter of the block including the plurality of processing modules by relearning using the learning data.

<13> the information processing apparatus according to <12>,

wherein the learning data is data according to performance and functions of a sensor and a camera provided in the target device.

<14> the information processing apparatus according to <13>,

wherein the processing module and block perform predetermined processing based on at least one of a sensing result supplied from the sensor, an image supplied from the camera, and a processing result of the processing module or block, and output the processing result.

<15> the information processing apparatus according to <14>,

wherein the multitask structure is a program using a neural network that controls the operation of the target device.

<16> the information processing apparatus according to <15>,

wherein each of the processing modules and blocks is a layer or block in a program using a neural network.

<17> the information processing apparatus according to any one of <1> to <16>,

wherein the information processing apparatus is a target device.

<18> the information processing apparatus according to any one of <1> to <16>,

wherein the information processing apparatus is a cloud computer.

<19> an information processing method comprising:

the optimization processing of the multitask structure is optimized based on the structure of the multitask structure that controls the operation of the target device.

<20> a program for causing a computer to function as:

An optimization unit that optimizes the multitask structure based on a structure of the multitask structure that controls an operation of the target device.

REFERENCE SIGNS LIST

11 multitask structure generating unit

12 target device

31 control unit

32 input unit

33 output unit

34 storage unit

35 communication unit

36 driver

37 removable storage medium

51 structure search unit

71 single task structure obtaining unit

72 similarity comparison unit

73 coupling and decoupling unit

74 simulation unit

75 accuracy check unit

76 score calculating unit

77 score determining unit

78 learning determination unit

79 learning unit

80 multitask structure storage unit

81 multitask structure output unit.
