Image processing method and system based on service robot cloud platform

Document No.: 191564  Publication date: 2021-11-02

Reading note: this technology, "基于服务机器人云平台的图像处理方法及系统 (Image processing method and system based on service robot cloud platform)", was designed and created by 周风余, 郝涛, and 尹磊 on 2021-08-11. Its main content is as follows. (The invention belongs to the field of image processing of service robots, and provides an image processing method and system based on a cloud platform of a service robot. The method comprises the steps of obtaining an image to be classified; processing the image to be classified by the optimized image classification network model in the service robot cloud platform to obtain an image classification result; the optimization process of the image classification network model comprises the following steps: calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient; processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates; fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm; and optimizing the parameters of the image classification network model by using the final update, and storing the parameters in the service robot cloud platform.)

1. An image processing method based on a service robot cloud platform is characterized by comprising the following steps:

acquiring an image to be classified;

processing the image to be classified by the optimized image classification network model in the service robot cloud platform to obtain an image classification result;

the optimization process of the image classification network model comprises the following steps:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing parameters of the image classification network model by utilizing the final update, and storing the parameters into the service robot cloud platform.

2. The service robot cloud platform-based image processing method of claim 1, wherein the meta-optimizer system is trained in a meta-learning manner in advance.

3. The service robot cloud platform-based image processing method of claim 1, wherein the meta optimizer system is composed of several meta optimizers.

4. The service robot cloud platform based image processing method of claim 3, wherein the meta optimizer is a two-layer LSTM network.

5. The service robot cloud platform-based image processing method of claim 1, wherein the optimizer used to train the meta-optimizers in the meta-optimizer system is Adam.

6. An image processing system based on a service robot cloud platform, comprising:

the image acquisition module is used for acquiring an image to be classified;

the image classification module is used for processing the image to be classified through an optimized image classification network model in the service robot cloud platform to obtain an image classification result;

the optimization process of the image classification network model comprises the following steps:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing the image classification network model by utilizing the final update, and storing the image classification network model into the service robot cloud platform.

7. The service robot cloud platform-based image processing system of claim 6, wherein the meta-optimizer system is pre-trained in a meta-learning manner.

8. The service robot cloud platform-based image processing system of claim 6, wherein the meta-optimizer system comprises several meta-optimizers.

9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps in the service robot cloud platform based image processing method according to any one of claims 1 to 5.

10. A service robot based on a cloud computing platform, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps in the service robot cloud platform-based image processing method according to any one of claims 1-5.

Technical Field

The invention belongs to the field of image processing of service robots, and particularly relates to an image processing method and system based on a cloud platform of a service robot.

Background

The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.

Deep learning has been successful in image processing and other fields and is being applied ever more widely, making it one of the most popular technologies today. However, training deep neural networks still faces a number of challenges: the optimizers in widespread use, for example SGD, RMSprop, AdaGrad, and Adam, are all designed manually. Vision is a main information source of a service robot, and neural networks for image processing are widely applied in service robots, so image processing accuracy is crucial to a service robot's performance and user experience.

the inventor finds that when the manually designed optimizers are used for training the neural network, the problems of low convergence speed, low convergence precision and the like are often faced, a great deal of time and energy is needed for adjusting the super-parameters such as the learning rate and the like, a great deal of model training experience is needed, time and labor are consumed, and the convergence speed and the precision of a processing result during the image processing network training are finally influenced.

Disclosure of Invention

In order to solve the technical problems in the background art, the invention provides an image processing method and system based on a cloud platform of a service robot, which can improve the convergence speed during image processing network training, the final precision of an image processing network and the image processing capability of the service robot.

In order to achieve the purpose, the invention adopts the following technical scheme:

the invention provides an image processing method based on a service robot cloud platform.

An image processing method of a service robot based on a cloud computing platform, comprising:

the service robot acquires an image to be classified;

processing the image to be classified by the optimized depth image classification network model in the service robot cloud platform to obtain an image classification result;

the optimization process of the image classification network model comprises the following steps:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing the parameters of the image classification network model by using the final update, and storing the parameters in the service robot cloud platform.

Further, the meta-optimizer system is obtained by training in a meta-learning manner in advance.

Further, the meta-optimizer system is composed of several meta-optimizers.

Further, the meta-optimizer is a two-layer LSTM network.

Further, the optimizer used to train the meta-optimizers in the meta-optimizer system is Adam.

A second aspect of the present invention provides an image processing system based on a service robot cloud platform.

An image processing system based on a service robot cloud platform, comprising:

the image acquisition module is used for acquiring an image to be classified;

the image classification module is used for processing the image to be classified through an optimized image classification network model in the service robot cloud platform to obtain an image classification result;

the optimization process of the image classification network model comprises the following steps:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing parameters of the image classification network model by utilizing the final update, and storing the parameters into the service robot cloud platform.

A third aspect of the invention provides a computer-readable storage medium.

A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the service robot cloud platform based image processing method as described above.

A fourth aspect of the present invention provides a service robot based on a cloud computing platform.

A service robot based on a cloud computing platform comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps in the image processing method based on the cloud computing platform of the service robot.

Compared with the prior art, the invention has the beneficial effects that:

calculating the gradient of an image classification network model based on an image sample set, and normalizing the gradient; processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates; fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm; parameters of the image classification network model are optimized by utilizing final updating and are stored in the cloud platform of the service robot, so that the convergence speed of the image classification network model during training is effectively increased, the final loss is reduced, the precision of the image classification network model on an image processing result is improved, and the image processing capacity of the service robot is improved.

Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.

FIG. 1 is a flowchart of an image processing method based on a service robot cloud platform according to an embodiment of the present invention;

FIG. 2 is a block diagram of a meta optimizer of an embodiment of the present invention;

FIG. 3 is a meta-optimizer computational graph of an embodiment of the present invention;

FIG. 4 is a learning rate variation curve under the periodic cosine annealing strategy according to the embodiment of the present invention;

FIG. 5 is a flow chart of meta-optimizer use in an embodiment of the present invention.

Detailed Description

The invention is further described with reference to the following figures and examples.

It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.

It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.

Example one

As shown in fig. 1, the embodiment provides an image processing method based on a service robot cloud platform, which specifically includes the following steps:

step S101: and acquiring an image to be classified.

Step S102: and processing the image to be classified by the optimized image classification network model in the service robot cloud platform to obtain an image classification result.

In step S102, the optimization process of the image classification network model includes:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing the parameters of the image classification network model by using the final update, and storing the parameters in the service robot cloud platform.

A block diagram for optimizing a neural network with a meta-optimizer system is shown in fig. 2. An N-scale meta-optimizer system includes N meta-optimizers {G(φ_i)} (i = 1, …, N). In the optimization process of step t, each meta-optimizer takes the normalized gradient as input; each meta-optimizer G(φ_i) then outputs a candidate update g_t^i according to its own parameter φ_i and historical gradient information. Finally, the Look-Ahead algorithm combines the N candidate updates g_t^1, …, g_t^N into a final output g_t.

Here G(φ_i) is the i-th optimizer in the meta-optimizer system and φ_i is its parameter; g_t^i, the output of the i-th optimizer at step t, is also referred to as a candidate update.

In this embodiment, the meta-optimizer G is embodied as a two-layer LSTM network, which has a great advantage in such tasks because an LSTM can combine historical gradient information to produce the current output.

Stage one: training meta-optimizer system using meta-learning approach

Let the dataset be D = {D_train, D_test}. The artificial neural network f(θ) to be trained on dataset D can be any deep neural network structure; to reduce the computational load, it can simply be constructed as a multilayer perceptron (MLP). Then, a two-layer LSTM network with parameter φ is constructed as the meta-optimizer G, where the hidden size of each LSTM layer is 20. The loss function of the meta-optimizer G is defined as

L(φ) = Σ_{t=1…M} c_t l(θ_t)   (1)

where M is the total number of updates and c_t > 0 is the weight of each step; in this embodiment, M = 100 and c_t = 1 for every t. Under this loss function, the computational graph of the meta-optimizer G(φ) is shown in FIG. 3. During training, gradients propagating back along the dashed line are discarded, which avoids computing second-order gradients of the loss.

The optimizer for training G(φ) is Adam, and its learning rate is annealed as in equation (2):

α = α_min + (1/2)(α_max − α_min)(1 + cos(t_cur π / T))   (2)

where α_max and α_min delimit the range of the learning rate, and t_cur is the number of iterations since the last training restart. α_min is a set small value; this embodiment sets α_min = 10^-5 and α_max = 0.01. T is a hyper-parameter that determines the annealing cycle; t_cur and T are both integers. When a cycle starts, t_cur = 0 and α_0 = α_max. After T iterations the learning rate reaches the minimum value α_min, at which point one cycle ends. At the beginning of the next cycle, t_cur becomes 0 again and the learning rate returns to α_max. When the learning rate becomes the set small value, the trained model tends to converge to the nearest local minimum; based on this finding, the model parameters can be saved whenever the learning rate decays to α_min. FIG. 4 shows the change curve of the learning rate α when α_max = 0.01, T = 750, and N = 4.
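The periodic cosine annealing schedule can be sketched directly; the function below follows equation (2), using this embodiment's values α_min = 10^-5 and α_max = 0.01 as defaults.

```python
import math

def annealed_lr(t_cur, T, alpha_min=1e-5, alpha_max=0.01):
    """Periodic cosine annealing of the learning rate.

    At t_cur = 0 the rate equals alpha_max; after T iterations it has
    decayed to alpha_min, ending one cycle, after which t_cur resets to 0.
    """
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1.0 + math.cos(math.pi * t_cur / T))
```

With T = 750, four such cycles cover the 3000 outer training iterations used in stage one.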

When training the meta-optimizer G(φ), the input is the gradient of the neural network f(θ). The gradient scale differs greatly across coordinates, which makes the meta-optimizer very difficult to train, so the input gradient must be normalized. The normalization method adopted in this embodiment is:

∇̂ = (log(|∇|)/p, sgn(∇))   if |∇| ≥ e^(−p)
∇̂ = (−1, e^p ∇)            otherwise        (3)

in the present embodiment, p is set to 10. p is a constant coefficient.

After all of the above initialization is completed, G can be trained in the following way to obtain N meta-optimizers {G(φ_i)} (i = 1, …, N) with different parameters, which together constitute a meta-optimizer system.

The training mode is as follows:

the method comprises the following steps: from DtrainRandomly acquiring B samples as a batch of training data DB

Step two: will DBThe input f (theta) calculates the loss l, and then the gradient of f (theta) is calculated by back propagation

Step three: will be provided withAnd (3) carrying out normalization processing by using a formula (3), and inputting G (phi) to obtain an output G.

Step four: update the parameter θ of f(θ) using θ ← θ + g.

Step five: repeat steps one to four 100 times. At this point, the training graph shown in FIG. 2 is formed.

Step six: update the parameter φ of G(φ) with the Adam optimizer based on the computational graph, i.e., φ ← Adam(φ, ∇_φ L(φ)).

Step seven: repeat steps one to six 3000 times; whenever the learning rate decays to α_min, save the parameter φ at that time.

Step eight: repeat steps one to seven 4 times to obtain the meta-optimizer system {G(φ_i)} (i = 1, …, 4).
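The overall shape of steps one to eight can be illustrated with a deliberately tiny stand-in: here f(θ) is a one-dimensional quadratic in place of the MLP, the "meta-optimizer" is a single learnable gain φ applied to the negative gradient in place of the two-layer LSTM, and the meta-update uses finite differences in place of back-propagating through the unrolled graph with Adam. Only the loop structure follows the text; every component is an illustrative assumption.

```python
def f_loss(theta):
    """Toy optimizee loss standing in for the MLP's training loss."""
    return (theta - 3.0) ** 2

def f_grad(theta):
    """Analytic gradient of the toy loss (stand-in for steps one and two)."""
    return 2.0 * (theta - 3.0)

def inner_rollout(phi, theta0=0.0, steps=20):
    """Steps three to five: unroll the optimizee under the meta-optimizer's
    update theta <- theta - phi * grad, accumulating the meta-loss
    (sum of per-step losses, i.e. c_t = 1 as in equation (1))."""
    theta, meta_loss = theta0, 0.0
    for _ in range(steps):
        theta -= phi * f_grad(theta)
        meta_loss += f_loss(theta)
    return meta_loss

def train_meta_optimizer(outer_steps=200, phi0=0.01, meta_lr=1e-6, eps=1e-4):
    """Steps six and seven: improve the meta-optimizer parameter phi by
    descending the meta-loss (finite-difference stand-in for Adam)."""
    phi = phi0
    for _ in range(outer_steps):
        g_phi = (inner_rollout(phi + eps) - inner_rollout(phi - eps)) / (2 * eps)
        phi -= meta_lr * g_phi
    return phi
```

Running `train_meta_optimizer()` moves φ away from its small initial value, and the rollout loss under the learned φ is lower than under φ_0; step eight would repeat this with restarts to obtain several meta-optimizers.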

Stage two: training other neural networks on other datasets using the trained meta-optimizer system

For example, when N = 4, after stage one is completed, the trained meta-optimizer system {G(φ_i)} (i = 1, …, 4) can be used to train other neural networks. A flow chart of this process is shown in fig. 5.

The Look-Ahead algorithm is described as follows:

to update candidates of N meta-optimizer outputs in a meta-optimizer systemMerging into final updates(t represents the updating of the t-th step), the present embodiment proposes a Look-Ahead fusion algorithm. The method can enable the meta-optimizer system to achieve faster convergence speed and lower loss. Let f be the model being trained and θ be the parameter. In the optimization process of the t step, firstly, the normalized network gradient is input into a meta-optimizer system comprising N meta-optimizers, and then N candidate updates are obtainedThereafter, the training is collectedAnother mini-batch data is collected and the following N iterative formulas are calculated:

where β ∈ [0,1) is the momentum coefficient, and v_t^i is a real number that, at the update of the t-th step, approximates the loss of f at time t + 1 under candidate g_t^i. Finally, Look-Ahead computes the final update g_t at step t by the following formula:

g_t = Σ_{i=1…N} softmax(−v_t)_i g_t^i   (5)

When β = 0, Look-Ahead decides the final update based only on l(f(θ_t + g_t^i)), i.e., it refers only to the loss-change information at the current moment. However, the losses computed on different mini-batches differ greatly and are highly random. To eliminate this randomness, a momentum term is added, corresponding to the case β ≠ 0 in equation (4). This embodiment sets β = 0.1.
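A compact sketch of the fusion step follows. The momentum recursion matches the description above; the softmax weighting of candidates by their smoothed look-ahead losses is an assumption (the text states only that the momentum-smoothed look-ahead losses decide the fusion), so treat it as one plausible instantiation rather than the patent's exact rule.

```python
import math

def look_ahead_fuse(candidates, ahead_losses, v_prev, beta=0.1):
    """Fuse N candidate updates into one final update.

    candidates   -- list of N update vectors (lists of floats)
    ahead_losses -- look-ahead loss l(f(theta_t + g_t^i)) per candidate,
                    evaluated on a freshly sampled mini-batch
    v_prev       -- momentum state v_{t-1}^i per candidate
    Returns (fused_update, v), where v is the new momentum state.
    """
    # Momentum-smoothed look-ahead loss per candidate (beta = 0 reduces to
    # using only the current mini-batch, as discussed in the text).
    v = [beta * vp + (1.0 - beta) * l for vp, l in zip(v_prev, ahead_losses)]
    # Assumed fusion rule: softmax over negative smoothed losses, so the
    # candidate that looks best one step ahead gets the largest weight.
    exps = [math.exp(-vi) for vi in v]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(candidates[0])
    fused = [sum(weights[i] * candidates[i][d] for i in range(len(candidates)))
             for d in range(dim)]
    return fused, v
```

With equal look-ahead losses the weights are uniform and the fused update is the plain average of the candidates; a clearly better candidate dominates the fusion.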

The detailed algorithm corresponding to fig. 5 is shown in algorithm 1.

For example: train a meta-optimizer system {G(φ_i)} (i = 1, …, 4) on the MNIST dataset and use it to optimize a neural network f′(θ′) on the Fashion-MNIST dataset.

First, to train the meta-optimizer system on the MNIST dataset, a neural network f(θ) is constructed as a three-layer perceptron with a hidden-layer size of 20 and a ReLU activation function. Training the meta-optimizer system comprises the following steps:

the method comprises the following steps: randomly acquiring 128 samples from MNIST data set as a batch of training data DB

Step two: will DBThe input f (theta) calculates the loss l, and then the gradient of f (theta) is calculated by back propagation

Step three: will be provided withAnd (3) carrying out normalization processing by using a formula (3), and inputting G (phi) to obtain an output G.

Step four: update the parameter θ of f(θ) using θ ← θ + g.

Step five: repeat steps one to four 100 times. At this point, the training graph shown in FIG. 2 is formed.

Step six: update the parameter φ of G(φ) with the Adam optimizer based on the computational graph, i.e., φ ← Adam(φ, ∇_φ L(φ)).

Step seven: repeat steps one to six 3000 times; whenever the learning rate decays to α_min, save the parameter φ at that time.

Step eight: repeat steps one to seven 4 times to obtain the meta-optimizer system {G(φ_i)} (i = 1, …, 4).

The resulting meta-optimizer system {G(φ_i)} (i = 1, …, 4) is then applied to train a neural network f′(θ′) on the Fashion-MNIST dataset. The specific process is as follows:

the method comprises the following steps: 128 training samples were collected from the fast-MNIST dataset as a batch of training data DB

Step two: will DBInput deviceCalculating the loss l, then back-propagating the calculationGradient of (2)

Step three: will be provided withUsing formula (3) as normalization processing and inputtingGet output 4 candidate updates

Step four: use the Look-Ahead algorithm to decide the final update g_t.

Step five: update the parameter θ′ of f′(θ′) using g_t.

Step six: repeat steps one to five 100 times to obtain a network f′(θ′) that performs well on the Fashion-MNIST dataset.
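The stage-two loop above can be condensed into a toy end-to-end sketch. The four "trained meta-optimizers" are stood in by fixed gradient gains, and the fusion simply keeps the candidate with the lowest look-ahead loss; both simplifications are assumptions made for brevity, not the patent's trained LSTMs or its momentum-weighted fusion.

```python
def apply_meta_optimizer_system(gains, theta0=0.0, steps=50):
    """Steps one to six on a toy optimizee: each stand-in 'meta-optimizer'
    proposes a candidate update, a look-ahead evaluation scores every
    candidate, and the best one is applied (greedy stand-in for fusion)."""
    def loss(t):   # toy optimizee standing in for f'(theta') on new data
        return (t - 2.0) ** 2
    def grad(t):
        return 2.0 * (t - 2.0)
    theta = theta0
    for _ in range(steps):
        candidates = [-k * grad(theta) for k in gains]   # N candidate updates
        ahead = [loss(theta + c) for c in candidates]    # look-ahead losses
        theta += candidates[ahead.index(min(ahead))]     # keep the best one
    return theta, loss(theta)
```

Because the look-ahead evaluation always selects the best-converging candidate, the loop drives the toy loss to essentially zero within a few dozen steps.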

Example two

The embodiment provides an image processing system based on a service robot cloud platform, which specifically comprises the following modules:

the image acquisition module is used for acquiring an image to be classified;

the image classification module is used for processing the image to be classified through an optimized image classification network model in the service robot cloud platform to obtain an image classification result;

the optimization process of the image classification network model comprises the following steps:

calculating the gradient of the image classification network model based on the image sample set, and normalizing the gradient;

processing the normalized gradient by a meta-optimizer system to obtain a set number of candidate updates;

fusing the set number of candidate updates into a final update by using a Look-Ahead algorithm;

and optimizing parameters of the image classification network model by utilizing the final update, and storing the parameters into the service robot cloud platform.

It should be noted that, each module of the present embodiment corresponds to each step of the first embodiment one to one, and the specific implementation process is the same, which will not be described herein again.

EXAMPLE III

The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the image processing method based on the service robot cloud platform as described in the first embodiment above.

Example four

The embodiment provides a service robot based on a cloud computing platform, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to implement the steps in the image processing method of the service robot based on the cloud computing platform according to the first embodiment.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
