Image processing method and training method, device, equipment and medium of model thereof

Document No.: 105805  Publication date: 2021-10-15

Reading note: This technology, "Image processing method and training method, device, equipment and medium of model thereof" (图像处理方法及其模型的训练方法和装置、设备、介质), was designed and created by 黄健文 (Huang Jianwen), 秦梓鹏 (Qin Zipeng), and 黄展鹏 (Huang Zhanpeng) on 2021-06-30. Abstract: The application discloses an image processing method and a training method, device, equipment and medium of a model thereof. The training method of the image processing model comprises: acquiring a plurality of sample images, wherein the plurality of sample images belong to at least two image types, each sample image corresponds to an annotation result, and the annotation result of a sample image comprises real information about the content of the sample image; processing each sample image with the image processing model to obtain a prediction result of each sample image, wherein the prediction result of a sample image comprises prediction information about the content of that sample image; and adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image. By training the model with sample images of multiple image types, this scheme improves the applicability of the image processing model.

1. A method for training an image processing model, comprising:

acquiring a plurality of sample images, wherein the plurality of sample images belong to at least two image types, each sample image corresponds to an annotation result, and the annotation result of the sample image comprises real information about the content of the sample image;

processing each sample image by using the image processing model respectively to obtain a prediction result of each sample image, wherein the prediction result of each sample image comprises prediction information about the content of each sample image;

and adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image.

2. The method of claim 1, wherein the image processing model comprises at least one of an object classification model and a saliency detection model;

in the case that the image processing model is the target classification model, the real information is a real category of a target in the sample image, and the prediction information includes a prediction category of the target in the sample image;

in a case where the image processing model is the saliency detection model, the true information is true position information about a saliency region in the sample image, and the prediction information includes predicted position information about a saliency region in the sample image.

3. The method according to claim 1 or 2, wherein the annotation result of the sample image further comprises a real image type of the sample image, and the prediction result of the sample image comprises a predicted image type of the sample image.

4. The method of claim 3, wherein the adjusting parameters of the image processing model based on the annotation result and the prediction result of each of the sample images comprises:

obtaining a first loss based on the real information and the prediction information, and obtaining a second loss based on the real image type and the predicted image type;

and adjusting parameters of the image processing model based on the first loss and the second loss.

5. The method of claim 4, wherein said adjusting parameters of said image processing model based on said first and second losses comprises:

obtaining a loss difference between the first loss and the second loss;

and adjusting parameters of the image processing model by using the loss difference and the second loss.

6. The method of claim 5, wherein the image processing model is a target classification model comprising a feature extraction network, a target classification network, and an image type classification network;

the processing each sample image by using the image processing model to obtain the prediction result of each sample image comprises:

carrying out feature extraction on the sample image by using the feature extraction network to obtain sample features;

performing target classification on the sample features by using the target classification network to obtain the prediction information of the sample image;

performing image type classification on the sample features by using the image type classification network to obtain a predicted image type of the sample image;

the adjusting the parameters of the image processing model by using the loss difference and the second loss includes:

adjusting parameters of the image type classification network by using the second loss;

and adjusting parameters of the feature extraction network and the target classification network by using the loss difference.

7. The method according to any one of claims 1 to 6, wherein the processing each sample image by using the image processing model to obtain a prediction result of each sample image, and the adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image, comprise:

selecting a number of sample images from the plurality of sample images as current sample images, wherein the image types to which the selected sample images belong comprise all image types of the plurality of sample images;

processing the current sample images by using the image processing model to obtain prediction results of the current sample images;

adjusting parameters of the image processing model based on the annotation results and the prediction results of the current sample images;

and repeating the step of selecting a number of sample images from the plurality of sample images as the current sample images and the subsequent steps until the image processing model meets a preset requirement.

8. The method of any one of claims 1 to 7, wherein the image types include one or more of: images obtained by shooting a target, hand-drawn images, and cartoon images.

9. An image processing method, comprising:

acquiring an image to be processed;

processing the image to be processed by using an image processing model to obtain prediction information about the content of the image to be processed, wherein the image processing model is obtained by training according to the method of any one of claims 1 to 8.

10. The method of claim 9, wherein the image processing model comprises at least one of an object classification model and a saliency detection model;

under the condition that the image processing model is the target classification model, the prediction information is the prediction category of the target in the image to be processed;

in the case where the image processing model is the saliency detection model, the prediction information is predicted position information about a saliency region in the image to be processed.

11. The method according to claim 10, wherein in the case that the image processing model is the target classification model, after the image to be processed is processed by the image processing model to obtain the prediction information about the content of the image to be processed, the method further comprises at least one of:

displaying the prediction category on an interface displaying the image to be processed;

selecting the audio matched with the prediction category for playing;

selecting a source bone matched with the prediction category, and transferring first animation driving data related to the source bone to a target bone to obtain second animation driving data of the target bone, wherein the target bone is obtained by extracting bones based on a target in the image to be processed.

12. The method according to claim 10, wherein in a case that the image processing model is the saliency detection model, after the processing the image to be processed by using the image processing model to obtain the prediction information about the content of the image to be processed, the method further comprises:

performing bone extraction on the salient region by using the predicted position information to obtain a target bone;

selecting a bone model for the target bone as a source bone;

migrating first animation driving data related to the source bone to the target bone to obtain second animation driving data of the target bone.

13. An apparatus for training an image processing model, comprising:

the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a plurality of sample images, the image types of the sample images are at least two, the sample images correspond to an annotation result, and the annotation result of the sample images comprises real information about the content of the sample images;

a first image processing module, used for processing each sample image by using the image processing model to obtain a prediction result of each sample image, wherein the prediction result of each sample image comprises prediction information about the content of the sample image;

and an adjusting module, used for adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image.

14. An image processing apparatus characterized by comprising:

a second acquisition module, used for acquiring an image to be processed;

a second image processing module, configured to process the image to be processed by using an image processing model to obtain prediction information about the content of the image to be processed, where the image processing model is trained by the method according to any one of claims 1 to 8.

15. An electronic device comprising a memory and a processor for executing program instructions stored in the memory to implement the method of any of claims 1 to 12.

16. A computer readable storage medium having stored thereon program instructions, which when executed by a processor implement the method of any of claims 1 to 12.

Technical Field

The present application relates to the field of image processing technologies, and in particular, to an image processing method and a training method, apparatus, device, and medium for a model thereof.

Background

Currently, with the progress of society, people place high demands on convenience in life and work. In the field of image processing technology, for example, an existing image processing model can often only process images of a single image type; once an image of another type is input into the image processing model, the accuracy of the image processing performed by the model drops considerably and cannot meet daily work requirements.

Disclosure of Invention

The application at least provides an image processing method and a training method, device, equipment and medium of a model thereof.

The application provides a training method of an image processing model, which comprises: acquiring a plurality of sample images, wherein the plurality of sample images belong to at least two image types, each sample image corresponds to an annotation result, and the annotation result of the sample image comprises real information about the content of the sample image; processing each sample image with the image processing model to obtain a prediction result of each sample image, wherein the prediction result of each sample image comprises prediction information about the content of that sample image; and adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image.

Therefore, the image processing model is trained by using the sample images of the multiple image types, so that the trained image processing model can perform image processing on the multiple types of images, and the applicability of the image processing model is improved.

Wherein the image processing model comprises at least one of a target classification model and a saliency detection model; under the condition that the image processing model is a target classification model, the real information is the real category of the target in the sample image, and the prediction information comprises the prediction category of the target in the sample image; in the case where the image processing model is a saliency detection model, the true information is true position information about a saliency region in the sample image, and the prediction information includes predicted position information about the saliency region in the sample image.

Therefore, the target classification model is trained by using the sample images of the multiple image types, so that the trained target classification model can perform target classification on the multiple types of images, and the applicability of the target classification model is improved. And training the significance detection model by using the sample images of the multiple image types, so that the significance detection model obtained by training can perform significance detection on the multiple types of images, and the applicability of the significance detection model is improved.

Wherein the annotation result of the sample image further comprises a real image type of the sample image, and the prediction result of the sample image comprises a predicted image type of the sample image.

Therefore, adjusting the parameters of the image processing model with both the real image type and the predicted image type of the sample image brings the features extracted from images that contain the same target but belong to different image types closer together in the feature space, so the adjusted image processing model can predict the content of images of different image types more accurately.

Wherein adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image comprises: obtaining a first loss based on the real information and the prediction information, and obtaining a second loss based on the real image type and the predicted image type; and adjusting parameters of the image processing model based on the first loss and the second loss.

Therefore, by adjusting the parameters of the image processing model using the first loss between the real information about the content of the sample image and the prediction information of the content thereof and the second loss based on the real image type and the predicted image type, the prediction accuracy of the trained image processing model can be improved.

Wherein adjusting parameters of the image processing model based on the first loss and the second loss comprises: obtaining a loss difference between the first loss and the second loss; and adjusting the parameters of the image processing model by using the loss difference and the second loss.

Therefore, by adjusting the parameters of the image processing model using the loss difference between the first loss and the second loss, the prediction accuracy of the trained image processing model can be improved.

Wherein the image processing model is a target classification model comprising a feature extraction network, a target classification network, and an image type classification network. Processing each sample image with the image processing model to obtain its prediction result comprises: performing feature extraction on the sample image with the feature extraction network to obtain sample features; performing target classification on the sample features with the target classification network to obtain the prediction information of the sample image; and performing image type classification on the sample features with the image type classification network to obtain the predicted image type of the sample image. Adjusting the parameters of the image processing model with the loss difference and the second loss comprises: adjusting the parameters of the image type classification network with the second loss; and adjusting the parameters of the feature extraction network and the target classification network with the loss difference.

Therefore, by adjusting the feature extraction network and the target classification network in the image processing model by using the loss difference, the predicted information about the content of the sample image obtained by the image processing model is more accurate, and by adjusting the parameters of the image type classification network by using the second loss, the accuracy of the image type classification network can be improved.

Wherein processing each sample image with the image processing model to obtain a prediction result of each sample image, and adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image, comprises: selecting a number of sample images from the plurality of sample images as current sample images, wherein the image types to which the selected sample images belong comprise all image types of the plurality of sample images; processing the current sample images with the image processing model to obtain prediction results of the current sample images; adjusting parameters of the image processing model based on the annotation results and prediction results of the current sample images; and repeating the step of selecting a number of sample images from the plurality of sample images as the current sample images and the subsequent steps until the image processing model meets a preset requirement.

Therefore, by selecting a number of sample images from the plurality of sample images as the current sample images and processing them with the image processing model, the model is trained in batches, and every batch of training is guaranteed to contain sample images of all image types, which improves the training effect of each batch on the image processing model.

Wherein the image types comprise one or more of: images obtained by shooting a target, hand-drawn images, and cartoon images.

Therefore, the sample images corresponding to the common image types are used for training the image processing model, so that the trained image processing model is more suitable for daily life or work.

The application provides an image processing method, which comprises: acquiring an image to be processed; and processing the image to be processed with an image processing model to obtain prediction information about the content of the image to be processed, wherein the image processing model is trained by the above training method of the image processing model.

Therefore, processing the image to be processed with an image processing model trained by the above training method improves the accuracy of image processing.

Wherein the image processing model comprises at least one of a target classification model and a saliency detection model; under the condition that the image processing model is a target classification model, the prediction information is the prediction category of a target in the image to be processed; in the case where the image processing model is a saliency detection model, the prediction information is prediction position information about a saliency region in the image to be processed.

Therefore, the target classification model obtained by training through the training method of the image processing model is used for processing the image to be processed, and the obtained prediction type of the target is more accurate. And/or the saliency detection model obtained by training through the image processing model training method is used for processing the image to be processed, and the obtained predicted position information about the saliency region is more accurate.

Wherein, in the case that the image processing model is the target classification model, after the image to be processed is processed by the image processing model to obtain the prediction information about the content of the image to be processed, the method further comprises at least one of the following steps: displaying the prediction category on an interface for displaying the image to be processed; selecting the audio matched with the prediction category for playing; selecting a source bone matched with the prediction category, and transferring first animation driving data related to the source bone to a target bone to obtain second animation driving data of the target bone, wherein the target bone is obtained by extracting the bone based on a target in the image to be processed.

Therefore, after the prediction information is obtained, at least one of the above steps is further executed, so that further intelligent operations are carried out on the basis of the classification result of the image processing model.

Wherein, in the case that the image processing model is the saliency detection model, after processing the image to be processed by using the image processing model to obtain the prediction information about the content of the image to be processed, the method further comprises: extracting bones from the salient region by using the predicted position information to obtain target bones; selecting a bone model for the target bone as a source bone; and migrating the first animation driving data related to the source bone to the target bone to obtain second animation driving data of the target bone.

Therefore, the saliency region output by the saliency detection model obtained by training through the image processing model training method is used, and the target skeleton is obtained by extracting the skeleton from the saliency region, so that the obtained target skeleton is more accurate.

The application provides a training device of an image processing model, comprising: a first acquisition module, used for acquiring a plurality of sample images, wherein the plurality of sample images belong to at least two image types, each sample image corresponds to an annotation result, and the annotation result of the sample image comprises real information about the content of the sample image; a first image processing module, used for processing each sample image with the image processing model to obtain a prediction result of each sample image, wherein the prediction result of each sample image comprises prediction information about the content of that sample image; and an adjusting module, used for adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image.

The application provides an image processing apparatus, comprising: a second acquisition module, used for acquiring an image to be processed; and a second image processing module, used for processing the image to be processed with the image processing model to obtain prediction information about the content of the image to be processed, wherein the image processing model is trained by the above training method of the image processing model.

The application provides an electronic device comprising a memory and a processor, wherein the processor is used for executing program instructions stored in the memory so as to realize the training method and/or the image processing method of the image processing model.

The present application provides a computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the above-described image processing model training method and/or image processing method.

According to the scheme, the image processing model is trained with sample images of multiple image types, so that the trained image processing model can process images of multiple types, which improves the applicability of the image processing model.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.

FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a training method for an image processing model according to the present application;

FIG. 2 is a schematic diagram illustrating an image obtained by shooting a target according to an embodiment of the training method for image processing models of the present application;

FIG. 3 is a schematic diagram of a hand drawing shown in an embodiment of a training method for an image processing model of the present application;

FIG. 4 is a schematic diagram of a cartoon diagram shown in an embodiment of the image processing model training method of the present application;

FIG. 5 is a schematic flowchart of an embodiment of an image processing method of the present application;

FIG. 6 is a first diagram illustrating a mapping relationship according to an embodiment of the image processing method of the present application;

FIG. 7 is a second diagram illustrating a mapping relationship according to an embodiment of the image processing method of the present application;

FIG. 8 is a third diagram illustrating a mapping relationship according to an embodiment of the image processing method of the present application;

FIG. 9 is a schematic diagram of an embodiment of an image processing model training apparatus according to the present application;

FIG. 10 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;

FIG. 11 is a schematic structural diagram of an embodiment of an electronic device of the present application;

FIG. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.

Detailed Description

The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.

In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.

The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. Further, the term "plurality" herein means two or more. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the group consisting of A, B, and C.

The present application is applicable to an apparatus having an image processing capability. Furthermore, the device may be provided with image capturing or video capturing functionality, e.g. the device may comprise means for capturing images or video, such as a camera. Or the device may obtain the required video stream or image from other devices by means of data transmission or data interaction with other devices, or access the required video stream or image from storage resources of other devices, and the like. For example, the device may perform data transmission or data interaction with other devices through bluetooth, a wireless network, and the like, and the communication method between the device and the other devices is not limited herein, and may include, but is not limited to, the above-mentioned cases. In one implementation, the device may include a cell phone, a tablet, an interactive screen, and the like, without limitation.

Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a training method for an image processing model according to the present application. Specifically, the method may include the steps of:

step S11: the method comprises the steps of obtaining a plurality of sample images, wherein the image types of the plurality of sample images are at least two, the sample images correspond to an annotation result, and the annotation result of the sample images comprises real information about the content of the sample images.

There are various ways to obtain the sample images. For example, a storage location of the sample images in the device executing the training method may be obtained and accessed to read the sample images, or the sample images may be obtained from other devices through transmission means such as Bluetooth or a wireless network.

The image type of the sample image can be determined according to the form in which the target is represented in the sample image, for example, whether the target is photographed, hand-drawn, or a model of various dimensions built with modeling software, and so on.

Step S12: processing each sample image with the image processing model to obtain a prediction result of each sample image, wherein the prediction result of each sample image comprises prediction information about the content of that sample image.

The image processing model includes, but is not limited to, a convolutional neural network model. For example, a MobileNetV3 network may be used as the image processing model to reduce the model size and speed up prediction, making the model better suited to devices with limited processing power, such as mobile terminals like mobile phones and tablet computers.
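As a rough illustration only, the sketch below shows how a lightweight classifier could be assembled on a MobileNetV3 backbone in PyTorch; the class count, head, and input size are assumptions, not the patent's exact architecture.

```python
# A minimal sketch, assuming torchvision >= 0.13, of a lightweight image
# processing model built on a MobileNetV3-small backbone.
import torch
import torch.nn as nn
from torchvision import models

class LightweightClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.mobilenet_v3_small(weights=None)
        self.features = backbone.features            # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(576, num_classes)      # 576 = MobileNetV3-small feature dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.pool(self.features(x)).flatten(1)   # pooled, flattened sample features
        return self.head(f)

logits = LightweightClassifier(num_classes=10)(torch.randn(1, 3, 224, 224))
```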

The image processing model can process all sample images simultaneously to obtain a batch of prediction results, or process the sample images at different times to obtain the prediction result corresponding to each sample image.

Step S13: adjusting parameters of the image processing model based on the annotation result and the prediction result of each sample image.

Here, the parameters of the image processing model can be adjusted according to the loss between the annotation result and the prediction result of each sample image.

According to the scheme, the image processing model is trained with sample images of multiple image types, so that the trained image processing model can process images of multiple types, which improves the applicability of the image processing model.

The image types include one or more of: images obtained by shooting a target, hand-drawn images, and cartoon images. Images obtained by shooting a target can be further divided into visible light images, infrared images, and the like. A hand-drawn image may be a picture drawn by hand on paper and then photographed, or a picture drawn in drawing software, such as a simple sketch of Mickey Mouse drawn by an artist on a drawing tablet. In the embodiments of the present disclosure, a hand-drawn image is further defined as an image with a preset background color and a preset foreground color whose foreground is formed by monochromatic lines, for example a white background with a foreground Mickey Mouse formed by black lines. A cartoon image may be a virtual image having a plurality of foreground colors.

Specifically, for a better understanding of the photographed image, the hand-drawn image, and the cartoon image described in the embodiments of the present disclosure, please refer to figs. 2-4: fig. 2 is a schematic diagram of an image obtained by shooting a target, fig. 3 is a schematic diagram of a hand-drawn image, and fig. 4 is a schematic diagram of a cartoon image, each shown in an embodiment of the training method of the image processing model of the present application. As shown, fig. 2 is a photograph of a real apple, fig. 3 is a sketch of an apple drawn on paper, and fig. 4 is a cartoon image of an apple. Training the image processing model with sample images of these common image types makes the trained model better suited to daily life and work. In the embodiments of the present disclosure, approximately ten thousand images obtained by shooting targets, approximately two thousand hand-drawn images, and approximately two thousand cartoon images are selected for training.

In some disclosed embodiments, processing each sample image with the image processing model to obtain its prediction result, and adjusting the parameters of the image processing model based on the annotation result and the prediction result of each sample image, proceeds as follows:

a number of sample images are selected from the plurality of sample images as a current sample image. Wherein, several means 1 and above. That is, one of the sample images may be selected as the current sample image from among the plurality of sample images, or two or more sample images may be selected as the current sample image. Further, the image types to which the selected sample images belong include all image types of the plurality of sample images. For example, when the image types of the plurality of sample images collectively include the three image types, the plurality of sample images selected from the plurality of sample images also include the three image types. The number of sample images of each image type may be the same or different. And then, processing the current sample image by using the image processing model to obtain a prediction result of the current sample image. Specifically, the current sample image is used as a batch, and the sample image of the batch is processed by using the image processing model to obtain a batch prediction result. And adjusting parameters of the image processing model based on the labeling result and the prediction result of the current sample image. Optionally, the parameters of the model may be adjusted by using the loss between each labeled result and the corresponding predicted result in one batch, which requires adjusting the parameters several times, or the parameters of the model may be adjusted by combining the loss between each labeled result and the corresponding predicted result, which only requires adjusting the parameters of the model once. And repeatedly executing the steps of selecting a plurality of sample images from the plurality of sample images as the current sample image and the subsequent steps until the image processing model meets the preset requirement. The preset requirement here may be the error magnitude between the prediction result given by the model and the labeling result. The specific error magnitude is determined according to actual requirements and is not specified here. Alternatively, several sample images selected at a time from among the plurality of sample images may be the same as the partial sample image selected last time. In other disclosed embodiments, the number of sample images selected from the plurality of sample images at a time is different. The method comprises the steps of selecting a plurality of sample images from a plurality of sample images as current sample images, processing the current sample images by using an image processing model to train the image processing model in batches, and ensuring that all image types of the sample images exist in each batch of training, so that the training effect of each batch on the image processing model can be improved.

In some disclosed embodiments, the sample image may or may not be pre-processed.

The preprocessing may be at least one of Gaussian blurring, cropping, and rotation. Gaussian blurring mainly reduces image noise and the level of detail by adjusting pixel color values according to a Gaussian curve, selectively blurring the image. Cropping refers to cutting the training sample image into images of different sizes, for example 1024 × 2048 or 512 × 512; these sizes are merely examples, other crop sizes are entirely possible in other embodiments, and no specific crop size is prescribed here. Rotation may turn the training sample image by 90°, 180°, or 270°. Of course, in other embodiments the preprocessing may also include adjusting the resolution, and so on.
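As an illustration, the three preprocessing operations named above could be combined in a torchvision pipeline like the following; the kernel size, application probability, and the 512 × 512 crop are assumptions drawn from the examples in the text.

```python
# An illustrative preprocessing pipeline: Gaussian blur, cropping, rotation.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.5),
    transforms.RandomCrop(512),                   # e.g. a 512x512 crop
    transforms.RandomChoice([
        transforms.RandomRotation((90, 90)),      # rotate by exactly 90 degrees
        transforms.RandomRotation((180, 180)),
        transforms.RandomRotation((270, 270)),
    ]),
    transforms.ToTensor(),
])
```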

In some disclosed embodiments, the image processing model includes at least one of a target classification model and a saliency detection model. The target classification model is used to classify the target in the sample image; for example, if the real information of the target in the sample image is an apple, the prediction information about the content of the sample image included in the prediction result of the target classification model may be the predicted classification of the target, for example apple. The saliency detection model can be used to detect the position of a salient region in the sample image.

And in the case that the image processing model is a target classification model, the real information is the real category of the target in the sample image, and the prediction information comprises the prediction category of the target in the sample image.

In the case where the image processing model is a saliency detection model, the real information is real position information about the saliency region in the sample image, and the prediction information includes predicted position information about the saliency region in the sample image.

The target classification model is trained by using the sample images of the multiple image types, so that the trained target classification model can perform target classification on the multiple types of images, and the applicability of the target classification model is improved. And training the significance detection model by using the sample images of the multiple image types, so that the significance detection model obtained by training can perform significance detection on the multiple types of images, and the applicability of the significance detection model is improved.

In some disclosed embodiments, when the image processing model is the saliency detection model, the sample images belonging to hand-drawn images are screened, and low-quality sample images are removed. Specifically, the degree to which the outline of the salient region is missing is determined from the real position information of the salient region in the sample image, and sample images whose outline completeness does not meet a preset requirement are removed. Screening the hand-drawn images so that only those whose salient-region outline is complete are kept makes the detection result of the trained saliency detection model more accurate.

In some disclosed embodiments, the annotation result of the sample image further includes the real image type of the sample image, and the prediction result of the sample image includes the predicted image type of the sample image. In that case, step S13 may specifically include: adjusting the parameters of the image processing model by using both the difference between the real information about the content of the sample image and the prediction information of that content, and the difference between the real image type and the predicted image type of the sample image. When the image processing model is the target classification model, the prediction result comprises the prediction category of the target and the predicted image type of the sample image; when the image processing model is the saliency detection model, the prediction result comprises the predicted position information about the saliency region and the predicted image type of the sample image. Adjusting the parameters with the real and predicted content information together with the real and predicted image types makes the features extracted from images that contain the same target but belong to different image types closer in the feature space, so the adjusted image processing model can predict the content of images of different image types more accurately.

In some disclosed embodiments, the manner of adjusting the parameters of the image processing model based on the annotation result and the prediction result of each sample image may be: a first loss is derived based on the real information and the prediction information, and a second loss is derived based on the real image type and the predicted image type. Then, parameters of the image processing model are adjusted based on the first loss and the second loss.

Specifically, the first loss is obtained based on the error between the real information and the prediction information, and the second loss is obtained based on the error between the real image type and the predicted image type. In practice, the first loss is determined by combining the errors between a batch of prediction information and the corresponding annotation information, and the second loss is determined by combining the errors between a batch of predicted image types and the corresponding real image types. The parameters of the image processing model are then adjusted by combining the first loss and the second loss. Using a first loss between the real information about the content of the sample image and the prediction information of that content, together with a second loss based on the real and predicted image types, improves the prediction accuracy of the trained image processing model.

Specifically, the first loss optimizes the parameters of the model so that the prediction information obtained by the image processing model comes closer to the real information, that is, the error between the two shrinks. The second loss adjusts the parameters of the model so that the feature vectors of images that represent the same object but belong to different image types become closer in the feature space, with the feature vectors of images of different image types all lying in a nearby region of that space. For example, with the trained image processing model, the feature vectors extracted from a hand-drawn image of an apple, a cartoon image of an apple, and a photograph of an apple are close to one another in the feature space.

In some disclosed embodiments, the manner of adjusting the parameters of the image processing model based on the first loss and the second loss may be: obtaining a loss difference between the first loss and the second loss, and then adjusting the parameters of the image processing model by using the loss difference and the second loss. Specifically, the loss difference is obtained by subtracting the second loss from the first loss. Adjusting the parameters of the image processing model by using the loss difference and the second loss may mean adjusting the parameters of the model with one of them first and then with the other. Adjusting the parameters of the image processing model using the loss difference between the first loss and the second loss improves the prediction accuracy of the trained image processing model.

In some disclosed embodiments, the image processing model is a target classification model. The target classification model comprises a feature extraction network, a target classification network and an image type classification network.

The manner of processing each sample image with the image processing model to obtain its prediction result may be as follows: feature extraction is performed on the sample image by the feature extraction network to obtain sample features; target classification is performed on the sample features by the target classification network to obtain the prediction information of the sample image; and image type classification is performed on the sample features by the image type classification network to obtain the predicted image type of the sample image. That is, the sample features extracted by the feature extraction network are input both into the target classification network, yielding the prediction information about the sample image, and into the image type classification network, yielding the predicted image type of the sample image. The manner of adjusting the parameters of the image processing model with the loss difference and the second loss may then be: the parameters of the image type classification network are adjusted with the second loss, and the parameters of the feature extraction network and the target classification network are adjusted with the loss difference; both adjustments are made in the forward direction. Adjusting the feature extraction network and the target classification network with the loss difference makes the prediction information about the content of the sample image more accurate, and adjusting the parameters of the image type classification network with the second loss improves the accuracy of the image type classification network.
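Putting the pieces together, the following PyTorch sketch shows one way the described scheme could be implemented, with the second loss updating only the image type classification network and the loss difference updating the feature extraction and target classification networks; all module and optimizer names are assumptions, not the patent's exact training code.

```python
# A minimal training-step sketch: second loss -> type classifier only;
# loss difference (first - second) -> feature extractor + target classifier.
import torch
import torch.nn.functional as F

def train_step(feature_net, target_head, type_head, opt_main, opt_type,
               images, target_labels, type_labels):
    feats = feature_net(images)                       # sample features
    first_loss = F.cross_entropy(target_head(feats), target_labels)
    second_loss = F.cross_entropy(type_head(feats), type_labels)
    loss_diff = first_loss - second_loss

    opt_type.zero_grad()
    opt_main.zero_grad()
    # Accumulate the second loss's gradients only into the type classifier.
    second_loss.backward(retain_graph=True, inputs=list(type_head.parameters()))
    # Accumulate the loss difference only into the feature extractor and head.
    main_params = list(feature_net.parameters()) + list(target_head.parameters())
    loss_diff.backward(inputs=main_params)
    opt_type.step()
    opt_main.step()
    return first_loss.item(), second_loss.item()
```

Minimizing the loss difference drives the feature extraction network to make image types hard to distinguish while keeping target classification accurate, which is what draws features of different image types together in the feature space.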

In some disclosed embodiments, the trained image processing model can be deployed on a mobile phone for image processing. The image processing method can also be applied to software for content sharing, video, live streaming, entertainment, education, games, and the like.

According to the scheme, the image processing model is trained with sample images of multiple image types, so that the trained image processing model can process images of multiple types, which improves the applicability of the image processing model.

For example, the training method of the image processing model may be performed by a terminal device or a server or other processing device, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and the like. In some possible implementations, the training method of the image processing model may be implemented by a processor calling computer readable instructions stored in a memory.

Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an embodiment of an image processing method according to the present application.

As shown in fig. 5, an image processing method provided by the embodiment of the present disclosure includes the following steps:

step S21: and acquiring an image to be processed.

There are various ways to acquire the image to be processed; for example, it may be captured by an imaging component of the device executing the image processing method, or acquired from another device through various communication means. The image type of the image to be processed may be any of multiple image types, for example one of an image obtained by shooting a target, a hand-drawn image, or a cartoon image. In some disclosed embodiments, the image to be processed may also be obtained from a video: for example, a video is input into the image processing model, and each video frame in the video is taken as an image to be processed.

Step S22: and processing the image to be processed by using the image processing model to obtain the prediction information about the content of the image to be processed, wherein the image processing model is obtained by training the training method of the image processing model.

The image processing model is trained using sample images of a plurality of image types. Specifically, the image to be processed is fed into the image processing model through its input, and the model processes it to obtain the prediction information of the image to be processed.

According to the scheme, the image processing model obtained by training through the training method of the image processing model is used for processing the image to be processed, so that the accuracy of image processing can be improved.

In some disclosed embodiments, the image processing model includes at least one of an object classification model and a saliency detection model. And in the case that the image processing model is a target classification model, the prediction information is the prediction category of the target in the image to be processed. In the case where the image processing model is a saliency detection model, the prediction information is prediction position information about a saliency region in the image to be processed. The target classification model obtained by training by using the training method of the image processing model is used for processing the image to be processed, so that the obtained prediction type of the target is more accurate. And/or the saliency detection model obtained by training through the image processing model training method is used for processing the image to be processed, and the obtained predicted position information about the saliency region is more accurate.

As described in the above embodiments, in the case where the image processing model is the target classification model, the image processing model includes a feature extraction network, a target classification network, and an image type classification network. After training is complete, the image type classification network may be removed or disconnected from the feature extraction network. That is, at inference time the embodiments of the present disclosure use only the feature extraction network and the target classification network, with the output of the feature extraction network serving as the input of the target classification network.
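A sketch of inference under this arrangement, reusing the hypothetical module names from the earlier sketches and assuming the feature extraction network ends with pooling and flattening so its output feeds the head directly:

```python
# Inference after training: the image type classification network is dropped;
# only the feature extraction and target classification networks remain.
import torch
import torch.nn as nn

def build_inference_model(feature_net: nn.Module, target_head: nn.Module) -> nn.Module:
    model = nn.Sequential(feature_net, target_head)  # type_head is not included
    model.eval()
    return model

@torch.no_grad()
def predict_category(model, preprocess, pil_image, class_names):
    x = preprocess(pil_image).unsqueeze(0)           # add a batch dimension
    return class_names[model(x).argmax(dim=1).item()]
```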

In some disclosed embodiments, in a case that the image processing model is the target classification model, after the image to be processed is processed by the image processing model to obtain the prediction information about the content of the image to be processed, the image processing method further includes at least one of the following steps:

1. and displaying the prediction category on an interface for displaying the image to be processed. The display modes include various modes, for example, the prediction category is marked on the image to be processed, so that the image to be processed and the corresponding prediction category are displayed on the display interface together, and of course, the image to be processed and the corresponding prediction category may also be displayed in different areas of the display interface respectively. In some disclosed embodiments, if there are two or more images to be processed, the corresponding images to be processed and the prediction categories thereof may be displayed in different areas of the display interface, or the images to be processed and the prediction categories thereof may be displayed in a page-turning manner. When the image to be processed is obtained from the video, judging whether the prediction information of the video frames of the continuous preset number of frames is the same, if so, determining that the prediction information is correct. If not, the prediction information is considered to be incorrect. The correct prediction information can be selected to be output, the wrong prediction information can not be output, and the correct and wrong prediction information can be selected to be correspondingly annotated and output. The preset number of frames may be 5 frames, 10 frames, etc., and may be determined according to a specific usage scenario.

2. Audio matched with the prediction category is selected and played. The matching relationship between audio and prediction categories can be established in advance, so that after the image processing model obtains the prediction category of the content in the image to be processed, the corresponding audio can be played.

3. A source bone matched with the prediction category is selected, and first animation driving data related to the source bone is migrated to a target bone to obtain second animation driving data of the target bone. The target skeleton is obtained by bone extraction based on the target in the image to be processed. The source bone matching the prediction category may be selected by matching the prediction category of the target against the categories of source bones stored in a database. For example, if the prediction category is cat, the database may be searched with "cat" as the keyword for a corresponding source skeleton.
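As an illustration of the consecutive-frame check from step 1 above, a minimal sketch follows; the frame threshold of 5 is one of the example values given in the text.

```python
# Accept a predicted category as correct only once it has repeated over a
# preset number of consecutive video frames.
def stable_prediction(frame_predictions, preset_frames=5):
    """Return the category once it has held for `preset_frames` frames, else None."""
    run, current = 0, None
    for pred in frame_predictions:
        run = run + 1 if pred == current else 1
        current = pred
        if run >= preset_frames:
            return current
    return None  # prediction never stabilized; treat it as incorrect
```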

In some disclosed embodiments, in a case that the image processing model is a saliency detection model, after the image to be processed is processed by the image processing model to obtain prediction information about the content of the image to be processed, the method further includes the following steps:

and (4) extracting the bones of the significant area by using the predicted position information to obtain the target bones. And selecting a bone model for the target bone as the source bone. Animation data is provided on the source skeleton. Then, the first animation driving data related to the source bone is migrated to the target bone, and second animation driving data of the target bone is obtained.

In some disclosed embodiments, extracting bones from the salient region by using the predicted position information to obtain the target bone may proceed as follows: the outline of the salient region is extracted to obtain the outline of the target, a three-dimensional mesh model of the target is generated from the outline, and finally the target skeleton is extracted from the three-dimensional mesh model.

The manner of obtaining the source bone may specifically be to classify the image to be processed to obtain the category of the target object, and to select a bone model matched with that category as the source bone. The target bone can be understood as the bone of the target object, i.e., the category of the target object can be understood as the category of the target bone. Specifically, embodiments of the present disclosure may employ prediction label mapping or dataset label mapping. The classification result of prediction label mapping is the predicted bone topology type of the target object, for example biped, quadruped, and so on; that is, prediction label mapping mainly predicts the skeletal topology characteristics of the target object. The classification result of dataset label mapping, by contrast, must give the specific category of the target object in the input image, for example cat, dog, panda, or koala.

The embodiments of the present disclosure select prediction label mapping. In a specific application, if the target object is a panda, the category given by prediction label mapping is quadruped, and a bone model matched with that category is selected as the source bone, for example the skeleton of a quadruped koala. Although pandas and koalas differ, they have approximately the same skeletal topology, so migrating the koala's motion-driving data to the panda still appears natural and reasonable. That is, even though prediction label mapping does not yield the exact category of the target object, the driving of the final target skeleton is not affected; and since the specific category need not be determined, the computational cost is reduced.
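For illustration, prediction label mapping can reduce to a small lookup from the predicted topology type to a stored source skeleton; the dictionary and rig names below are hypothetical.

```python
# Hypothetical lookup: predicted skeleton topology type -> stored source rig.
SOURCE_SKELETONS = {
    "biped": "biped_base_rig",
    "quadruped": "koala_rig",   # any quadruped rig can drive e.g. a panda
}

def select_source_skeleton(predicted_topology: str) -> str:
    # The exact species need not be known; the topology type is enough.
    return SOURCE_SKELETONS[predicted_topology]
```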

After a source bone matching the target bone is determined, bone node mapping is performed between the source bone and the target bone to obtain a node mapping relationship between them. In some disclosed embodiments, the node mapping relationship may be obtained as follows: determine the number of bone branches at which each node in the source and target bones is located, then map nodes in the source skeleton and the target skeleton in sequence, in descending order of the number of bone branches. The node located at the largest number of bone branches is generally called the root node, and the number of bone branches at a node is referred to as its degree. That is, mapping relationships are first constructed between nodes of higher degree, then between nodes of lower degree. Alternatively, the mapping may be performed under the principle of minimizing the error of the bone branch mapping: if the numbers of nodes in the source bone and the target bone differ, the lowest-cost, least many-to-one mapping is selected. For example, one-to-one joint matching may be performed in sequence, allowing many-to-one or skip mappings where necessary.
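A minimal sketch of the degree-based pairing follows, with bones represented as edge lists. This deliberately omits the refinements mentioned above (tie-breaking by branch-mapping error, lowest-cost many-to-one mapping when node counts differ):

```python
from typing import Dict, List, Tuple

def node_degrees(edges: List[Tuple[str, str]]) -> Dict[str, int]:
    """Degree of a node = number of bone branches meeting at it."""
    deg: Dict[str, int] = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def map_nodes_by_degree(source_edges: List[Tuple[str, str]],
                        target_edges: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Pair source and target nodes in descending order of degree."""
    src_deg = node_degrees(source_edges)
    tgt_deg = node_degrees(target_edges)
    src = sorted(src_deg, key=src_deg.get, reverse=True)  # root node first
    tgt = sorted(tgt_deg, key=tgt_deg.get, reverse=True)
    return list(zip(src, tgt))
```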

Selecting the source skeleton from the skeleton models matching the category of the target object is convenient and fast. Mapping nodes in the source and target bones in sequence, in descending order of the number of bone branches, improves the accuracy of the mapping.

In some disclosed embodiments, the final target bone is consistent with the node topology of the final source bone; alternatively, the nodes of the final target bone map one-to-one onto nodes of the final source bone. That is, the node topologies of the final target bone and the final source bone may take two forms: either the two topologies are completely consistent, or every node in the final target bone has a corresponding node in the final source bone while some nodes in the final source bone have no mapping relationship. Either way, it must be ensured that, after animation migration, every node of the final target skeleton has corresponding animation driving data.

After the node mapping relationship between the two bones is obtained, topology alignment and node alignment are carried out.

The manner of performing topology alignment may include at least one of the following:

First, when multiple nodes of one bone map to the same node of the other (between the source bone and the target bone), update the node topology of one of the bones so that, after updating, the nodes of the two bones map one-to-one. Adjusting such many-to-one mappings into one-to-one mappings by updating the node topology reduces unreasonable results when the final target skeleton is subsequently driven by the animation.

Updating the node topology of one of the bones splits into several cases. In the first case, the multiple nodes lie on the same skeletal branch, and the first bone containing them is updated; one of the first bone and the second bone is the source bone and the other is the target bone. Updating the first bone adjusts the many-to-one mapping into a one-to-one mapping, again reducing unreasonable results when the final target skeleton is driven. Optionally, the first bone is updated by merging the multiple nodes into a single first node; the first node retains the mapping relationships of the merged nodes, and its position is the average of their positions.
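A sketch of the merge in this first case, with node positions as 3-vectors and the mapping stored in a dict; the data structures are illustrative assumptions:

```python
import numpy as np

def merge_into_first_node(positions: dict, mapping: dict, nodes: list,
                          merged_name: str = "first_node") -> str:
    """Merge several nodes of the first bone into one first node.

    The first node's position is the average of the merged nodes' positions,
    and it retains their mapping relationships.
    """
    positions[merged_name] = np.mean([positions[n] for n in nodes], axis=0)
    mapping[merged_name] = [mapping.pop(n) for n in nodes]  # keep old mappings
    for n in nodes:
        del positions[n]
    return merged_name
```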

Referring to fig. 6, fig. 6 is a first schematic diagram illustrating a mapping relationship in an embodiment of the image processing method of the present application. As shown in fig. 6, the second and third nodes in the target bone both map to the second node in the source bone. In this case, the second and third nodes in the target bone are merged into one first node, whose position is the average of their positions. When the first bone is the source bone, its nodes carry animation driving data, so after merging, the animation driving data of the first node must be obtained by combining the driving data of all merged nodes. Animation driving data is generally represented as a matrix, and combining matrices corresponds to matrix multiplication; multiplying the driving data together yields the animation driving data of the first node. In the second case, the multiple nodes lie on different skeletal branches, and the second bone, which does not contain them, is updated; again, one of the first and second bones is the source bone and the other is the target bone. Optionally, find the second node in the first bone at which the skeletal branches containing the multiple nodes converge, for example by traversing parent nodes in turn, and find the third node in the second bone that maps to that second node; then, following the node topology of the multiple nodes, add at least one skeletal branch at the third node. In embodiments of the present disclosure, the parent node of a node is the node adjacent to it on a skeletal branch and closer to the root node. The multiple nodes then map one-to-one onto nodes of the newly added and original skeletal branches at the third node. The newly added skeletal branch may be a duplicate of the original one; the copied content includes the animation driving data and the transformation relationships between each node and its parent. For example, if the original skeletal branch includes three nodes, the newly added branch also includes three nodes, whose animation driving data is copied from the corresponding nodes of the original branch.
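The matrix combination mentioned for the source-bone case can be sketched as follows; representing each node's driving data as a 4x4 homogeneous transform is an assumption for illustration, and the multiplication order would in practice follow the bone hierarchy:

```python
import numpy as np

def merge_animation_data(transforms: list) -> np.ndarray:
    """Combine the driving data of merged source nodes by matrix multiplication."""
    merged = np.eye(4)
    for m in transforms:
        merged = merged @ m  # matrix product = composition of transforms
    return merged
```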

Referring to fig. 7, fig. 7 is a second schematic diagram illustrating a mapping relationship in an embodiment of the image processing method of the present application. As shown in fig. 7, the node topology on the left is that of the source bone and the one on the right is that of the target bone. In fig. 7, the first node of the target bone maps to the first node of the source bone, and the second node of the target bone maps to the second node of the source bone. Below its second node, the target bone has two branches, left and right: the first nodes of the left and right branches both map to the third node of the source bone, and their second nodes both map to the fourth node of the source bone. Thus two target-bone nodes on different branches map to the third source node, and two target-bone nodes on different branches map to the fourth source node. The two branches converge at the second node of the target bone; the second node of the source bone, which maps to it, is found, and a skeletal branch is added at that second node of the source bone according to the node topology of the corresponding target-bone nodes. The newly added skeletal branch contains two nodes. After this, every node in the target skeleton corresponds one-to-one to a node in the source skeleton. In this way, the node topology of the first bone is preserved as much as possible while achieving a one-to-one node mapping.

Second, when a bone contains nodes that have no mapping relationship, update the node topology of the bone containing them; the two bones comprise the source bone and the target bone, and after updating, the nodes of the two bones map one-to-one. Updating the node topology of the bone containing unmapped nodes reduces the number of unmapped nodes, yields a one-to-one node mapping after updating, and reduces unreasonable results when the final target skeleton is subsequently driven by the animation. Optionally, a node without a mapping relationship is merged into an adjacent node that does have one, where the adjacent node is the parent or child of the unmapped node in the bone. In embodiments of the present disclosure, nodes without mapping relationships are merged into their parent nodes.
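A sketch of folding an unmapped node into its parent, under the same illustrative data structures as above; for a source bone, the node's driving data is composed into the parent's by the matrix product:

```python
def merge_unmapped_to_parent(node: str, parent_of: dict, anim: dict) -> str:
    """Fold a node with no mapping relationship into its parent node."""
    parent = parent_of[node]
    if node in anim:                                   # source-bone case
        anim[parent] = anim[parent] @ anim.pop(node)   # merge driving data
    del parent_of[node]                                # drop the merged node
    return parent
```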

Referring to fig. 8, fig. 8 is a third schematic diagram illustrating a mapping relationship in an embodiment of the image processing method of the present application. As shown in fig. 8, the first node of the target bone maps to the first node of the source bone, the second node of the target bone maps to the third node of the source bone, and the third node of the target bone maps to the fourth node of the source bone. The second node of the source bone has no mapping relationship and may be merged into its parent node, i.e. the first node of the source bone. Of course, merging nodes in the source bone is accompanied by merging their animation driving data, which is not described again here.

Node alignment is performed mainly to determine the first pose transformation relationship between the source bone and the target bone.

Specifically, in order from the root source node to the leaf source nodes, each source node in the final source skeleton is aligned with the target node it maps to in the final target skeleton, yielding a first pose transformation relationship between each source node and its mapped target node. As described above, the root node is the node located at the largest number of skeletal branches; the root source node is the root node of the final source skeleton and, similarly, the root target node is the root node of the final target skeleton. The final source bone and final target bone are the source and target bones after topology alignment. A leaf node is a node with a parent but no children; leaf source nodes are leaf nodes of the final source skeleton, and leaf target nodes are leaf nodes of the final target skeleton.

The first pose transformation relationship is the transformation between a source node and its mapped target node in a first coordinate system. The offset between the root source node of the final source skeleton and the root target node of the final target skeleton can be obtained by translating both to the origin of the first coordinate system. Specifically, for each source node in the final source skeleton, the offset required to align it with its mapped target node is obtained; the offset comprises a translation component and a rotation component, and the translation component generally includes a scaling component. The first pose transformation relationship of the source node is then obtained based on the offsets associated with it.

The first pose transformation relationship of a source node is obtained from the offsets corresponding to the source node and to its superior nodes. The superior nodes of a source node are its first parent node in the final source skeleton, the root node, and the nodes between the first parent node and the root node. The offsets may be represented as matrices; specifically, the first pose transformation relationship of the source node may be obtained by matrix-multiplying the offsets corresponding to the source node and its superior nodes.
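The chained offset multiplication can be sketched as follows, with each offset stored as a 4x4 matrix and the root identified by having no parent; composing root-first is an assumption about convention:

```python
import numpy as np

def first_pose_transform(node: str, parent_of: dict, offset: dict) -> np.ndarray:
    """Accumulate alignment offsets from the root down to `node`."""
    chain = []
    cur = node
    while cur is not None:           # collect node, parent, ..., root
        chain.append(offset[cur])
        cur = parent_of.get(cur)
    transform = np.eye(4)
    for m in reversed(chain):        # compose from the root downward
        transform = transform @ m
    return transform
```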

If the topological structure of the source skeleton changes, the animation driving data on the source skeleton changes correspondingly. For example, if two source nodes in the source skeleton are merged, the animation driving data corresponding to those nodes is also merged.

Therefore, the animation data on the source skeleton can be migrated to the target skeleton to drive the target in the image to be processed to move.

After the prediction information is obtained, at least one of the above steps is executed, so that further intelligent operations are carried out based on the classification result of the image processing model.

The salient region is output by a saliency detection model trained with the above training method for an image processing model, and the target skeleton is obtained by extracting the skeleton from this salient region, so the obtained target skeleton is more accurate.

According to the scheme, the image processing model obtained by training through the training method of the image processing model is used for processing the image to be processed, so that the accuracy of image processing can be improved.

The execution body of the image processing method may be an image processing apparatus; for example, the image processing method may be executed by a terminal device, a server, or other processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image processing method may be implemented by a processor calling computer-readable instructions stored in a memory.

Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a training apparatus for an image processing model according to the present application. The training apparatus 30 for image processing model includes a first obtaining module 31, a first image processing module 32, and an adjusting module 33. The first obtaining module 31 is configured to obtain a plurality of sample images, where the image types of the plurality of sample images are at least two, the sample images correspond to an annotation result, and the annotation result of the sample image includes real information about the content of the sample image; a first image processing module 32, configured to process each sample image by using an image processing model, respectively, to obtain a prediction result of each sample image, where the prediction result of the sample image includes prediction information about the content of the sample image; and an adjusting module 33, configured to adjust parameters of the image processing model based on the labeling result and the prediction result of each sample image.

According to the scheme, the image processing model is trained by using the sample images of the multiple image types, so that the trained image processing model can perform image processing on the multiple images, and the applicability of the image processing model is improved.

In some disclosed embodiments, the image processing model includes at least one of a target classification model and a saliency detection model; under the condition that the image processing model is a target classification model, the real information is the real category of the target in the sample image, and the prediction information comprises the prediction category of the target in the sample image; in the case where the image processing model is a saliency detection model, the true information is true position information about a saliency region in the sample image, and the prediction information includes predicted position information about the saliency region in the sample image.

According to the scheme, the target classification model is trained using sample images of multiple image types, so that the trained target classification model can perform target classification on multiple kinds of images, improving its applicability. Likewise, training the saliency detection model using sample images of multiple image types enables it to perform saliency detection on multiple kinds of images, improving its applicability.

In some disclosed embodiments, the annotation information of the sample image further includes a true image type of the sample image, and the prediction result of the sample image includes a predicted image type of the sample image.

According to the scheme, the parameters of the image processing model are adjusted by combining the real image type and the predicted image type of the sample image, so that features extracted from images that contain the same target but belong to different image types lie closer together in feature space, and the adjusted image processing model predicts the content of images of different types more accurately.

In some disclosed embodiments, the adjusting module 33 adjusts parameters of the image processing model based on the labeling result and the prediction result of each sample image, including: obtaining a first loss based on the real information and the prediction information, and obtaining a second loss based on the real image type and the predicted image type; based on the first loss and the second loss, parameters of the image processing model are adjusted.

In the above-described aspect, by adjusting the parameters of the image processing model using the first loss between the real information about the content of the sample image and the prediction information of the content thereof and the second loss based on the real image type and the predicted image type, the prediction accuracy of the trained image processing model can be improved.

In some disclosed embodiments, the adjusting module 33 adjusts parameters of the image processing model based on the first loss and the second loss, including: obtaining a loss difference between the first loss and the second loss; and adjusting the parameters of the image processing model by using the loss difference and the second loss.
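A minimal PyTorch-style sketch of this loss combination, assuming both losses are cross-entropy; the loss functions are an assumption, since the disclosure does not fix a particular form:

```python
import torch
import torch.nn.functional as F

def training_losses(pred_info, true_info, pred_type, true_type):
    """First loss: content prediction error; second loss: image type error."""
    first_loss = F.cross_entropy(pred_info, true_info)
    second_loss = F.cross_entropy(pred_type, true_type)
    loss_diff = first_loss - second_loss   # used to update feature extractor
    return loss_diff, second_loss          # second loss updates the type head
```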

According to the scheme, the prediction accuracy of the trained image processing model can be improved by adjusting the parameters of the image processing model by using the loss difference between the first loss and the second loss.

In some disclosed embodiments, the image processing model is a target classification model, the target classification model including a feature extraction network, a target classification network, and an image type classification network. The first image processing module 32 processes each sample image using the image processing model to obtain its prediction result as follows: feature extraction is performed on the sample image using the feature extraction network to obtain sample features; target classification is performed on the sample features using the target classification network to obtain the prediction information of the sample image; and image type classification is performed on the sample features using the image type classification network to obtain the predicted image type of the sample image. The adjusting module 33 adjusts the parameters of the image processing model using the loss difference and the second loss as follows: the parameters of the image type classification network are adjusted using the second loss, and the parameters of the feature extraction network and the target classification network are adjusted using the loss difference.
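The three-network layout and the split parameter update can be sketched as below; layer sizes, optimizer, and learning rates are arbitrary placeholders:

```python
import torch
import torch.nn as nn

class TargetClassificationModel(nn.Module):
    """Feature extraction network plus two heads: target classification
    and image type classification."""
    def __init__(self, num_classes: int, num_image_types: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.target_head = nn.Linear(32, num_classes)
        self.type_head = nn.Linear(32, num_image_types)

    def forward(self, x):
        f = self.features(x)                       # sample features
        return self.target_head(f), self.type_head(f)

model = TargetClassificationModel(num_classes=10, num_image_types=3)
# The second loss only updates the image type classification network...
opt_type = torch.optim.SGD(model.type_head.parameters(), lr=1e-2)
# ...while the loss difference updates the feature extraction and
# target classification networks.
opt_main = torch.optim.SGD(
    list(model.features.parameters()) + list(model.target_head.parameters()),
    lr=1e-2)
```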

According to the scheme, the loss difference is used for adjusting the feature extraction network and the target classification network in the image processing model, so that the prediction information about the content of the sample image obtained by the image processing model is more accurate, and the second loss is used for adjusting the parameters of the image type classification network, so that the accuracy of the image type classification network can be improved.

In some disclosed embodiments, the first image processing module 32 processes each sample image using the image processing model to obtain its prediction result, and the parameters of the image processing model are adjusted based on the labeling result and the prediction result of each sample image, as follows: several sample images are selected from the plurality of sample images as the current sample images, where the image types of the selected sample images cover all image types present among the plurality of sample images; the current sample images are processed using the image processing model to obtain their prediction results; the adjusting module 33 adjusts the parameters of the image processing model based on the labeling results and prediction results of the current sample images; and the step of selecting several sample images as the current sample images, together with the subsequent steps, is repeated until the image processing model meets a preset requirement. A sampling sketch is given after the next paragraph.
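The following is a sketch of batch selection that guarantees every image type appears in each batch; the record layout and sampling policy are illustrative assumptions (the fill step may repeat an already chosen record):

```python
import random
from collections import defaultdict

def sample_batch(samples: list, batch_size: int) -> list:
    """Each record is (image, label, image_type); draw one sample per type
    first, then fill the rest of the batch at random."""
    by_type = defaultdict(list)
    for s in samples:
        by_type[s[2]].append(s)
    batch = [random.choice(group) for group in by_type.values()]
    remaining = max(batch_size - len(batch), 0)
    batch += random.sample(samples, remaining)
    return batch
```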

According to the scheme, several sample images are selected from the plurality of sample images as the current sample images and processed by the image processing model, so that the model is trained in batches; ensuring that sample images of all image types are present in each training batch improves the training effect of each batch.

In some disclosed embodiments, the image types include one or more of: images captured of a real subject, hand-drawn images, and cartoon images.

According to the scheme, the sample images corresponding to the common image types are used for training the image processing model, so that the image processing model obtained through training is more suitable for daily life or work.

According to the scheme, the image processing model is trained by using the sample images of the multiple image types, so that the trained image processing model can perform image processing on the multiple images, and the applicability of the image processing model is improved.

Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application. The image processing apparatus 40 includes a second acquisition module 41 and a second image processing module 42. A second obtaining module 41, configured to obtain an image to be processed; and the second image processing module 42 is configured to process the image to be processed by using an image processing model, so as to obtain prediction information about the content of the image to be processed, where the image processing model is obtained by training the image processing model by using the above-mentioned training method.

According to the scheme, the image processing model obtained by training through the training method of the image processing model is used for processing the image to be processed, so that the accuracy of image processing can be improved.

In some disclosed embodiments, the image processing model includes at least one of a target classification model and a saliency detection model; under the condition that the image processing model is a target classification model, the prediction information is the prediction category of a target in the image to be processed; in the case where the image processing model is a saliency detection model, the prediction information is prediction position information about a saliency region in the image to be processed.

According to the scheme, the target classification model obtained by training through the image processing model training method is used for processing the image to be processed, and the obtained prediction type of the target is more accurate. And/or the saliency detection model obtained by training through the image processing model training method is used for processing the image to be processed, and the obtained predicted position information about the saliency region is more accurate.

In some disclosed embodiments, in the case that the image processing model is the target classification model, after the image to be processed is processed by the image processing model to obtain the prediction information about the content of the image to be processed, the second image processing module 42 is further configured to perform at least one of the following steps: displaying the prediction category on an interface for displaying the image to be processed; selecting the audio matched with the prediction category for playing; selecting a source bone matched with the prediction category, and transferring first animation driving data related to the source bone to a target bone to obtain second animation driving data of the target bone, wherein the target bone is obtained by extracting the bone based on a target in the image to be processed.

According to the scheme, executing at least one of the above steps after the prediction information is obtained enables further intelligent operations based on the classification result of the image processing model.

In some disclosed embodiments, in the case that the image processing model is a saliency detection model, the second image processing module 42, after processing the image to be processed by using the image processing model to obtain the prediction information about the content of the image to be processed, is further configured to: extracting bones from the salient region by using the predicted position information to obtain target bones; selecting a bone model for the target bone as a source bone; and migrating the first animation driving data related to the source bone to the target bone to obtain second animation driving data of the target bone.

According to the scheme, the salient region is output by a saliency detection model trained with the above training method for an image processing model, and the target skeleton is obtained by extracting the skeleton from this salient region, so that the obtained target skeleton is more accurate.

Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of an electronic device according to the present application. The electronic device 50 comprises a memory 51 and a processor 52, the processor 52 being configured to execute program instructions stored in the memory 51 to implement the steps in any of the above-described embodiments of the training method of the image processing model and/or the steps in the embodiments of the image processing method. In one particular implementation scenario, electronic device 50 may include, but is not limited to: medical equipment, a microcomputer, a desktop computer, a server, and the electronic equipment 50 may also include mobile equipment such as a notebook computer, a tablet computer, and the like, which is not limited herein.

In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps in any of the above-described embodiments of the training method of the image processing model. Processor 52 may also be referred to as a CPU (Central Processing Unit). Processor 52 may be an integrated circuit chip having signal processing capabilities. The Processor 52 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 52 may be commonly implemented by an integrated circuit chip.

According to the scheme, the image processing model is trained by using the sample images of the multiple image types, so that the trained image processing model can perform image processing on the multiple images, and the applicability of the image processing model is improved.

Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 60 stores program instructions 61 executable by the processor, the program instructions 61 for implementing the steps in any of the above-described embodiments of the training method of the image processing model and/or the steps in the embodiments of the image processing method.

According to the scheme, the image processing model is trained by using the sample images of the multiple image types, so that the trained image processing model can perform image processing on the multiple images, and the applicability of the image processing model is improved.

In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.

The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.

In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
