Method, device and equipment for constructing image recognition model and storage medium

Document No.: 191458 · Publication date: 2021-11-02

Reading note: This technique, "Method, device and equipment for constructing image recognition model and storage medium" (图像识别模型的构建方法、装置、设备以及存储介质), was designed and created by 张婉平 (Zhang Wanping) on 2021-07-28. Main content: The disclosure provides a method, device, equipment and storage medium for constructing an image recognition model, relating to the technical field of artificial intelligence, in particular computer vision and deep learning, and applicable to scenes such as face recognition. The method comprises: acquiring an input image set; performing joint training on an initial super-resolution model and an initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model; and combining the trained super-resolution model and recognition model in a cascading manner to obtain the image recognition model. The construction method improves the robustness of the image recognition model to low-quality image data.

1. A method for constructing an image recognition model, comprising the following steps:

acquiring an input image set;

performing joint training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model;

and combining the trained super-resolution model and the recognition model in a cascading manner to obtain an image recognition model.

2. The method of claim 1, wherein the jointly training an initial super-resolution model and an initial recognition model with the set of input images comprises:

calculating a loss function of an initial super-resolution model by using the input image set and a restored image set corresponding to the input image set, and updating parameters of the initial super-resolution model by adopting a gradient descent method;

calculating a loss function of an initial recognition model based on distances between features of the images in the input image set and the restored image set, and updating parameters of the initial recognition model by adopting a gradient descent method.

3. The method of claim 2, wherein the calculating a loss function of an initial super-resolution model by using the input image set and a restored image set corresponding to the input image set comprises:

down-sampling the images in the input image set to obtain a down-sampled image set;

restoring the images in the down-sampling image set by using the initial super-resolution model to obtain a restored image set;

calculating a reconstruction loss of the initial super-resolution model based on the set of input images and the set of restored images.

4. The method of claim 3, wherein the computing a loss function for an initial recognition model based on distances between features of images in the input set of images and the restored set of images comprises:

merging the input image set, the downsampling image set and the restored image set to obtain a target image set;

extracting features of images in the target image set;

calculating distances between features of images in the set of target images;

calculating a binary loss function of the initial recognition model based on the distances.

5. The method of any one of claims 2-4, wherein the gradient descent method is a stochastic gradient descent method.

6. The method of claim 1, wherein said combining the trained super-resolution model and recognition model in a cascaded manner comprises:

and connecting the output end of the part before the loss function in the trained super-resolution model to the input end of the recognition model.

7. An image recognition method, comprising:

acquiring an image to be recognized;

inputting the image to be recognized into an image recognition model, and outputting a recognition result corresponding to the image to be recognized, wherein the image recognition model is obtained by the method for constructing the image recognition model according to any one of claims 1 to 6.

8. An apparatus for constructing an image recognition model, comprising:

a first acquisition module configured to acquire a set of input images;

the training module is configured to perform joint training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model;

and the combination module is configured to combine the trained super-resolution model and the recognition model in a cascading manner to obtain an image recognition model.

9. The apparatus of claim 8, wherein the training module comprises:

a first updating submodule configured to calculate a loss function of an initial super-resolution model by using the input image set and a restored image set corresponding to the input image set, and update parameters of the initial super-resolution model by adopting a gradient descent method;

a second updating sub-module configured to calculate a loss function of an initial recognition model based on distances between features of images in the input image set and the restored image set, and update parameters of the initial recognition model by adopting a gradient descent method.

10. The apparatus of claim 9, wherein the first update submodule comprises:

a down-sampling unit configured to down-sample images in the input image set to obtain a down-sampled image set;

a restoring unit configured to restore the images in the downsampled image set by using the initial super-resolution model to obtain a restored image set;

a first calculation unit configured to calculate a reconstruction loss of the initial super-resolution model based on the set of input images and the set of restored images.

11. The apparatus of claim 10, wherein the second update submodule comprises:

a merging unit configured to merge the input image set, the downsampled image set, and the restored image set to obtain a target image set;

an extraction unit configured to extract features of images in the target image set;

a second calculation unit configured to calculate distances between features of images in the target image set;

a third calculation unit configured to calculate a binary loss function of the initial recognition model based on the distances.

12. The apparatus of claim 8, wherein the combining module comprises:

and the connecting sub-module is configured to connect the output end of the part before the loss function in the trained super-resolution model to the input end of the recognition model.

13. An image recognition apparatus comprising:

a second acquisition module configured to acquire an image to be recognized;

an output module configured to input the image to be recognized into an image recognition model, and output a recognition result corresponding to the image to be recognized, wherein the image recognition model is obtained by the image recognition model construction method according to any one of claims 1 to 6.

14. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

15. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.

16. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Technical Field

The present disclosure relates to the field of artificial intelligence technologies, and in particular, to the field of computer vision and deep learning technologies, and more particularly, to a method, an apparatus, a device, and a storage medium for constructing an image recognition model, which can be applied to scenes such as face recognition.

Background

Face recognition is one of the earliest and most widely deployed technologies in computer vision, and has been applied extensively, particularly in the fields of security and mobile payment. With the wide adoption of deep learning in face recognition, recognition accuracy based on deep learning has improved greatly.

However, in more general unconstrained natural scenes, the face images captured from a camera's video stream are often of poor quality, for example blurred or containing only a small face area, so that the recognition pass rate is low or the false recognition rate is high.

Disclosure of Invention

The disclosure provides a method, a device, equipment and a storage medium for constructing an image recognition model.

According to a first aspect of the present disclosure, there is provided a method for constructing an image recognition model, including: acquiring an input image set; performing joint training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model; and combining the trained super-resolution model and the recognition model in a cascading manner to obtain the image recognition model.

According to a second aspect of the present disclosure, there is provided an image recognition method including: acquiring an image to be recognized; inputting the image to be recognized into an image recognition model, and outputting a recognition result corresponding to the image to be recognized, wherein the image recognition model is obtained by the method described in any one of the implementations of the first aspect.

According to a third aspect of the present disclosure, there is provided an apparatus for constructing an image recognition model, including: a first acquisition module configured to acquire a set of input images; the training module is configured to perform joint training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model; and the combination module is configured to combine the trained super-resolution model and the recognition model in a cascading manner to obtain the image recognition model.

According to a fourth aspect of the present disclosure, there is provided an image recognition apparatus comprising: a second acquisition module configured to acquire an image to be recognized; and the output module is configured to input the image to be recognized into an image recognition model and output a recognition result corresponding to the image to be recognized, wherein the image recognition model is obtained by the method described in any one of the implementation manners of the first aspect.

According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect or the second aspect.

According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method as described in any one of the implementation manners of the first or second aspect.

According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in any of the implementations of the first or second aspect.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;

FIG. 2 is a flow diagram of one embodiment of a method of constructing an image recognition model according to the present disclosure;

FIG. 3 is a schematic diagram of an application scenario of a method of constructing an image recognition model according to the present disclosure;

FIG. 4 is a flow diagram of another embodiment of a method of constructing an image recognition model according to the present disclosure;

FIG. 5 is a flow diagram of yet another embodiment of a method of constructing an image recognition model according to the present disclosure;

FIG. 6 is a flow diagram for one embodiment of an image recognition method according to the present disclosure;

FIG. 7 is a schematic structural diagram of one embodiment of an apparatus for constructing an image recognition model according to the present disclosure;

FIG. 8 is a schematic block diagram of one embodiment of an image recognition device according to the present disclosure;

fig. 9 is a block diagram of an electronic device for implementing a method of constructing an image recognition model according to an embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 illustrates an exemplary system architecture 100 to which an embodiment of an image recognition model construction method or an image recognition model construction apparatus of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or transmit information or the like. Various client applications may be installed on the terminal devices 101, 102, 103.

The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the above-described electronic apparatuses. It may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. And is not particularly limited herein.

The server 105 may provide various services. For example, the server 105 may analyze and process a set of input images acquired from the terminal devices 101, 102, 103 and generate a processing result (e.g., an image recognition model).

The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.

It should be noted that the method for constructing the image recognition model provided by the embodiment of the present disclosure is generally executed by the server 105, and accordingly, the apparatus for constructing the image recognition model is generally disposed in the server 105.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to FIG. 2, a flow 200 of one embodiment of a method of constructing an image recognition model according to the present disclosure is shown. The construction method of the image recognition model comprises the following steps:

step 201, an input image set is acquired.

In this embodiment, an executing entity of the method for constructing the image recognition model (e.g., the server 105 shown in fig. 1) may acquire a set of input images, where the set of input images may include at least one input image.

It should be noted that the input images in the input image set may be images containing faces that are acquired in advance in various ways. For example, the input image set may consist of images obtained from an existing photo library, or of images captured in real time by an image sensor (such as a camera sensor) in an actual application scenario; the present disclosure does not limit the source.

And 202, performing combined training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model.

In this embodiment, the executing entity may perform joint training on the initial super-resolution model and the initial recognition model by using the input image set obtained in step 201, so as to obtain a trained super-resolution model and a trained recognition model.

Here, the initial super-resolution model and the initial recognition model may be predetermined. For example, the initial super-resolution model may be SRCNN (Super-Resolution Convolutional Neural Network), FSRCNN (Fast Super-Resolution Convolutional Neural Network), SRGAN (Super-Resolution Generative Adversarial Network), or the like; the initial recognition model may be an existing classification model such as one from the ResNet (Residual Network) family, or a model designed according to actual needs.

The executing body may perform joint training on the initial super-resolution model and the initial recognition model by using the input image set obtained in step 201, adjusting the parameters of both models with the input image set and stopping training when a joint training stop condition is met, thereby obtaining a trained super-resolution model and a trained recognition model. The stop condition may be reaching a preset number of training iterations, the value of the loss function no longer decreasing, or a preset accuracy threshold being reached.
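The disclosure does not fix a particular optimisation schedule for the joint training. A minimal sketch of one plausible schedule, alternating an update of each model per iteration, is shown below with toy one-parameter linear stand-ins for the two models; the names `sr_gain`, `rec_w` and the linear models are hypothetical, and a real implementation would use deep networks for both parts.

```python
import numpy as np

# Toy stand-ins: the "super-resolution model" is a single gain applied to a
# downsampled signal, and the "recognition model" is a single feature weight
# fitted on the restored signal.
rng = np.random.default_rng(42)
x = rng.normal(size=256)      # stand-in for the input image set
low = 0.5 * x                 # stand-in for the downsampled images

sr_gain = 0.1                 # parameter of the initial super-resolution model
rec_w = 0.1                   # parameter of the initial recognition model
lr = 0.05

for _ in range(300):
    restored = sr_gain * low
    # Super-resolution update: gradient of the reconstruction MSE
    # mean((restored - x)^2) with respect to sr_gain.
    sr_grad = np.mean(2.0 * (restored - x) * low)
    sr_gain -= lr * sr_grad
    # Recognition update: fit the head on the *current* restored images,
    # so the two models adapt to each other during joint training.
    rec_grad = np.mean(2.0 * (rec_w * restored - x) * restored)
    rec_w -= lr * rec_grad
```

With `low = 0.5 * x`, the super-resolution gain converges to 2 (undoing the downsampling) and the recognition weight to 1, illustrating how joint training lets the recognition model track the improving super-resolution output.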

And step 203, combining the trained super-resolution model and the recognition model in a cascading manner to obtain an image recognition model.

In this embodiment, the executing entity may combine the trained super-resolution model obtained in step 202 and the recognition model in a cascade manner to obtain the image recognition model. In this step, the trained super-resolution model is arranged before the recognition model, so that more information is provided to the recognition model and a better result is obtained.

The method for constructing the image recognition model provided by this embodiment of the disclosure first obtains an input image set; then performs joint training on the initial super-resolution model and the initial recognition model by using the input image set to obtain a trained super-resolution model and a trained recognition model; and finally combines the trained super-resolution model and the recognition model in a cascading manner to obtain an image recognition model. By jointly training the initial super-resolution model and the initial recognition model, the method mitigates the influence of images with different resolutions on the classification task, improves the robustness of the image recognition model to low-quality data, and thereby further improves its recognition accuracy.

In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good morals.

With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method of constructing an image recognition model according to the present disclosure. In the application scenario of fig. 3, first, the executing entity 301 obtains a set of input images 302. Then, the executing entity 301 performs joint training on the initial super-resolution model and the initial recognition model by using the input image set 302 to obtain a trained super-resolution model 303 and a trained recognition model 304. Finally, the executing entity 301 combines the trained super-resolution model 303 and the recognition model 304 in a cascade manner to obtain an image recognition model 305.

With continued reference to FIG. 4, FIG. 4 illustrates a flow 400 of another embodiment of a method of constructing an image recognition model according to the present disclosure. The construction method of the image recognition model comprises the following steps:

step 401, an input image set is obtained.

Step 401 is substantially the same as step 201 in the foregoing embodiment, and the specific implementation manner may refer to the foregoing description of step 201, which is not described herein again.

And 402, calculating a loss function of the initial super-resolution model by using the input image set and a restored image set corresponding to the input image set, and updating parameters of the initial super-resolution model by adopting a gradient descent method to obtain the trained super-resolution model.

In this embodiment, an executing entity of the method for constructing the image recognition model (e.g., the server 105 shown in fig. 1) may, after acquiring the input image set, determine a restored image corresponding to each image in the input image set, thereby obtaining a restored image set corresponding to the input image set.

Then, the executing entity may calculate a loss function of the initial super-resolution model by using the input images in the input image set and the corresponding restored images in the restored image set, and iteratively solve step by step using a gradient descent method, thereby obtaining the minimized loss function and the model parameter values.

And finally, updating the parameters of the initial super-resolution model by using the obtained model parameter values so as to obtain the trained super-resolution model and improve the result quality.

And 403, calculating a loss function of the initial recognition model based on the distance between the features of the images in the input image set and the restored image set, and updating parameters of the initial recognition model by adopting a gradient descent method to obtain a trained recognition model.

In this embodiment, the executing entity may calculate the loss function of the initial recognition model based on the distances between the features of the images in the input image set and the restored image set, for example, the images in the input image set and the restored image set may be merged to obtain a final image set, then the distances between the image features in the obtained image set are calculated, and the loss function of the initial recognition model is calculated based on the distances.

And then, carrying out step-by-step iterative solution by adopting a gradient descent method to obtain a minimized loss function and model parameter values, and updating parameters of the initial recognition model by using the obtained model parameter values to obtain a trained recognition model, thereby improving the classification accuracy of the recognition model.

In some optional implementations of this embodiment, the gradient descent method is a stochastic gradient descent method. Stochastic gradient descent obtains the minimized loss function and model parameter values more quickly, improving the efficiency of model training.
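The parameter-update rule behind both gradient descent variants is the same; a minimal sketch is given below on a toy scalar objective (all names hypothetical). In the stochastic variant, the gradient would be estimated on a random mini-batch of training samples rather than computed exactly.

```python
def sgd_step(w: float, grad: float, lr: float = 0.1) -> float:
    # One gradient descent update: w <- w - lr * grad.
    return w - lr * grad

# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3); the iterates
# converge to the minimiser w = 3.
w = 0.0
for _ in range(200):
    w = sgd_step(w, 2.0 * (w - 3.0))
```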

And step 404, connecting the output end of the part before the loss function in the trained super-resolution model to the input end of the recognition model to obtain the image recognition model.

In this embodiment, the executing entity may connect an output end of a part before the loss function in the trained super-resolution model to an input end of the recognition model, so as to obtain the image recognition model. By arranging the trained super-resolution model before the recognition model, more information can be added to the recognition model, so that a better effect is obtained.
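Functionally, the cascade simply means that the super-resolution network's image output (the part before its training loss) becomes the input of the recognition network. A minimal sketch with callables, using hypothetical toy stand-ins (pixel-repetition upsampling and a brightness classifier) in place of trained networks:

```python
from typing import Callable
import numpy as np

def cascade(super_resolve: Callable[[np.ndarray], np.ndarray],
            recognize: Callable[[np.ndarray], int]) -> Callable[[np.ndarray], int]:
    """Combine the trained models in a cascading manner: the
    super-resolution output feeds the recognition model's input."""
    def image_recognition_model(image: np.ndarray) -> int:
        return recognize(super_resolve(image))
    return image_recognition_model

# Toy stand-ins for the two trained models.
sr = lambda img: np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
rec = lambda img: int(img.mean() > 0.5)

model = cascade(sr, rec)
```

The combined `model` can then be called on a single image exactly like an ordinary recognition model.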

As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the method for constructing the image recognition model in this embodiment highlights the step of training the initial super-resolution model and the initial recognition model by using the input image set, improves the efficiency of model training, and also improves the accuracy of the trained super-resolution model and the trained recognition model, and has a wider application range.

With continued reference to fig. 5, fig. 5 illustrates a flow 500 of yet another embodiment of a method of constructing an image recognition model according to the present disclosure. The construction method of the image recognition model comprises the following steps:

step 501, an input image set is obtained.

Step 501 is substantially the same as step 401 in the foregoing embodiment, and the specific implementation manner may refer to the foregoing description of step 401, which is not described herein again.

Step 502, down-sampling the images in the input image set to obtain a down-sampled image set.

In this embodiment, an executing entity of the method for constructing the image recognition model (e.g., the server 105 shown in fig. 1) may downsample each image in the input image set to obtain a corresponding downsampled image, thereby obtaining a downsampled image set containing a downsampled image for each input image. The downsampled images obtained in this step are low-quality images that better match practical application scenarios.
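The disclosure does not prescribe a particular downsampling operator; a minimal sketch using average pooling over non-overlapping blocks (one common choice) is:

```python
import numpy as np

def downsample(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample an H x W (or H x W x C) image by average-pooling
    non-overlapping factor x factor blocks."""
    h, w = image.shape[:2]
    h_out, w_out = h // factor, w // factor
    # Crop so the dimensions divide evenly, then average each block.
    cropped = image[:h_out * factor, :w_out * factor]
    blocks = cropped.reshape(h_out, factor, w_out, factor, *image.shape[2:])
    return blocks.mean(axis=(1, 3))

# A 4x4 image becomes 2x2; each output pixel is the mean of a 2x2 block.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
low = downsample(img, factor=2)
```

Applying `downsample` to every image in the input image set yields the downsampled image set used in the following steps.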

And 503, restoring the images in the downsampled image set by using the initial super-resolution model to obtain a restored image set.

In this embodiment, the executing entity may recover each downsampled image in the downsampled image set by using the initial super-resolution model, so as to obtain a corresponding recovered image, where the recovered image is a high-quality image obtained by recovering the low-quality image obtained in step 502, and further obtain a recovered image set including a recovered image corresponding to each downsampled image in the downsampled image set.

And step 504, calculating the reconstruction loss of the initial super-resolution model based on the input image set and the recovery image set, and updating the parameters of the initial super-resolution model by adopting a gradient descent method to obtain the trained super-resolution model.

In this embodiment, the executing entity may calculate the reconstruction loss using the input images in the input image set and their corresponding restored images in the restored image set, iteratively solve step by step using a gradient descent method to obtain the minimized loss function and model parameter values, and then update the parameters of the initial super-resolution model with the obtained values to obtain the trained super-resolution model.

The result quality of the super-resolution model is improved through the steps.
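The disclosure does not specify the form of the reconstruction loss; mean squared error between the input and restored images is a common choice, sketched below:

```python
import numpy as np

def reconstruction_loss(inputs: np.ndarray, restored: np.ndarray) -> float:
    """Mean-squared-error reconstruction loss over a batch of images.

    inputs, restored: arrays of shape (N, H, W) or (N, H, W, C)."""
    diff = inputs.astype(np.float64) - restored.astype(np.float64)
    return float(np.mean(diff ** 2))

batch = np.ones((2, 4, 4))
perfect = batch.copy()          # identical restoration -> zero loss
off_by_one = batch + 1.0        # constant error of 1 -> loss of 1
```

Minimising this loss drives the super-resolution model's restored images toward the original inputs.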

And 505, merging the input image set, the downsampling image set and the restored image set to obtain a target image set.

In this embodiment, the execution subject may combine the input image set, the downsampled image set, and the restored image set to obtain the target image set.

Step 506, extracting the features of the images in the target image set, and calculating the distance between the features of the images in the target image set.

In the present embodiment, the execution subject described above may extract the feature of each image in the target image set, and calculate the distance between the images in the target image set based on the extracted feature.

Alternatively, before the input image set is obtained, the input images may be labeled and each target object assigned an ID (identity), where a target object is the person represented by a face in an input image. The input images corresponding to the same target object then share the same ID, and each downsampled image and restored image inherits the ID of its input image.

On this basis, in this step the distances between images can be organized by ID: based on the extracted image features, the distances between all images having the same ID are calculated, and then the distances between images having different IDs are calculated.

And 507, calculating a binary loss function of the initial recognition model based on the distance, and updating parameters of the initial recognition model by adopting a gradient descent method to obtain the trained recognition model.

In this embodiment, the executing agent may calculate the binary loss function of the initial recognition model based on the distance calculated in step 506.

Alternatively, when two images have the same ID, the loss for that pair is the square of the distance between their features. When the two images have different IDs, the loss is obtained by subtracting the distance from a preset margin and taking the maximum with zero, i.e., max(margin − distance, 0). Images with the same ID are thereby pulled closer together and images with different IDs pushed further apart, increasing inter-class differences and decreasing intra-class differences.

Then, a gradient descent method is used to iteratively solve for the minimized loss function and the model parameter values, and the parameters of the initial recognition model are updated with the obtained values to obtain the trained recognition model.

The classification accuracy of the recognition model is improved through the steps.

Step 508, connecting the output end of the part preceding the loss function in the trained super-resolution model to the input end of the recognition model to obtain the image recognition model.

Step 508 is substantially the same as step 404 in the foregoing embodiment, and the detailed implementation manner may refer to the foregoing description of step 404, which is not described herein again.
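A minimal sketch of the cascade formed in step 508, with `sr_model` and `rec_model` standing in for the trained networks (any callables with compatible inputs and outputs; the class name is illustrative):

```python
class ImageRecognitionModel:
    """Cascade the super-resolution model (its output taken before the loss
    head) with the recognition model: the restored image feeds the recognizer."""

    def __init__(self, sr_model, rec_model):
        self.sr_model = sr_model
        self.rec_model = rec_model

    def __call__(self, image):
        restored = self.sr_model(image)   # restore the low-quality input
        return self.rec_model(restored)   # classify the restored image
```

The composition behaves as `rec_model(sr_model(image))`, which is exactly the cascading combination described in claim 1.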

As can be seen from fig. 5, compared with the embodiment corresponding to fig. 4, the method for constructing an image recognition model in this embodiment calculates the reconstruction loss of the initial super-resolution model and the binary loss function of the initial recognition model based on the input image set and the restored image set, and updates the parameters of the initial super-resolution model and the initial recognition model by the gradient descent method, so as to obtain the trained super-resolution model and the trained recognition model, thereby improving the output quality of the super-resolution model and the classification accuracy of the recognition model.

With continued reference to fig. 6, fig. 6 illustrates a flow 600 of one embodiment of an image recognition method according to the present disclosure. The image recognition method comprises the following steps:

Step 601, acquiring an image to be recognized.

In this embodiment, the executing body of the image recognition method (for example, the server 105 shown in fig. 1) may acquire an image to be recognized, where the image to be recognized may be an image containing a human face acquired by a camera sensor in a practical face recognition application scenario.

Step 602, inputting the image to be recognized into the image recognition model, and outputting a recognition result corresponding to the image to be recognized.

In this embodiment, the executing body may input the image to be recognized into an image recognition model, and output a recognition result corresponding to the image to be recognized, where the image recognition model may be obtained by the method for constructing the image recognition model in the foregoing embodiment.

After the executing body inputs the image to be recognized into the image recognition model, the image recognition model may first restore the image to be recognized to obtain a corresponding restored image, then extract the features of the image to be recognized and the restored image, classify the images based on the features to obtain a corresponding recognition result, and output the recognition result.
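The inference flow just described can be sketched as follows (the function parameters stand in for the trained sub-models, and concatenating the two feature lists is an illustrative assumption about how the features are combined):

```python
def recognize(image, restore, extract, classify):
    """Restore the input first, extract features from both the original and
    the restored image, then classify on the combined features."""
    restored = restore(image)
    features = extract(image) + extract(restored)  # concatenate feature lists
    return classify(features)
```

Any callables with matching shapes can play the three roles, which keeps the sketch independent of a particular deep learning framework.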

The image recognition method provided by the embodiment of the present disclosure first acquires an image to be recognized, then inputs the image to be recognized into the image recognition model, and outputs a recognition result corresponding to the image to be recognized. The image recognition method in this embodiment recognizes the image to be recognized through the pre-trained image recognition model, thereby improving the accuracy of the recognition result.

With further reference to fig. 7, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for constructing an image recognition model, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.

As shown in fig. 7, the image recognition model construction apparatus 700 of the present embodiment includes: a first acquisition module 701, a training module 702, and a combining module 703. Wherein, the first obtaining module 701 is configured to obtain a set of input images; a training module 702 configured to perform joint training on the initial super-resolution model and the initial recognition model by using the input image set, so as to obtain a trained super-resolution model and a trained recognition model; and the combining module 703 is configured to combine the trained super-resolution model and the recognition model in a cascading manner to obtain an image recognition model.

In this embodiment, in the image recognition model construction apparatus 700: the specific processing of the first obtaining module 701, the training module 702, and the combining module 703 and the technical effects thereof may refer to the related descriptions of steps 201 to 203 in the embodiment corresponding to fig. 2, which are not repeated herein.

In some optional implementations of this embodiment, the training module includes: the first updating submodule is configured to calculate a loss function of the initial super-resolution model by using the input image set and a recovery image set corresponding to the input image set, and update parameters of the initial super-resolution model by adopting a gradient descent method; a second updating sub-module configured to calculate a loss function of the initial recognition model based on distances between features of the images in the input image set and the restored image set, the parameters of the initial recognition model being updated using a gradient descent method.

In some optional implementations of this embodiment, the first update sub-module includes: the down-sampling unit is configured to down-sample the images in the input image set to obtain a down-sampled image set; the recovery unit is configured to recover the images in the downsampled image set by using the initial super-resolution model to obtain a recovered image set; a first calculation unit configured to calculate a reconstruction loss of the initial super-resolution model based on the set of input images and the set of restored images.
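A minimal sketch of the down-sampling unit and the first calculation unit, assuming mean squared error as the reconstruction loss (the patent does not fix its exact form) and treating images as flat pixel lists for brevity:

```python
def downsample(image, factor=2):
    """Naive downsampling: keep every `factor`-th pixel (illustrative only)."""
    return image[::factor]

def reconstruction_loss(originals, restored):
    """Mean squared error between corresponding original and restored images."""
    total, count = 0.0, 0
    for orig, rec in zip(originals, restored):
        for o, r in zip(orig, rec):
            total += (o - r) ** 2
            count += 1
    return total / count
```

The recovery unit, which maps each downsampled image back to full resolution, is the initial super-resolution model itself and is therefore not sketched here.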

In some optional implementations of this embodiment, the second update submodule includes: the merging unit is configured to merge the input image set, the downsampling image set and the recovery image set to obtain a target image set; an extraction unit configured to extract features of images in a target image set; a second calculation unit configured to calculate distances between features of the images in the target image set; a third calculation unit configured to calculate a binary loss function of the initial recognition model based on the distance.

In some optional implementations of this embodiment, the combining module includes: and the connecting sub-module is configured to connect the output end of the part before the loss function in the trained super-resolution model to the input end of the recognition model.

With further reference to fig. 8, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an image recognition apparatus, which corresponds to the method embodiment shown in fig. 6, and which is particularly applicable to various electronic devices.

As shown in fig. 8, the image recognition apparatus 800 of the present embodiment includes: a second acquisition module 801 and an output module 802. The second acquisition module 801 is configured to acquire an image to be recognized; and the output module 802 is configured to input the image to be recognized into the image recognition model and output a recognition result corresponding to the image to be recognized.

In the present embodiment, in the image recognition apparatus 800: the specific processing of the second obtaining module 801 and the output module 802 and the technical effects thereof can refer to the related description of step 601 and step 602 in the corresponding embodiment of fig. 6, which is not repeated herein.

According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.

FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 9, the device 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.

A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 901 performs the respective methods and processes described above, such as the construction method of the image recognition model or the image recognition method. For example, in some embodiments, the image recognition model construction method or the image recognition method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the image recognition method or the method of constructing an image recognition model described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the construction method of the image recognition model or the image recognition method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
