Organism re-identification method and apparatus, electronic device, and storage medium

Document No.: 1875546    Publication date: 2021-11-23    Original language: Chinese

Note: This invention, "Organism re-identification method and apparatus, electronic device, and storage medium", was created by Chen Haibo, Luo Zhipeng, and Yao Yuehan on 2021-09-07. Abstract: The application provides an organism re-identification method, which comprises: acquiring a query image; inputting the query image into an organism detection model to obtain predicted detection information corresponding to the query image; and acquiring, by using the query image, its corresponding predicted detection information, and an organism re-identification model, predicted re-identification information corresponding to the query image and each retrieval image in a retrieval image set. The predicted re-identification information corresponding to the query image and a retrieval image indicates whether the two images contain the same organism. The method offers high re-identification efficiency and high recognition accuracy.

1. An organism re-identification method, the method comprising:

acquiring a query image;

inputting the query image into an organism detection model to obtain predicted detection information corresponding to the query image;

and acquiring, by using the query image, its corresponding predicted detection information, and an organism re-identification model, predicted re-identification information corresponding to the query image and each retrieval image in a retrieval image set, wherein the predicted re-identification information corresponding to the query image and a retrieval image is used to indicate whether the query image and the retrieval image contain the same organism.
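The matching step of claim 1 can be sketched at the shape level as follows. This is a minimal illustration, not the patent's implementation: it assumes the re-identification model has already mapped the query and retrieval images to fixed-length feature vectors, and the cosine-similarity measure and the decision threshold are assumptions introduced here.

```python
import numpy as np

def reid_match(query_feat, gallery_feats, threshold=0.5):
    """Sketch of predicted re-identification information: L2-normalize the
    query feature and every retrieval (gallery) feature, compute cosine
    similarity, and flag per gallery image whether it likely contains the
    same organism. The threshold value is a hypothetical choice."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                       # one similarity per retrieval image
    return sims, sims >= threshold     # scores + same-organism indicators

# usage: one 128-d query feature against 4 retrieval-image features
rng = np.random.default_rng(0)
sims, same = reid_match(rng.normal(size=128), rng.normal(size=(4, 128)))
```

In practice the boolean indicator would be derived from a learned metric rather than a fixed cutoff; the cutoff here only makes the indicator concrete.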

2. The organism re-identification method according to claim 1, further comprising:

constructing a preset detection network from an SE-ResNeXt50 network, an RPN network, and first to third head structures;

acquiring a training data set, wherein each piece of training data in the training data set comprises a training image and labeled detection information corresponding to the training image;

for each training image in the training data set, inputting the training image into the SE-ResNeXt50 network to obtain feature information corresponding to the training image;

inputting the training image into the RPN to obtain ROI information corresponding to the training image;

inputting the feature information and the ROI information corresponding to the training image into the first head structure to obtain first classification information and first regression information corresponding to the training image;

inputting the feature information and the first regression information corresponding to the training image into the second head structure to obtain second classification information and second regression information corresponding to the training image;

inputting the feature information and the second regression information corresponding to the training image into the third head structure to obtain third classification information and third regression information corresponding to the training image as the predicted detection information corresponding to the training image;

training the preset detection network by using the predicted detection information and the labeled detection information corresponding to the training image to obtain the organism detection model;

wherein the first to third head structures are identical in structure, each head structure comprising an ROI Align layer, a classification branch, and a regression branch, the classification branch comprising two fully connected layers and the regression branch comprising two convolutional layers and one fully connected layer.
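The cascaded three-head flow of claim 2, where each head refines the previous head's regression, can be sketched as below. This is a shape-level NumPy sketch under stated assumptions: ROI Align is faked by a global mean over the feature map, the branches are single random projections rather than the claimed two-layer branches, and all layer sizes (256 channels, 3 classes, 4 box coordinates) are illustrative, not the patent's dimensions.

```python
import numpy as np

def head(features, boxes, rng):
    """One head of the cascade: pool a feature per box (stand-in for the
    ROI Align layer), then a classification branch producing class scores
    and a regression branch producing deltas that refine the boxes."""
    pooled = np.stack([features.mean(axis=(1, 2)) for _ in boxes])
    cls_scores = pooled @ rng.normal(size=(pooled.shape[1], 3))     # classification branch
    deltas = pooled @ rng.normal(size=(pooled.shape[1], 4)) * 0.01  # regression branch
    return cls_scores, boxes + deltas                               # refined boxes

rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 32, 32))   # backbone feature information (C, H, W)
rois = rng.uniform(0, 32, size=(5, 4))   # proposal boxes, i.e. ROI information from the RPN

cls1, reg1 = head(feats, rois, rng)      # first head: feature info + ROI info
cls2, reg2 = head(feats, reg1, rng)      # second head: feature info + first regression info
cls3, reg3 = head(feats, reg2, rng)      # third head: final predicted detection information
```

The design point the claim encodes is that each head sees progressively better-localized boxes, which is the idea behind cascade detection.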

3. The organism re-identification method according to claim 2, wherein the SE-ResNeXt50 network comprises second to fifth stage modules, and the inputting the training image into the SE-ResNeXt50 network to obtain the feature information corresponding to the training image comprises:

inputting the training image into the second stage module to obtain a second feature map corresponding to the training image;

inputting the second feature map corresponding to the training image into the third stage module to obtain a third feature map corresponding to the training image;

inputting the third feature map corresponding to the training image into the fourth stage module to obtain a fourth feature map corresponding to the training image;

inputting the fourth feature map corresponding to the training image into the fifth stage module to obtain a fifth feature map corresponding to the training image;

and constructing a feature pyramid from the second to fifth feature maps corresponding to the training image, to obtain a plurality of feature maps corresponding to the training image, ordered by feature-map size, as the feature information corresponding to the training image.
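The pyramid construction of claim 3 can be sketched as a standard top-down merge of the stage-2 to stage-5 maps. This is a hedged sketch, not the patent's design: the 1x1 lateral convolutions are modelled as random channel projections, nearest-neighbour upsampling stands in for whatever interpolation is actually used, and the 256-channel width follows common feature-pyramid practice rather than anything the claim specifies.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def build_fpn(stage_maps, out_ch=256, seed=0):
    """Build a feature pyramid from stage feature maps ordered large->small.
    Each map gets a lateral 1x1 projection (a matmul over the channel
    axis), then maps are merged top-down by upsample-and-add."""
    rng = np.random.default_rng(seed)
    laterals = []
    for fm in stage_maps:
        w = rng.normal(size=(out_ch, fm.shape[0])) * 0.01
        laterals.append(np.einsum('oc,chw->ohw', w, fm))
    pyramid = [laterals[-1]]              # start from the smallest map
    for lat in reversed(laterals[:-1]):   # merge top-down
        pyramid.append(lat + upsample2x(pyramid[-1]))
    return pyramid[::-1]                  # ordered by feature-map size, large -> small

# hypothetical stage maps: spatial size halves, channel count grows per stage
maps = [np.ones((c, s, s)) for c, s in [(256, 64), (512, 32), (1024, 16), (2048, 8)]]
pyr = build_fpn(maps)
```

The returned list matches the claim's "plurality of feature maps ordered by feature-map size".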

4. The method according to claim 2, wherein each piece of training data in the training data set further includes labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, the labeled re-identification information corresponding to the training image and a retrieval image being used to indicate whether the training image and the retrieval image contain the same organism, the method further comprising:

constructing a preset re-identification network from a ResNet50 network, a pooling layer, and a BNNeck network;

obtaining a residual feature map corresponding to the training image by using the training image, its corresponding predicted detection information, and the ResNet50 network;

inputting the residual feature map corresponding to the training image into the pooling layer to obtain pooled features corresponding to the training image;

inputting the pooled features corresponding to the training image into the BNNeck network to obtain normalized features corresponding to the training image;

acquiring predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image;

and training the preset re-identification network by using the predicted re-identification information, and the labeled re-identification information, corresponding to the training image and each retrieval image in the retrieval image set, to obtain the organism re-identification model.

5. The organism re-identification method according to claim 4, further comprising:

performing data augmentation on each training image by using the predicted detection information corresponding to that training image, to obtain an augmented image corresponding to each training image, and storing the augmented images in the training data set as new training images;

reordering the retrieval images in the retrieval image set based on the augmented images corresponding to the training images, to obtain ranking information corresponding to each retrieval image in the retrieval image set;

wherein the acquiring of the predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image comprises:

acquiring the predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image, based on the ranking information corresponding to each retrieval image in the retrieval image set.
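The augmentation of claim 5 (and the occlusion rationale stated later in the summary) can be sketched as random erasing guided by the predicted detection box. This is an assumed concrete form of the augmentation, not the patent's exact scheme: the erased-patch fraction, the zero fill value, and the single-channel image are all illustrative choices.

```python
import numpy as np

def occlude_in_box(image, box, frac=0.3, seed=0):
    """Blank out a random patch inside the predicted detection box,
    simulating partial occlusion of the organism. `box` is (x0, y0, x1, y1)
    in pixel coordinates; `frac` is the patch side as a fraction of the
    box side (a hypothetical parameter)."""
    rng = np.random.default_rng(seed)
    x0, y0, x1, y1 = box
    bw, bh = x1 - x0, y1 - y0
    pw, ph = max(1, int(bw * frac)), max(1, int(bh * frac))
    px = x0 + rng.integers(0, bw - pw + 1)
    py = y0 + rng.integers(0, bh - ph + 1)
    out = image.copy()                    # leave the original training image intact
    out[py:py + ph, px:px + pw] = 0       # erased region -> augmented image
    return out

img = np.ones((64, 64), dtype=np.float32)
aug = occlude_in_box(img, (8, 8, 40, 40))
```

Adding such partially occluded copies to the training data set is what gives the model the occlusion robustness the summary claims.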

6. The organism re-identification method according to claim 4, wherein the BNNeck network comprises a normalization layer and a fully connected layer, and the inputting the pooled features corresponding to the training image into the BNNeck network to obtain the normalized features corresponding to the training image comprises:

inputting the pooled features corresponding to the training image into the normalization layer to obtain the normalized features corresponding to the training image, wherein the pooled features corresponding to the training image are used to calculate a first loss value and a second loss value corresponding to the training image;

inputting the normalized features corresponding to the training image into the fully connected layer to obtain fully connected features corresponding to the training image, wherein the fully connected features corresponding to the training image are used to calculate a third loss value corresponding to the training image;

and the first, second, and third loss values corresponding to each training image are used to train the preset re-identification network.
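The BNNeck structure of claim 6 can be sketched as below: a batch-normalization step between the pooled features and a bias-free fully connected layer. The claim does not say which losses the first and second loss values are; attaching e.g. triplet and center loss to the pooled features and a classification loss to the fully connected output is an assumption drawn from common BNNeck usage. The batch statistics here are computed in inference style for simplicity, without learnable scale/shift.

```python
import numpy as np

def bnneck_forward(pooled, w_fc, eps=1e-5):
    """BNNeck sketch: normalize the pooled features per dimension over the
    batch (the normalization layer), then project with a fully connected
    layer. Pooled features feed the first/second losses; the fully
    connected output feeds the third (classification) loss."""
    mu, var = pooled.mean(axis=0), pooled.var(axis=0)
    normalized = (pooled - mu) / np.sqrt(var + eps)  # normalized features
    logits = normalized @ w_fc                       # fully connected features
    return normalized, logits

rng = np.random.default_rng(0)
feats = rng.normal(loc=3.0, size=(8, 64))            # pooled features for a batch of 8
norm, logits = bnneck_forward(feats, rng.normal(size=(64, 10)))
```

The point of the split is that the metric losses operate in the un-normalized feature space while the classifier sees normalized features, which is what improves the clustering behavior the summary mentions.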

7. The organism re-identification method according to claim 6, wherein the fully connected features corresponding to the training image are further used to calculate a label smoothing loss value corresponding to the training image;

and the first loss value, the second loss value, the third loss value, and the label smoothing loss value corresponding to each training image are used to train the preset re-identification network.

8. The method according to claim 4, wherein the training of the preset re-identification network by using the predicted re-identification information, and the labeled re-identification information, corresponding to the training image and each retrieval image in the retrieval image set, to obtain the organism re-identification model comprises:

training the preset re-identification network under a preset learning-rate strategy, by using the predicted re-identification information, and the labeled re-identification information, corresponding to the training image and each retrieval image in the retrieval image set, to obtain the organism re-identification model;

wherein the preset learning-rate strategy indicates the learning rate corresponding to each epoch: from the 1st to the N1-th epoch, the learning rate gradually increases to a first preset learning rate; from the (N1+1)-th to the N2-th epoch, the learning rate is a second preset learning rate; from the (N2+1)-th to the N3-th epoch, the learning rate is a third preset learning rate; and from the (N3+1)-th to the N4-th epoch, the learning rate is a fourth preset learning rate, the first preset learning rate being smaller than the second preset learning rate, the second preset learning rate being greater than the third preset learning rate, and the third preset learning rate being greater than the fourth preset learning rate.
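The learning-rate strategy of claim 8 can be sketched directly. The claim fixes only the shape of the schedule (warm-up, then three constant stages with lr1 < lr2 and lr2 > lr3 > lr4); the epoch boundaries and concrete rate values below are illustrative assumptions.

```python
def learning_rate(epoch, n=(10, 40, 70, 100),
                  lrs=(1e-4, 3.5e-4, 3.5e-5, 3.5e-6)):
    """Preset learning-rate strategy: linear warm-up over epochs 1..N1 to
    the first preset rate, then three constant stages. Boundaries `n` and
    rates `lrs` are hypothetical; the claim only constrains their ordering
    (lr1 < lr2, lr2 > lr3 > lr4)."""
    n1, n2, n3, _ = n
    lr1, lr2, lr3, lr4 = lrs
    if epoch <= n1:
        return lr1 * epoch / n1   # gradual increase to the first preset rate
    if epoch <= n2:
        return lr2                # second preset rate
    if epoch <= n3:
        return lr3                # third preset rate
    return lr4                    # fourth preset rate
```

A training loop would query `learning_rate(epoch)` once per epoch and set the optimizer's rate accordingly.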

9. The organism re-identification method according to claim 1, further comprising:

acquiring target category information;

determining at least one target retrieval image from the set of retrieval images based on the target category information;

detecting, based on the predicted re-identification information corresponding to the query image and each target retrieval image, whether a target organism corresponding to the target category information is present in the query image;

and when the target organism is present in the query image, acquiring current state information and/or spatial distribution trend information of the target organism by using the query image.

10. An organism re-identification apparatus, comprising:

the query image module is used for acquiring a query image;

the prediction detection module is used for inputting the query image into an organism detection model to obtain predicted detection information corresponding to the query image;

and the image re-identification module is used for acquiring, by using the query image, the predicted detection information corresponding to the query image, and an organism re-identification model, predicted re-identification information corresponding to the query image and each retrieval image in the retrieval image set, wherein the predicted re-identification information corresponding to the query image and a retrieval image is used to indicate whether the query image and the retrieval image contain the same organism.

11. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the organism re-identification method according to any one of claims 1 to 9.

12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the organism re-identification method according to any one of claims 1 to 9.

Technical Field

The present application relates to the field of deep learning technologies, and in particular, to an organism re-identification method and apparatus, an electronic device, and a storage medium.

Background

Endangered species are especially important indicators of biodiversity and environmental health, and the protection of wildlife is crucial for maintaining a healthy, balanced ecosystem and for ensuring sustained global biodiversity. Computer vision techniques can collect large amounts of image data from camera traps and even drones, use these images to build edge-to-cloud systems, and be applied to intelligent imaging sensors that capture images and video of wildlife for monitoring.

Chinese patent CN208282866U discloses a health-detection robot for animal husbandry. A moving mechanism travels linearly on a track in a farm; a route-planning device provides a manually set inspection route for the moving mechanism; a radio-frequency positioning device records the detection robot's real-time position; and an ear-tag reading device reads the individual animal identification information stored in an animal's ear tag and transmits it to an embedded computing device. The detected inspection-point marks, ear tags, body temperature, images, animal activity, ambient temperature and humidity, air concentration, and air odor are aggregated and transmitted through a wireless network interface to an external control host for display. Automation and robotics are thereby applied to automatic animal health detection in animal husbandry, improving working efficiency and reducing the labor intensity of workers.

However, the above prior art relies on information from only one part of the animal's body for detection, and its accuracy is therefore low.

Disclosure of Invention

The application aims to provide an organism re-identification method and apparatus, an electronic device, and a storage medium, solving the prior-art problem of low accuracy when re-identifying organisms.

The purpose of the application is achieved by the following technical solutions:

In a first aspect, the present application provides an organism re-identification method, the method comprising: acquiring a query image; inputting the query image into an organism detection model to obtain predicted detection information corresponding to the query image; and acquiring, by using the query image, its corresponding predicted detection information, and an organism re-identification model, predicted re-identification information corresponding to the query image and each retrieval image in a retrieval image set, wherein the predicted re-identification information corresponding to the query image and a retrieval image is used to indicate whether the query image and the retrieval image contain the same organism.

This technical solution has the advantages that predicted detection information corresponding to each query image is obtained from the query image; predicted re-identification information corresponding to the query image and each retrieval image in the retrieval image set is obtained from the query image, its corresponding predicted detection information, and the organism re-identification model; and the predicted re-identification information corresponding to the query image and a retrieval image indicates whether the two images contain the same organism. The re-identification efficiency is high, and the recognition results are accurate.

In some optional embodiments, the method further comprises: constructing a preset detection network from an SE-ResNeXt50 network, an RPN network, and first to third head structures; acquiring a training data set, wherein each piece of training data in the training data set comprises a training image and labeled detection information corresponding to the training image; for each training image in the training data set, inputting the training image into the SE-ResNeXt50 network to obtain feature information corresponding to the training image; inputting the training image into the RPN network to obtain ROI information corresponding to the training image; inputting the feature information and the ROI information corresponding to the training image into the first head structure to obtain first classification information and first regression information corresponding to the training image; inputting the feature information and the first regression information corresponding to the training image into the second head structure to obtain second classification information and second regression information corresponding to the training image; inputting the feature information and the second regression information corresponding to the training image into the third head structure to obtain third classification information and third regression information corresponding to the training image as the predicted detection information corresponding to the training image; and training the preset detection network by using the predicted detection information and the labeled detection information corresponding to the training image to obtain the organism detection model. The first to third head structures are identical in structure; each head structure comprises an ROI Align layer, a classification branch, and a regression branch, where the classification branch comprises two fully connected layers and the regression branch comprises two convolutional layers and one fully connected layer.

This technical solution has the advantages that building the preset detection network from an SE-ResNeXt50 network combined with an RPN network and first to third head structures allows the network to be made deeper while mitigating the vanishing-gradient and accuracy-degradation problems of deep networks, keeping both speed and precision under control. A training data set is acquired; for each training image in the training data set, feature information corresponding to the training image is obtained, along with ROI information corresponding to the training image; first classification and regression information is obtained from the feature information and the ROI information; second classification and regression information is obtained from the feature information and the first regression information; third classification and regression information, serving as the predicted detection information corresponding to the training image, is obtained from the feature information and the second regression information; and the preset detection network is trained with the predicted detection information and the labeled detection information to obtain the organism detection model. The resulting organism detection model is highly accurate, and using it for organism detection further improves detection precision. Because the first to third head structures are identical in structure, detection is also faster.

In some optional embodiments, the SE-ResNeXt50 network includes second to fifth stage modules, and the inputting the training image into the SE-ResNeXt50 network to obtain the feature information corresponding to the training image includes: inputting the training image into the second stage module to obtain a second feature map corresponding to the training image; inputting the second feature map corresponding to the training image into the third stage module to obtain a third feature map corresponding to the training image; inputting the third feature map corresponding to the training image into the fourth stage module to obtain a fourth feature map corresponding to the training image; and inputting the fourth feature map corresponding to the training image into the fifth stage module to obtain a fifth feature map corresponding to the training image;

and constructing a feature pyramid from the second to fifth feature maps corresponding to the training image, to obtain a plurality of feature maps corresponding to the training image, ordered by feature-map size, as the feature information corresponding to the training image. This technical solution has the benefit that constructing a feature pyramid makes the acquisition of feature information faster and the acquired feature information more accurate.

In some optional embodiments, each piece of training data in the training data set further includes labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, where the labeled re-identification information corresponding to the training image and a retrieval image is used to indicate whether the training image and the retrieval image contain the same organism, and the method further includes: constructing a preset re-identification network from a ResNet50 network, a pooling layer, and a BNNeck network; obtaining a residual feature map corresponding to the training image by using the training image, its corresponding predicted detection information, and the ResNet50 network; inputting the residual feature map corresponding to the training image into the pooling layer to obtain pooled features corresponding to the training image; inputting the pooled features corresponding to the training image into the BNNeck network to obtain normalized features corresponding to the training image; acquiring predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image; and training the preset re-identification network by using the predicted re-identification information, and the labeled re-identification information, corresponding to the training image and each retrieval image in the retrieval image set, to obtain the organism re-identification model.

This technical solution has the advantages that a preset re-identification network is built from a ResNet50 network, a pooling layer, and a BNNeck network; a residual feature map corresponding to the training image is obtained from the training image, its corresponding predicted detection information, and the ResNet50 network; pooled features are obtained from the residual feature map; normalized features are obtained from the pooled features; predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set is obtained from the normalized features; and the preset re-identification network is trained with the predicted and labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set to obtain the organism re-identification model. The resulting re-identification model computes faster, converges more readily, and yields higher accuracy when used for organism re-identification.

In some optional embodiments, the method further comprises: performing data augmentation on each training image by using the predicted detection information corresponding to that training image, to obtain an augmented image corresponding to each training image, and storing the augmented images in the training data set as new training images; and reordering the retrieval images in the retrieval image set based on the augmented images corresponding to the training images, to obtain ranking information corresponding to each retrieval image in the retrieval image set. The acquiring of the predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image includes: acquiring the predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set by using the normalized features corresponding to the training image, based on the ranking information corresponding to each retrieval image in the retrieval image set.

This technical solution has the advantages that augmenting the training images adds varying degrees of occlusion to them, which reduces the risk of model overfitting while providing a degree of robustness to occlusion, and that reordering the retrieval images in the retrieval image set improves the recognition results.

In some optional embodiments, the BNNeck network includes a normalization layer and a fully connected layer, and the inputting the pooled features corresponding to the training image into the BNNeck network to obtain the normalized features corresponding to the training image includes: inputting the pooled features corresponding to the training image into the normalization layer to obtain the normalized features corresponding to the training image, wherein the pooled features corresponding to the training image are used to calculate a first loss value and a second loss value corresponding to the training image; and inputting the normalized features corresponding to the training image into the fully connected layer to obtain fully connected features corresponding to the training image, wherein the fully connected features corresponding to the training image are used to calculate a third loss value corresponding to the training image. The first, second, and third loss values corresponding to each training image are used to train the preset re-identification network. This technical solution has the advantage that passing the pooled features through the BNNeck network to obtain normalized features improves the clustering behavior of the training-image features, so that the preset re-identification network trained on them is more accurate.

In some optional embodiments, the fully connected features corresponding to the training image are further used to calculate a label smoothing loss value corresponding to the training image; and the first loss value, the second loss value, the third loss value, and the label smoothing loss value corresponding to each training image are used to train the preset re-identification network. This technical solution has the benefit that the added label-smoothing training strategy increases the generalization ability of the re-identification model, prevents overfitting, and keeps the deep layers of the re-identification model stable.

In some alternative embodiments, said utilizing said training image and said set of retrieval images respectivelyTraining the preset re-recognition network by using the prediction re-recognition information corresponding to each retrieval image, the training image and the labeling re-recognition information corresponding to each retrieval image in the retrieval image set to obtain the biological weight recognition model, wherein the training comprises: training the preset re-recognition network by using a preset learning rate strategy by using the prediction re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively and the labeling re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively to obtain the biological weight recognition model; wherein the preset learning rate strategy is used for indicating the learning rate corresponding to each epoch, and the first to Nth epochs1Gradually increasing the learning rate corresponding to the epochs to a first preset learning rate Nth1+1 to Nth2The learning rate corresponding to the epochs is a second preset learning rate, Nth2+1 to Nth3The learning rate corresponding to the epoch is the third predetermined learning rate, Nth3+1 to Nth4The learning rate corresponding to the epochs is a fourth preset learning rate, the first preset learning rate is smaller than the second preset learning rate, the second preset learning rate is larger than the third preset learning rate, and the third preset learning rate is larger than the fourth preset learning rate. 
The technical scheme has the advantages that training the preset re-recognition network with the preset learning rate strategy slows down the early stage of training, which helps prevent premature over-fitting in the initial stage of obtaining the biological weight recognition model and keeps the deep biological weight recognition model stable.
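The staged schedule described above (warm up to a first rate, then step through progressively configured rates) can be sketched as follows; the epoch boundaries and rate values used here are hypothetical placeholders, not values fixed by this application:

```python
def learning_rate(epoch, n1=10, n2=40, n3=70, n4=120,
                  lr1=3.5e-5, lr2=3.5e-4, lr3=3.5e-5, lr4=3.5e-6):
    """Preset learning rate strategy: linear warmup to lr1 over the
    first n1 epochs, then fixed rates lr2, lr3, lr4 for the following
    stages (lr1 < lr2, lr2 > lr3 > lr4)."""
    if epoch <= n1:
        return lr1 * epoch / n1      # gradual increase to the first preset rate
    if epoch <= n2:
        return lr2                   # second preset learning rate
    if epoch <= n3:
        return lr3                   # third preset learning rate
    return lr4                       # fourth preset learning rate
```

The warmup keeps early gradient updates small, which is the mechanism behind the "slowed-down initial stage" noted above.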

In some optional embodiments, the method further comprises: acquiring target category information; determining at least one target retrieval image from the set of retrieval images based on the target category information; detecting whether a target organism corresponding to the target category information exists in the query image or not based on the prediction re-identification information corresponding to the query image and each target retrieval image; and when the target organism exists in the query image, acquiring the current state information and/or the spatial distribution trend information of the target organism by using the query image. The technical scheme has the advantages that at least one target retrieval image is determined based on the target category information, whether a target organism corresponding to the target category information exists in the query image is detected based on the query image and the prediction re-identification information corresponding to each target retrieval image, when the target organism exists in the query image, the current state information and/or the spatial distribution trend information of the target organism are/is acquired by using the query image, and the efficiency and the accuracy of organism identification are improved.

In a second aspect, the present application provides a biological weight recognition device, the device comprising: the query image module is used for acquiring a query image; the prediction detection module is used for inputting the query image into a biological detection model to obtain prediction detection information corresponding to the query image; and the image re-identification module is used for acquiring the prediction re-identification information corresponding to the query image and each retrieval image in the retrieval image set by utilizing the query image, the prediction detection information corresponding to the query image and the biological weight identification model, wherein the prediction re-identification information corresponding to the query image and the retrieval image is used for indicating whether the query image and the retrieval image contain the same organism or not.

In some optional embodiments, the biological weight recognition device further includes: the preset detection module is used for constructing a preset detection network by utilizing an SE-ResNext50 network, an RPN network and the first head structure to the third head structure; the training data module is used for acquiring a training data set, wherein each training data in the training data set comprises a training image and label detection information corresponding to the training image; a feature information module, configured to, for each training image in the training data set, input the training image into the SE-ResNext50 network to obtain feature information corresponding to the training image; the ROI information module is used for inputting the training image into the RPN to obtain ROI information corresponding to the training image; the first classification module is used for inputting the characteristic information and the ROI information corresponding to the training image into the first head structure to obtain first classification information and first regression information corresponding to the training image; the second classification module is used for inputting the characteristic information and the first regression information corresponding to the training image into the second head structure to obtain second classification information and second regression information corresponding to the training image; the detection information module is used for inputting the feature information and the second regression information corresponding to the training image into the third head structure to obtain third classification information and third regression information corresponding to the training image as prediction detection information corresponding to the training image; and the detection model module is used for training the preset detection network by utilizing the prediction detection information and the label detection information corresponding to the training image to obtain the organism detection model.

In some optional embodiments, the SE-ResNext50 network comprises second to fifth stage modules, the feature information module comprising: the second feature map unit is used for inputting the training image into the second stage module to obtain a second feature map corresponding to the training image; a third feature map unit, configured to input the second feature map corresponding to the training image into a third stage module, so as to obtain a third feature map corresponding to the training image; a fourth feature map unit, configured to input the third feature map corresponding to the training image into a fourth stage module to obtain a fourth feature map corresponding to the training image; a fifth feature map unit, configured to input a fourth feature map corresponding to the training image into the fifth stage module, so as to obtain a fifth feature map corresponding to the training image; and the feature information unit is used for constructing a feature pyramid by using the second feature map to the fifth feature map corresponding to the training image to obtain a plurality of feature maps which are arranged according to the feature map size sequence and correspond to the training image and serve as feature information corresponding to the training image.

In some optional embodiments, each training data in the training data set further includes labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, where the labeled re-identification information corresponding to the training image and the retrieval image is used to indicate whether the training image and the retrieval image contain the same organism, and the biological weight recognition apparatus further includes: the preset network module is used for constructing a preset re-identification network by utilizing a ResNet50 network, a pooling layer and a BNNeck network; the graph residual error characteristic module is used for acquiring the graph residual error characteristic corresponding to the training image by utilizing the training image and the corresponding prediction detection information thereof and the ResNet50 network; the pooling characteristic module is used for inputting the graph residual error characteristics corresponding to the training images into the pooling layer to obtain pooling characteristics corresponding to the training images; the normalized feature module is used for inputting the pooled features corresponding to the training images into the BNNeck network to obtain the normalized features corresponding to the training images; the prediction re-identification module is used for acquiring prediction re-identification information corresponding to the training images and the retrieval images in the retrieval image set respectively by utilizing the normalization features corresponding to the training images; and the re-recognition model module is used for training the preset re-recognition network by utilizing the predicted re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively, and the labeled re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively, to obtain the biological weight recognition model.

In some optional embodiments, the biological weight recognition device further includes: the augmented image module is used for performing data augmentation on each training image by using the prediction detection information corresponding to each training image to obtain the augmented image corresponding to each training image and storing the augmented image into the training data set to serve as a new training image; the ranking information module is used for reordering all the retrieval images in the retrieval image set based on the augmentation images corresponding to the training images to obtain ranking information corresponding to the retrieval images in the retrieval image set; a prediction re-identification module, configured to obtain, by using the normalized features corresponding to the training images, prediction re-identification information corresponding to the training images and the retrieval images in the retrieval image set, where the prediction re-identification module includes: and the prediction re-identification unit is used for acquiring the prediction re-identification information corresponding to the training images and the retrieval images in the retrieval image set respectively based on the ranking information corresponding to the retrieval images in the retrieval image set by utilizing the normalization features corresponding to the training images.

In some optional embodiments, the BNNeck network comprises a normalization layer and a fully connected layer, the normalization feature module comprising: the normalized feature unit is used for inputting the pooled features corresponding to the training images into the normalization layer to obtain normalized features corresponding to the training images, and the pooled features corresponding to the training images are used for calculating a first loss value and a second loss value corresponding to the training images; a fully connected feature unit, configured to input the normalized feature corresponding to the training image into the fully connected layer, so as to obtain a fully connected feature corresponding to the training image, where the fully connected feature corresponding to the training image is used to calculate a third loss value corresponding to the training image; and the first loss value, the second loss value and the third loss value corresponding to each training image are used for training the preset re-recognition network.

In some optional embodiments, the fully connected feature unit is further configured to calculate a label smoothness value corresponding to the training image; and the first loss value, the second loss value, the third loss value and the label smoothness value corresponding to each training image are used for training the preset re-recognition network.

In some optional embodiments, the re-recognition model module comprises: a re-recognition model unit, configured to train the preset re-recognition network with a preset learning rate strategy to obtain the biological weight recognition model; wherein the preset learning rate strategy indicates the learning rate corresponding to each epoch: the learning rate for the 1st to N1-th epochs gradually increases to a first preset learning rate; the learning rate for the (N1+1)-th to N2-th epochs is a second preset learning rate; the learning rate for the (N2+1)-th to N3-th epochs is a third preset learning rate; the learning rate for the (N3+1)-th to N4-th epochs is a fourth preset learning rate; the first preset learning rate is smaller than the second preset learning rate, the second preset learning rate is larger than the third preset learning rate, and the third preset learning rate is larger than the fourth preset learning rate.

In some optional embodiments, the biological weight recognition device further includes: the target category module is used for acquiring target category information; a target retrieval module for determining at least one target retrieval image from the retrieval image set based on the target category information; the query detection module is used for detecting whether a target organism corresponding to the target category information exists in the query image or not based on the query image and the prediction re-identification information corresponding to each target retrieval image; and the state distribution module is used for acquiring the current state information and/or the spatial distribution trend information of the target organism by utilizing the query image when the target organism exists in the query image.

In a third aspect, the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.

In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods described above.

The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical solutions of the present application clearer and implementable according to the content of the description, the following description is made with reference to the preferred embodiments of the present application and the detailed drawings.

Drawings

The present application is further described below with reference to the drawings and examples.

Fig. 1 is a schematic flow chart of a biological weight recognition method according to an embodiment of the present application;

FIG. 2 is a schematic diagram illustrating a method for identifying a weight of a living being according to an embodiment of the present disclosure;

fig. 3 is a partial schematic flow chart of another biological weight recognition method provided in the embodiments of the present application;

FIG. 4 is a schematic structural diagram of an SE-ResNext50 module according to an embodiment of the present disclosure;

fig. 5 is a schematic flowchart of obtaining feature information corresponding to a training image according to an embodiment of the present disclosure;

fig. 6 is a partial schematic flow chart of another biological weight recognition method provided in the embodiments of the present application;

fig. 7 is a partial schematic flow chart of another biological weight recognition method provided in the embodiments of the present application;

fig. 8 is a schematic flowchart of a process for obtaining predicted re-identification information according to an embodiment of the present application;

FIG. 9 is a schematic flow chart illustrating obtaining normalized features corresponding to a training image according to an embodiment of the present disclosure;

FIG. 10 is a schematic structural diagram of obtaining a normalized feature corresponding to the training image according to an embodiment of the present disclosure;

FIG. 11 is a schematic flow chart of obtaining a biological weight recognition model according to an embodiment of the present disclosure;

fig. 12 is a partial schematic flow chart of another biological weight recognition method provided in the embodiments of the present application;

fig. 13 is a schematic structural diagram of a biological weight recognition device according to an embodiment of the present application;

fig. 14 is a partial structural schematic view of another biological weight recognition device provided in an embodiment of the present application;

fig. 15 is a schematic structural diagram of a feature information module according to an embodiment of the present application;

fig. 16 is a partial structural schematic view of another biological weight recognition device provided in the embodiment of the present application;

fig. 17 is a partial schematic structural view of another biological weight recognition device according to an embodiment of the present application;

FIG. 18 is a block diagram of a normalized feature module according to an embodiment of the present disclosure;

FIG. 19 is a schematic structural diagram of a re-recognition model module provided in an embodiment of the present application;

fig. 20 is a partial structural schematic view of another biological weight recognition device provided in the embodiment of the present application;

fig. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present application;

fig. 22 is a schematic structural diagram of a program product for implementing a biological weight recognition method according to an embodiment of the present application.

Detailed Description

The present application is further described with reference to the accompanying drawings and the detailed description, and it should be noted that, in the present application, the embodiments or technical features described below may be arbitrarily combined to form a new embodiment without conflict.

Referring to fig. 1 and 2, the present application provides a biological weight recognition method including steps S101 to S103.

The living body in the present application refers to a living individual having a certain volume and shape, which can be photographed by a hand-held or fixed camera. In some embodiments, the organism is, for example, a tiger, lion, wolf, leopard, elephant, orangutan, or the like.

Step S101: a query image is obtained.

The query image may be obtained from an image library stored in a storage medium in advance, may be manually input by a human, or may be retrieved and obtained from a server.

Step S102: and inputting the query image into a biological detection model to obtain the prediction detection information corresponding to the query image.

Step S103: and acquiring the predicted re-identification information corresponding to the query image and each retrieval image in the retrieval image set by using the query image, the corresponding predicted detection information and the biological weight identification model thereof, wherein the predicted re-identification information corresponding to the query image and the retrieval image is used for indicating whether the query image and the retrieval image contain the same organism or not.

Therefore, the prediction detection information corresponding to each query image is obtained based on the query image, the prediction re-identification information corresponding to each retrieval image in the retrieval image set is obtained based on the query image, the prediction detection information corresponding to the query image and the biological weight identification model, the prediction re-identification information corresponding to the query image and the retrieval image is used for indicating whether the query image and the retrieval image contain the same biological body, the re-identification efficiency is high, and the identification result accuracy is high.
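The comparison stage of step S103 can be illustrated with a minimal sketch, assuming the biological weight recognition model has already produced one feature vector per image (the feature extraction itself is not shown); the cosine metric and the threshold value are illustrative assumptions, not fixed by this application:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def re_identify(query_feat, gallery_feats, threshold=0.5):
    # For each retrieval-image feature, predict whether it contains the
    # same organism as the query image (one True/False per gallery entry).
    return [cosine_similarity(query_feat, g) >= threshold
            for g in gallery_feats]
```

The boolean list plays the role of the predicted re-identification information: one indicator per retrieval image in the retrieval image set.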

Referring to fig. 3, in some embodiments, the method may further include steps S104 to S111.

Step S104: and constructing a preset detection network by utilizing the SE-ResNext50 network, the RPN network and the first head structure to the third head structure.

Step S105: acquiring a training data set, wherein each training data in the training data set comprises a training image and label detection information corresponding to the training image.

Step S106: and inputting the training images into the SE-ResNext50 network aiming at each training image in the training data set to obtain the characteristic information corresponding to the training images.

Step S107: and inputting the training image into the RPN to obtain ROI information corresponding to the training image.

Step S108: and inputting the characteristic information and the ROI information corresponding to the training image into the first head structure to obtain first classification information and first regression information corresponding to the training image.

Step S109: and inputting the characteristic information and the first regression information corresponding to the training image into a second head structure to obtain second classification information and second regression information corresponding to the training image.

Step S110: and inputting the feature information and the second regression information corresponding to the training image into the third head structure to obtain third classification information and third regression information corresponding to the training image as prediction detection information corresponding to the training image.

Step S111: and training the preset detection network by using the prediction detection information and the label detection information corresponding to the training image to obtain the organism detection model.

The first head structure to the third head structure are the same in structure, each head structure comprises an ROI Align layer, a classification branch and a regression branch, each classification branch comprises two full-connection layers, and each regression branch comprises two convolution layers and one full-connection layer.
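The chaining of the three head structures (steps S108 to S110: each head's regression output feeds the next head) can be illustrated with a deliberately simplified toy model; the ROI Align layer and the FC/conv branches are collapsed into trivial arithmetic stand-ins, so only the data flow is faithful:

```python
def make_head(delta):
    # Toy stand-in for one head structure. A real head applies ROI Align,
    # a classification branch (two FC layers) and a regression branch
    # (two conv layers plus one FC layer); here regression just shifts
    # the box by `delta` and classification just scores the feature.
    def head(feature, box):
        refined = [c + delta for c in box]   # regression output
        score = sum(feature) * 0.1           # classification output
        return score, refined
    return head

def cascade_detect(feature, roi):
    # Chain the three identical head structures: each head refines the
    # boxes produced by the previous one.
    head1, head2, head3 = make_head(1), make_head(1), make_head(1)
    _, b1 = head1(feature, roi)
    _, b2 = head2(feature, b1)
    score, b3 = head3(feature, b2)
    return score, b3
```

The third head's outputs correspond to the third classification and regression information used as the prediction detection information.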

Referring to fig. 4, in some embodiments, each residual unit branch in SE-ResNext50 consists of three consecutive convolutional layers followed by one SE module: the first layer has 256 input channels, 4 output channels, and a 1 × 1 convolution kernel; the second layer has 4 input channels, a 3 × 3 convolution kernel, and 4 output channels; the third layer has 4 input channels, a 1 × 1 convolution kernel, and 256 output channels; the output finally passes through the SE module. Each residual unit is formed by 32 such branch structures in parallel, and their outputs are finally added channel-wise.
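The SE module's channel recalibration can be sketched as follows; the weight matrices here are illustrative placeholders, while a real SE module applies global average pooling, an FC bottleneck with a reduction ratio, ReLU, a second FC layer, and a per-channel sigmoid gate:

```python
import math

def se_module(channel_means, w1, w2):
    # channel_means: the "squeeze" result, one global-average value per
    # channel. w1/w2: illustrative FC weights (lists of input columns).
    hidden = [max(0.0, sum(m * w for m, w in zip(channel_means, col)))
              for col in w1]                     # FC + ReLU (excitation, part 1)
    gates = [1.0 / (1.0 + math.exp(-sum(h * w for h, w in zip(hidden, col))))
             for col in w2]                      # FC + sigmoid (excitation, part 2)
    return [m * g for m, g in zip(channel_means, gates)]  # rescale channels
```

Each of the 32 parallel branches ends in such a gate, so informative channels are amplified before the channel-wise addition.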

Therefore, by utilizing the SE-ResNext50 network in combination with the RPN network and the first head structure to the third head structure, a preset detection network is constructed, so that the network can be made deeper, the problems of gradient dispersion and accuracy degradation in a deep network are alleviated, and both speed and accuracy remain under control; a training data set is acquired, feature information corresponding to each training image in the training data set is obtained, ROI information corresponding to the training image is obtained based on the feature information corresponding to the training image, first classification information and first regression information corresponding to the training image are obtained based on the feature information and the ROI information corresponding to the training image, second classification information and second regression information corresponding to the training image are obtained based on the feature information and the first regression information corresponding to the training image, third classification information and third regression information corresponding to the training image are obtained, as prediction detection information corresponding to the training image, based on the feature information and the second regression information corresponding to the training image, and the preset detection network is trained based on the prediction detection information and the labeled detection information corresponding to the training image to obtain the organism detection model; the obtained organism detection model has high accuracy, and detection precision is further improved by using the organism detection model to perform organism detection; the first head structure to the third head structure are identical in structure, so that detection is faster.

Referring to fig. 5, in some embodiments, the SE-ResNext50 network includes second to fifth stage modules, and the step S106 may include steps S201 to S205.

Step S201: and inputting the training image into the second stage module to obtain a second feature map corresponding to the training image.

Step S202: and inputting the second feature map corresponding to the training image into a third stage module to obtain a third feature map corresponding to the training image.

Step S203: and inputting the third feature map corresponding to the training image into a fourth stage module to obtain a fourth feature map corresponding to the training image.

Step S204: and inputting the fourth feature map corresponding to the training image into the fifth stage module to obtain a fifth feature map corresponding to the training image.

Step S205: and constructing a feature pyramid by using the second feature map to the fifth feature map corresponding to the training image to obtain a plurality of feature maps which are arranged according to the feature map size sequence and correspond to the training image and serve as feature information corresponding to the training image.

Therefore, the characteristic pyramid is constructed, so that the speed of acquiring the characteristic information is higher, and the acquired characteristic information is more accurate.
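Steps S201 to S205 amount to a top-down feature pyramid over the stage outputs; a one-dimensional toy version (nearest-neighbor upsampling on plain lists, with the lateral 1 × 1 projections omitted) conveys the structure:

```python
def build_feature_pyramid(c2, c3, c4, c5):
    # c2..c5: the second to fifth stage feature maps, largest to smallest.
    # Each coarser pyramid level is upsampled and added top-down; the
    # result is the list of maps ordered by feature-map size.
    def upsample(xs):                  # nearest-neighbor, factor 2
        return [v for v in xs for _ in (0, 1)]
    p5 = c5
    p4 = [a + b for a, b in zip(c4, upsample(p5))]
    p3 = [a + b for a, b in zip(c3, upsample(p4))]
    p2 = [a + b for a, b in zip(c2, upsample(p3))]
    return [p2, p3, p4, p5]            # largest to smallest
```

The returned list corresponds to the "plurality of feature maps arranged according to feature map size" used as the feature information.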

Referring to fig. 6, in some specific embodiments, each training data in the training data set further includes labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, where the labeled re-identification information corresponding to the training image and the retrieval image is used to indicate whether the training image and the retrieval image contain the same living body, and the method may further include steps S111 to S116.

Step S111: and constructing a preset re-identification network by using the ResNet50 network, the pooling layer and the BNNeck network.

Step S112: and obtaining the graph residual error characteristics corresponding to the training images by using the training images and the corresponding prediction detection information thereof and the ResNet50 network.

Step S113: and inputting the graph residual error characteristics corresponding to the training images into the pooling layer to obtain pooling characteristics corresponding to the training images.

Step S114: inputting the pooling features corresponding to the training images into the BNNeck network to obtain the normalization features corresponding to the training images.

Step S115: and acquiring the prediction re-identification information corresponding to the training images and the retrieval images in the retrieval image set respectively by utilizing the normalization characteristics corresponding to the training images.

Step S116: and training the preset re-recognition network by using the predicted re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively, and the labeled re-recognition information corresponding to the training images and the retrieval images in the retrieval image set respectively to obtain the biological weight recognition model.

Therefore, a preset re-recognition network is constructed based on the ResNet50 network, the pooling layer and the BNNeck network; graph residual error features corresponding to a training image are obtained based on the training image, its corresponding prediction detection information, and the ResNet50 network; pooling features corresponding to the training image are obtained based on the graph residual error features corresponding to the training image; normalized features corresponding to the training image are obtained based on the pooling features corresponding to the training image; prediction re-identification information corresponding to the training image and each retrieval image in the retrieval image set is obtained based on the normalized features corresponding to the training image; and the preset re-recognition network is trained based on the prediction re-identification information and the labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set. The calculation process of the re-recognition model obtained in this way is faster, the result converges more easily, and the accuracy is higher when the biological weight recognition model obtained in this way is used for organism recognition.
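The pooled-then-normalized feature flow of steps S113 and S114 reduces to a few lines; the backbone output is assumed to be precomputed, and the normalization layer's learned affine parameters are omitted for brevity:

```python
def global_avg_pool(feature_map):
    # feature_map: one list of spatial activations per channel.
    return [sum(ch) / len(ch) for ch in feature_map]

def batch_norm(feats, mean, var, eps=1e-5):
    # BNNeck normalization layer (affine scale/shift omitted).
    return [(f - m) / (v + eps) ** 0.5 for f, m, v in zip(feats, mean, var)]

def reid_forward(feature_map, running_mean, running_var):
    # The pooled feature drives the first and second loss values; the
    # normalized feature is what the fully connected layer consumes.
    pooled = global_avg_pool(feature_map)
    normalized = batch_norm(pooled, running_mean, running_var)
    return pooled, normalized
```

Keeping the pre-normalization feature for the metric losses and the post-normalization feature for classification is the defining trait of the BNNeck design.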

Referring to fig. 7, in some embodiments, the method may further include steps S117 to S118.

Step S117: and respectively performing data augmentation on each training image by using the prediction detection information corresponding to each training image to obtain augmented images corresponding to each training image, and storing the augmented images into the training data set to be used as new training images.

Step S118: and reordering all the retrieval images in the retrieval image set based on the augmentation images corresponding to the training images to obtain ordering information corresponding to the retrieval images in the retrieval image set.

In a specific application scenario, a rectangular region is randomly selected in the training image, and the original pixels of the training image are erased at a random position with random values. The specific method is as follows: input a training image I, an erasing probability p, an erased-area proportion range from sl to sh, and an aspect-ratio range from r1 to r2. First, determine whether the image needs to be erased: draw a random number p1; if p1 > p the image is left unprocessed, otherwise it is erased. From the length and width H and W of the input training image I, the image area S = W × H is obtained. The erased area is then Se = Rand(sl, sh) × S, and the length He and width We of the erased region are obtained according to the following formulas:

He = √(Se × re), We = √(Se / re)

where re = Rand(r1, r2) denotes the length-width ratio of the erased region; the meanings of the remaining symbols are as described above in this application scenario and are not repeated here.
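A sketch of the erasing procedure under the formulas above; the default parameter values follow common Random Erasing settings and are assumptions, not values fixed by this application:

```python
import random

def random_erase(image, p=0.5, sl=0.02, sh=0.4, r1=0.3, r2=3.33, rng=random):
    # image: 2-D list (H rows, W columns). With probability p, a rectangle
    # of area Se = Rand(sl, sh) * S and aspect ratio re = Rand(r1, r2)
    # is erased with random pixel values.
    H, W = len(image), len(image[0])
    if rng.random() > p:                         # p1 > p: leave unprocessed
        return image
    S = H * W
    for _ in range(100):                         # retry until the rectangle fits
        Se = rng.uniform(sl, sh) * S
        re = rng.uniform(r1, r2)
        He = int(round((Se * re) ** 0.5))        # He = sqrt(Se * re)
        We = int(round((Se / re) ** 0.5))        # We = sqrt(Se / re)
        if 0 < He < H and 0 < We < W:
            y = rng.randrange(0, H - He)
            x = rng.randrange(0, W - We)
            for i in range(y, y + He):
                for j in range(x, x + We):
                    image[i][j] = rng.random()   # erase with a random value
            return image
    return image
```

The erased copies are what get stored back into the training data set as new (augmented) training images.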

Referring to fig. 8, the step S115 includes a step S301.

Step S301: and acquiring prediction re-identification information corresponding to the training images and the retrieval images in the retrieval image set respectively based on the ranking information corresponding to the retrieval images in the retrieval image set by using the normalization features corresponding to the training images.

Therefore, data augmentation is performed on the training images, and the augmented images corresponding to the training images are obtained and stored in the training data set as new training images; adding occlusion of different degrees to the images reduces the risk of model over-fitting while providing a certain robustness to occlusion; and all the retrieval images in the retrieval image set are reordered based on the augmented images corresponding to the training images to obtain ranking information corresponding to each retrieval image in the retrieval image set, and the prediction re-identification information corresponding to the training images and each retrieval image in the retrieval image set is acquired by utilizing the normalized features corresponding to the training images, so as to improve the recognition effect.

Referring to fig. 9 and 10, in some embodiments, the BNNeck network includes a normalization layer and a fully connected layer, and the step S114 may include steps S401 to S402.

Step S401: and inputting the pooling features corresponding to the training images into the normalization layer to obtain the normalization features corresponding to the training images, wherein the pooling features corresponding to the training images are used for calculating a first loss value and a second loss value corresponding to the training images.

Step S402: and inputting the normalized features corresponding to the training images into the fully connected layer to obtain fully connected features corresponding to the training images, wherein the fully connected features corresponding to the training images are used for calculating third loss values corresponding to the training images.

And the first loss value, the second loss value and the third loss value corresponding to each training image are used for training the preset re-recognition network.
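The loss wiring above — pooled features used for the first and second loss values, fully connected outputs for the third — can be sketched with NumPy; the shapes, the bias-free classifier, and the batch-level statistics are illustrative assumptions:

```python
import numpy as np

def bnneck_forward(pooled, gamma, beta, W, eps=1e-5):
    """Sketch of the BNNeck split: the pooled features feed the first and
    second loss values, the batch-normalized features are the ones compared
    at retrieval time, and the fully connected (bias-free) output feeds the
    third, classification-style loss value."""
    mu = pooled.mean(axis=0)
    var = pooled.var(axis=0)
    normalized = gamma * (pooled - mu) / np.sqrt(var + eps) + beta
    logits = normalized @ W  # fully connected layer without bias
    return normalized, logits

rng = np.random.default_rng(0)
pooled = rng.normal(size=(4, 8))       # batch of 4 pooled features, dim 8
gamma, beta = np.ones(8), np.zeros(8)  # batch-norm scale and shift
W = rng.normal(size=(8, 3))            # classifier weights for 3 identities
normalized, logits = bnneck_forward(pooled, gamma, beta, W)
```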

In a specific application scenario, to improve the clustering of positive samples, a center loss is added:

τ_c = (1/2) Σ_{j=1}^{B} ‖f_{t_j} − c_{y_j}‖₂²

where τ_c denotes the center loss value, B denotes the number of samples in the training data set, f_{t_j} denotes the pooled feature corresponding to the j-th training datum, and c_{y_j} denotes the mean feature of all training data belonging to the class of the j-th training datum.
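The center loss above can be sketched in plain Python; the dictionary of per-class centers and the toy batch are illustrative:

```python
def center_loss(features, labels, centers):
    """Center loss: half the sum over the batch of squared L2 distances
    between each pooled feature f_{t_j} and the mean feature c_{y_j} of
    its class (`centers` maps class label -> class-mean feature)."""
    total = 0.0
    for f, y in zip(features, labels):
        total += sum((fi - ci) ** 2 for fi, ci in zip(f, centers[y]))
    return 0.5 * total

# toy batch: the first feature is squared distance 1.0 from its center,
# the second sits exactly on its center
feats = [[1.0, 0.0], [0.0, 1.0]]
labels = [0, 1]
centers = {0: [0.0, 0.0], 1: [0.0, 1.0]}
loss = center_loss(feats, labels, centers)  # 0.5 * (1.0 + 0.0) = 0.5
```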

Therefore, the pooling features corresponding to the training images are input into the BNNeck network to obtain the normalization features corresponding to the training images, which improves the clustering of the training image features, so that the preset re-recognition network trained on the training images achieves higher accuracy.

In some embodiments, the fully connected features corresponding to the training images are further used for calculating label smoothness values corresponding to the training images; and the first loss value, the second loss value, the third loss value and the label smoothness value corresponding to each training image are used for training the preset re-recognition network.

With continued reference to FIG. 10, in one particular application scenario, the label smoothness value is calculated using the following equation:

q_i = 1 − (K − 1)ε/K if i = y, and q_i = ε/K otherwise

where P_i denotes the predicted probability distribution of the i-th class, K denotes the total number of classes in the multi-class classification, ε is a small hyper-parameter, and y denotes the true label. The smoothed targets q_i are combined with the predictions P_i to compute the classification loss, −Σ_i q_i log P_i.

Therefore, adding the label smoothing training strategy increases the generalization ability of the re-recognition model, prevents overfitting, and keeps the training of the deep re-recognition model stable.

Referring to fig. 11, in some embodiments, step S116 may include step S501.

Step S501: training the preset re-recognition network with a preset learning rate strategy, using the prediction re-identification information corresponding to the training images and each retrieval image in the retrieval image set and the labeled re-identification information corresponding to the training images and each retrieval image in the retrieval image set, to obtain the biological weight recognition model.

The preset learning rate strategy indicates the learning rate for each epoch: over the 1st to N1-th epochs the learning rate gradually increases to a first preset learning rate; the (N1+1)-th to N2-th epochs use a second preset learning rate; the (N2+1)-th to N3-th epochs use a third preset learning rate; and the (N3+1)-th to N4-th epochs use a fourth preset learning rate, where the first preset learning rate is smaller than the second preset learning rate, the second is larger than the third, and the third is larger than the fourth. Training the preset re-recognition network with this preset learning rate strategy lets the biological weight recognition model warm up slowly at the start of training, which helps avoid early overfitting and keeps the training of the deep model stable.

In a specific application scenario, the learning rate may be calculated using the following piecewise formula, where t is the epoch index, N1 to N4 are the epoch milestones, and lr1 to lr4 are the four preset learning rates:

lr(t) = lr1 · t/N1 for 1 ≤ t ≤ N1; lr(t) = lr2 for N1 < t ≤ N2; lr(t) = lr3 for N2 < t ≤ N3; lr(t) = lr4 for N3 < t ≤ N4.
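The warm-up-then-decay schedule can be sketched as follows; the milestone epochs and concrete rates in the example are illustrative, not values from the source:

```python
def lr_for_epoch(t, milestones, rates):
    """Preset learning-rate strategy: milestones = (N1, N2, N3, N4) and
    rates = (lr1, lr2, lr3, lr4), with lr1 < lr2 and lr2 > lr3 > lr4.
    Epochs 1..N1 warm up linearly toward lr1; each later interval uses
    its constant preset rate."""
    n1, n2, n3, _ = milestones
    lr1, lr2, lr3, lr4 = rates
    if t <= n1:
        return lr1 * t / n1  # linear warm-up
    if t <= n2:
        return lr2
    if t <= n3:
        return lr3
    return lr4

# illustrative schedule: 10-epoch warm-up, then three constant plateaus
schedule = [lr_for_epoch(t, (10, 40, 70, 120), (1e-4, 3.5e-4, 3.5e-5, 3.5e-6))
            for t in range(1, 121)]
```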

referring to fig. 12, in some embodiments, the method may further include steps S119 to S122.

Step S119: and acquiring target category information.

Step S120: determining at least one target retrieval image from the set of retrieval images based on the target category information.

Step S121: and detecting whether a target organism corresponding to the target category information exists in the query image or not based on the prediction re-identification information corresponding to the query image and each target retrieval image.

Step S122: and when the target organism exists in the query image, acquiring the current state information and/or the spatial distribution trend information of the target organism by using the query image.

Therefore, at least one target retrieval image is determined based on the target category information, and whether a target organism corresponding to the target category information exists in the query image is detected based on the prediction re-identification information corresponding to the query image and each target retrieval image. When the target organism exists in the query image, the current state information and/or the spatial distribution trend information of the target organism is acquired by using the query image, which improves the efficiency and accuracy of organism identification.
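Steps S119 to S121 can be sketched as a category filter followed by re-identification checks; `reid`, the image names, and the category labels below are illustrative stand-ins for the trained model and the retrieval set:

```python
def find_target_organism(query_image, retrieval_set, target_category, reid):
    """Sketch of steps S119-S121: keep only retrieval images of the target
    category, then declare the target organism present if the re-id model
    matches the query against any of them. `reid(query, img)` stands in
    for the trained biological weight recognition model."""
    targets = [img for img, cat in retrieval_set if cat == target_category]
    return any(reid(query_image, img) for img in targets), targets

# toy retrieval set and a stub re-id model that matches only one pair
retrieval_set = [("img_a", "crested ibis"),
                 ("img_b", "panda"),
                 ("img_c", "crested ibis")]
same_pairs = {("query", "img_c")}
reid = lambda q, g: (q, g) in same_pairs
present, targets = find_target_organism("query", retrieval_set,
                                        "crested ibis", reid)
```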

Referring to fig. 13, the present application provides a biological weight recognition apparatus. Its specific implementation is consistent with the implementation and technical effects described in the embodiments of the biological weight recognition method above, and repeated details are omitted.

The device comprises: a query image module 101, configured to acquire a query image; a prediction detection module 102, configured to input the query image into a biological detection model to obtain the prediction detection information corresponding to the query image; and an image re-identification module 103, configured to obtain, by using the query image, its corresponding prediction detection information, and the biological weight identification model, the prediction re-identification information corresponding to the query image and each retrieval image in a retrieval image set, where the prediction re-identification information corresponding to the query image and a retrieval image is used to indicate whether the query image and that retrieval image contain the same organism.

Referring to fig. 14, in some embodiments, the biological weight recognition device may further include: a preset detection module 104, configured to construct a preset detection network using the SE-ResNeXt50 network, the RPN network, and the first to third head structures; a training data module 105, configured to obtain a training data set, where each training datum in the training data set includes a training image and the label detection information corresponding to the training image; a feature information module 106, configured to, for each training image in the training data set, input the training image into the SE-ResNeXt50 network to obtain the feature information corresponding to the training image; an ROI information module 107, configured to input the training image into the RPN network to obtain the ROI information corresponding to the training image; a first classification module 108, configured to input the feature information and ROI information corresponding to the training image into the first head structure to obtain the first classification information and first regression information corresponding to the training image; a second classification module 109, configured to input the feature information and the first regression information corresponding to the training image into the second head structure to obtain the second classification information and second regression information corresponding to the training image; a detection information module 110, configured to input the feature information and second regression information corresponding to the training image into the third head structure to obtain the third classification information and third regression information corresponding to the training image as the prediction detection information corresponding to the training image; and a detection model module 111, configured to train the preset detection network by using the prediction detection information and the label detection information corresponding to the training image, to obtain the biological detection model.
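The three-stage head arrangement above, in which each head's regressed boxes feed the next head as refined proposals (cascade-style detection), can be sketched as follows; the toy heads and box representation are illustrative:

```python
def cascade_heads(features, rois, heads):
    """Sketch of the cascaded head structures: each head classifies the
    current boxes and regresses refined boxes, which the next head takes
    as its input proposals. `heads` is a list of callables
    (features, boxes) -> (classification, boxes)."""
    cls = None
    boxes = rois
    for head in heads:
        cls, boxes = head(features, boxes)
    return cls, boxes  # third-stage outputs = predicted detection info

# toy heads: each shifts every box right by 1 and reports its stage index
make_head = lambda stage: (lambda feats, boxes:
                           (stage, [(x + 1, y) for x, y in boxes]))
cls, boxes = cascade_heads("feature_maps", [(0, 0)],
                           [make_head(1), make_head(2), make_head(3)])
```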

Referring to fig. 15, in some embodiments, the SE-ResNeXt50 network may include second to fifth stage modules, and the feature information module 106 includes: a second feature map unit 201, configured to input the training image into the second stage module to obtain a second feature map corresponding to the training image; a third feature map unit 202, configured to input the second feature map corresponding to the training image into the third stage module to obtain a third feature map corresponding to the training image; a fourth feature map unit 203, configured to input the third feature map corresponding to the training image into the fourth stage module to obtain a fourth feature map corresponding to the training image; a fifth feature map unit 204, configured to input the fourth feature map corresponding to the training image into the fifth stage module to obtain a fifth feature map corresponding to the training image; and a feature information unit 205, configured to construct a feature pyramid using the second to fifth feature maps corresponding to the training image, and to obtain a plurality of feature maps corresponding to the training image, arranged in order of feature map size, as the feature information corresponding to the training image.

Referring to fig. 16, in some specific embodiments, each training datum in the training data set may further include the labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, where the labeled re-identification information corresponding to the training image and a retrieval image is used to indicate whether the training image and that retrieval image contain the same organism, and the biological weight recognition apparatus may further include: a preset network module 111, configured to construct a preset re-identification network using a ResNet50 network, a pooling layer, and a BNNeck network; an image residual feature module 112, configured to obtain the image residual features corresponding to the training image by using the training image, its corresponding prediction detection information, and the ResNet50 network; a pooling feature module 113, configured to input the image residual features corresponding to the training image into the pooling layer to obtain the pooling features corresponding to the training image; a normalized feature module 114, configured to input the pooled features corresponding to the training image into the BNNeck network to obtain the normalized features corresponding to the training image; a prediction re-identification module 115, configured to obtain, by using the normalization features corresponding to the training image, the prediction re-identification information corresponding to the training image and each retrieval image in the retrieval image set; and a re-recognition model module 116, configured to train the preset re-identification network by using the predicted re-identification information corresponding to the training image and each retrieval image in the retrieval image set and the labeled re-identification information corresponding to the training image and each retrieval image in the retrieval image set, to obtain the biological weight recognition model.

Referring to fig. 17, in some embodiments, the biological weight recognition device may further include: an augmented image module 117, configured to perform data augmentation on each training image respectively by using prediction detection information corresponding to each training image, to obtain an augmented image corresponding to each training image, and store the augmented image in the training data set as a new training image; a ranking information module 118, configured to reorder all the search images in the search image set based on the augmented image corresponding to each training image, to obtain ranking information corresponding to each search image in the search image set; a prediction re-identification module 115, configured to obtain, by using the normalized features corresponding to the training images, prediction re-identification information corresponding to the training images and each of the search images in the search image set, where the prediction re-identification module includes: a prediction re-identification unit 301, configured to obtain, based on ranking information corresponding to each of the search images in the search image set, prediction re-identification information corresponding to each of the search images in the search image set and the training image, respectively, by using a normalized feature corresponding to the training image.

Referring to fig. 18, in some embodiments, the BNNeck network may include a normalization layer and a fully connected layer, and the normalization feature module 114 includes: a normalized feature unit 401, configured to input the pooled features corresponding to the training image into the normalization layer to obtain the normalized features corresponding to the training image, where the pooled features corresponding to the training image are used to calculate the first loss value and second loss value corresponding to the training image; and a fully connected feature unit 402, configured to input the normalized features corresponding to the training image into the fully connected layer to obtain the fully connected features corresponding to the training image, where the fully connected features corresponding to the training image are used to calculate the third loss value corresponding to the training image. The first loss value, second loss value, and third loss value corresponding to each training image are used to train the preset re-recognition network.

In some embodiments, the fully connected feature unit is further configured to calculate a label smoothness value corresponding to the training image; and the first loss value, the second loss value, the third loss value and the label smoothness value corresponding to each training image are used for training the preset re-recognition network.

Referring to fig. 19, in some embodiments, the re-recognition model module 116 may include a re-recognition model unit 501, configured to train the preset re-recognition network with a preset learning rate strategy. The preset learning rate strategy indicates the learning rate for each epoch: over the 1st to N1-th epochs the learning rate gradually increases to a first preset learning rate; the (N1+1)-th to N2-th epochs use a second preset learning rate; the (N2+1)-th to N3-th epochs use a third preset learning rate; and the (N3+1)-th to N4-th epochs use a fourth preset learning rate, where the first preset learning rate is smaller than the second preset learning rate, the second is larger than the third, and the third is larger than the fourth.

Referring to fig. 20, in some embodiments, the biological weight recognition device may further include: a target category module 119, configured to acquire target category information; a target retrieval module 120, configured to determine at least one target retrieval image from the retrieval image set based on the target category information; a query detection module 121, configured to detect whether a target organism corresponding to the target category information exists in the query image based on the prediction re-identification information corresponding to the query image and each target retrieval image; and a state distribution module 122, configured to, when the target organism exists in the query image, obtain the current state information and/or spatial distribution trend information of the target organism by using the query image.

Referring to fig. 21, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.

The memory 210 may include readable media in the form of volatile memory, such as random access memory (RAM) 211 and/or cache memory 212, and may further include read-only memory (ROM) 213.

The memory 210 further stores a computer program, and the computer program can be executed by the processor 220, so that the processor 220 executes the steps of the biological weight recognition method in the embodiment of the present application, and the specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the embodiments of the biological weight recognition method, and some details are not repeated.

Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.

Accordingly, the processor 220 may execute the computer programs described above, and may execute the utility 214.

Bus 230 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a processor or local bus using any of a variety of bus architectures.

The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.

The embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and when the computer program is executed, the steps of the biological weight recognition method in the embodiments of the present application are implemented, and a specific implementation manner of the steps is consistent with the implementation manner and the achieved technical effect described in the embodiments of the biological weight recognition method, and some contents are not repeated.

The computer-readable storage medium is used to store a computer program which, when executed, implements the steps of the biological weight recognition method in the embodiments of the present application, or to store the biological weight recognition model used by the biological weight recognition method in the embodiments of the present application.

Fig. 22 shows a program product 300 provided in this embodiment for implementing the above-described biological weight recognition method, which may employ a portable compact disc read-only memory (CD-ROM), includes program code, and may be run on a terminal device such as a personal computer. However, the program product 300 of the present application is not so limited; in the present application, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).

The biological weight recognition method of the present application is designed on the basis of deep-learning object detection, and is therefore more stable and achieves higher accuracy. It thus offers high accuracy and good application prospects for monitoring the geospatial distribution trends of organisms and tracking populations.

While the present application is described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments, but are also intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
