Image semantic description method and device, computing equipment and computer storage medium

Document No. 830437, published 2021-03-30

Note: This technology, "Image semantic description method and device, computing equipment and computer storage medium", was created by Wang Weihao on 2019-09-29. Its main content is as follows: The embodiments of the invention relate to the technical field of data processing and disclose an image semantic description method, apparatus, computing device and computer storage medium. The method comprises: extracting visual features of an image to be described through an improved Inception-v3 model, where the improved Inception-v3 model comprises a plurality of nested convolutional layers connected in parallel, with the last of the nested convolutional layers connected in a residual structure; and inputting the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described. The semantic description model is obtained by training a double-layer nested LSTM on multiple sets of training samples, each set comprising the visual features of a sample image and the semantic description word vectors corresponding to that sample image. In this way, the embodiments of the invention realize semantic description of images.

1. An image semantic description method, characterized by comprising:

extracting visual features of an image to be described through an improved Inception-v3 model, wherein the improved Inception-v3 model comprises a plurality of nested convolutional layers, the plurality of nested convolutional layers are connected in parallel, and the last nested convolutional layer of the plurality of nested convolutional layers is connected in a residual structure;

inputting the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described; wherein the semantic description model is obtained by training a double-layer nested LSTM with multiple sets of training samples, and each of the multiple sets of training samples comprises: visual features of a sample image and semantic description word vectors corresponding to the sample image.

2. The method according to claim 1, wherein before extracting the visual features of the image to be described through the improved Inception-v3 model, the method further comprises:

obtaining the plurality of groups of training samples;

and training the double-layer nested LSTM according to the multiple groups of training samples to obtain the semantic description model.

3. The method of claim 2, wherein obtaining the plurality of sets of training samples comprises:

acquiring a sample image and semantic description corresponding to the sample image;

extracting visual features of the sample image through the improved Inception-v3 model;

extracting a word vector corresponding to the semantic description according to a word vector table, wherein the word vector table is obtained by training a word2vec model;

and taking the visual features and the word vectors corresponding to the visual features as a group of training samples to obtain a plurality of groups of training samples.

4. The method of claim 3, wherein extracting the word vector corresponding to the semantic description through the word2vec model comprises:

segmenting the semantic description into words and encoding each segmented word with one-hot encoding;

and converting the code into a word vector through a word vector table, wherein the word vector table is obtained by training a word2vec model.

5. The method of claim 2, wherein training a double-layer nested LSTM from the plurality of sets of training samples to obtain the semantic description model comprises:

generating a time sequence according to the word vectors in each group of training samples;

inputting the visual features in each set of training samples into a first LSTM layer;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each group of training samples, and a first external state of each group of training samples is output;

inputting the first external state into a second LSTM layer, so that the second LSTM layer continuously learns the dependency relationship between the visual features and the word vectors in each training sample set according to the first external state to output a second external state of each training sample set;

weighting the second external state through a full connection layer to obtain a weighting result of each group of training samples;

classifying the weighting result through a normalized exponential function (softmax) classifier and outputting the class with the maximum probability, to obtain an output result corresponding to each set of training samples; and calculating a loss function value according to the output result;

updating the weight of the double-layer nested LSTM according to the loss function value until the loss function value is minimum;

and taking the double-layer nested LSTM model with the minimum loss function value as the semantic description model.

6. The method of claim 5, wherein the double-layer nested LSTM comprises an Attention layer;

prior to inputting the visual features in each set of training samples into the first LSTM layer, the method further comprises:

inputting the visual features into the Attention layer so that the Attention layer determines the proportion of each visual feature among all the visual features;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples to output a first external state of each set of training samples, including:

and sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples according to the proportions, to output a first external state of each set of training samples.

7. The method of claim 5, wherein the first LSTM layer comprises first and second LSTM units;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples to output a first external state of each set of training samples, including:

learning, by the first LSTM unit, a dependency between a visual feature and a word vector in each set of training samples to output a first state;

learning, by the second LSTM unit, a dependency between the first state of each set of training samples and the word vector to output a second state of each set of training samples; and combining the first state and the second state to obtain a first external state of each group of training samples.

8. The method of claim 7, wherein the first LSTM unit includes a forget gate, an input gate, and an output gate;

the learning, by the first LSTM unit, of a dependency between a visual feature and a word vector in each set of training samples to output a first state of each set of training samples comprises: learning the dependency relationship between the visual features and the word vectors in each set of training samples according to the following formulas to output the first state of each set of training samples:

f_t = σ(W_f x_t + U_f h_{t-1} + b_f)

i_t = σ(W_i x_t + U_i h_{t-1} + b_i)

o_t = σ(W_o x_t + U_o h_{t-1} + b_o)

h_t = o_t ⊙ tanh(c_t)

where f_t denotes the forget gate, i_t the input gate, o_t the output gate, c_t the cell state, and h_t the first state; x_t is the input at time step t and h_{t-1} is the state at the previous time step; W_i and b_i denote the weight matrix and bias term of the input gate, W_f and b_f the weight matrix and bias term of the forget gate, and W_o and b_o the weight matrix and bias term of the output gate. The gate activation function is the sigmoid function σ, whose range is (0, 1); the output activation function is the tanh function; and ⊙ denotes the element-wise product of vectors.

9. An image semantic description apparatus, characterized by comprising:

the extraction module is used for extracting visual features of an image to be described through an improved Inception-v3 model, wherein the improved Inception-v3 model comprises a plurality of nested convolutional layers, the nested convolutional layers are connected in parallel, and the last nested convolutional layer of the nested convolutional layers is connected in a residual structure;

the input module is used for inputting the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described; wherein the semantic description model is obtained by training a double-layer nested LSTM with multiple sets of training samples, and each of the multiple sets of training samples comprises: visual features of a sample image and semantic description word vectors corresponding to the sample image.

10. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the image semantic description method according to any one of claims 1-8.

Technical Field

The embodiment of the invention relates to the technical field of image processing, in particular to an image semantic description method, an image semantic description device, computing equipment and a computer storage medium.

Background

Semantic description of images is of great significance for image retrieval, image processing and the like. Traditional image semantic description methods fall into template-based and search-based methods. A template-based method first extracts regional features of a picture, such as object categories and attributes, using a regional feature extraction scheme, and then fills the extracted features into the slots of a preset sentence template, thereby completing the description of the image. The drawback of this method is that the generated sentences are monotonous in sentence pattern and lose a large amount of semantic information.

A search-based method applies a similarity algorithm between the extracted features and the images in a pre-built search image library to find the pictures that match; since these library images have been semantically described in advance, the matched description only needs slight fine-tuning before it can be output. This method produces good semantic descriptions in fixed or similar scenes, but it depends heavily on the constructed search image library and has poor robustness.

Disclosure of Invention

In view of the above problems, embodiments of the present invention provide an image semantic description method, apparatus, computing device and computer storage medium, which overcome or at least partially solve the above problems.

According to an aspect of the embodiments of the present invention, there is provided an image semantic description method, including:

extracting visual features of an image to be described through an improved Inception-v3 model, wherein the improved Inception-v3 model comprises a plurality of nested convolutional layers, the plurality of nested convolutional layers are connected in parallel, and the last nested convolutional layer of the plurality of nested convolutional layers is connected in a residual structure;

inputting the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described; wherein the semantic description model is obtained by training a double-layer nested LSTM with multiple sets of training samples, and each of the multiple sets of training samples comprises: visual features of a sample image and semantic description word vectors corresponding to the sample image.

Optionally, before extracting the visual features of the image to be described through the improved Inception-v3 model, the method further includes:

obtaining the plurality of groups of training samples;

and training the double-layer nested LSTM according to the multiple groups of training samples to obtain the semantic description model.

Optionally, the acquiring multiple sets of training samples includes:

acquiring a sample image and semantic description corresponding to the sample image;

extracting visual features of the sample image through the improved Inception-v3 model;

extracting a word vector corresponding to the semantic description according to a word vector table, wherein the word vector table is obtained by training a word2vec model;

and taking the visual features and the word vectors corresponding to the visual features as a group of training samples to obtain a plurality of groups of training samples.

Optionally, the extracting, by using the word2vec model, a word vector corresponding to the semantic description includes:

segmenting the semantic description into words and encoding each segmented word with one-hot encoding;

and converting the code into a word vector through a word vector table, wherein the word vector table is obtained by training a word2vec model.

Optionally, the training of the double-layer nested LSTM according to the multiple sets of training samples to obtain the semantic description model includes:

generating a time sequence according to the word vectors in each group of training samples;

inputting the visual features in each set of training samples into a first LSTM layer;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each group of training samples, and a first external state of each group of training samples is output;

inputting the first external state into a second LSTM layer, so that the second LSTM layer continuously learns the dependency relationship between the visual features and the word vectors in each training sample set according to the first external state to output a second external state of each training sample set;

weighting the second external state through a full connection layer to obtain a weighting result of each group of training samples;

classifying the weighting result through a normalized exponential function (softmax) classifier and outputting the class with the maximum probability, to obtain an output result corresponding to each set of training samples; and calculating a loss function value according to the output result;

updating the weight of the double-layer nested LSTM according to the loss function value until the loss function value is minimum;

and taking the double-layer nested LSTM model with the minimum loss function value as the semantic description model.

Optionally, the double-layer nested LSTM includes an Attention layer; before the visual features in each set of training samples are input into the first LSTM layer, the method further comprises:

inputting the visual features into the Attention layer so that the Attention layer determines the proportion of each visual feature among all the visual features;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples to output a first external state of each set of training samples, including:

and sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples according to the proportions, to output a first external state of each set of training samples.

Optionally, the first LSTM layer includes a first LSTM unit and a second LSTM unit; sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples to output a first external state of each set of training samples, including:

learning, by the first LSTM unit, a dependency between a visual feature and a word vector in each set of training samples to output a first state;

learning, by the second LSTM unit, a dependency between the first state of each set of training samples and the word vector to output a second state of each set of training samples; and combining the first state and the second state to obtain a first external state of each group of training samples.

Optionally, the first LSTM unit includes a forget gate, an input gate, and an output gate; and the learning, by the first LSTM unit, of a dependency between a visual feature and a word vector in each set of training samples to output a first state of each set of training samples comprises: learning the dependency relationship between the visual features and the word vectors in each set of training samples according to the following formulas to output the first state of each set of training samples:

f_t = σ(W_f x_t + U_f h_{t-1} + b_f)

i_t = σ(W_i x_t + U_i h_{t-1} + b_i)

o_t = σ(W_o x_t + U_o h_{t-1} + b_o)

h_t = o_t ⊙ tanh(c_t)

where f_t denotes the forget gate, i_t the input gate, o_t the output gate, c_t the cell state, and h_t the first state; x_t is the input at time step t and h_{t-1} is the state at the previous time step; W_i and b_i denote the weight matrix and bias term of the input gate, W_f and b_f the weight matrix and bias term of the forget gate, and W_o and b_o the weight matrix and bias term of the output gate. The gate activation function is the sigmoid function σ, whose range is (0, 1); the output activation function is the tanh function; and ⊙ denotes the element-wise product of vectors.

According to another aspect of the embodiments of the present invention, there is provided an image semantic description apparatus, including:

the extraction module is used for extracting visual features of an image to be described through an improved Inception-v3 model, wherein the improved Inception-v3 model comprises a plurality of nested convolutional layers, the nested convolutional layers are connected in parallel, and the last nested convolutional layer of the nested convolutional layers is connected in a residual structure;

the input module is used for inputting the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described; wherein the semantic description model is obtained by training a double-layer nested LSTM with multiple sets of training samples, and each of the multiple sets of training samples comprises: visual features of a sample image and semantic description word vectors corresponding to the sample image.

According to still another aspect of an embodiment of the present invention, there is provided a computing device including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image semantic description method.

According to a further aspect of the embodiments of the present invention, a computer storage medium is provided, where the computer storage medium stores at least one executable instruction, and the computer executable instruction may perform an operation corresponding to the image semantic description method in any of the above method embodiments.

According to the embodiments of the invention, the visual features of the image to be described are extracted through the improved Inception-v3 model, and the semantic description of the image to be described is obtained through the semantic description model, thereby realizing semantic description of the image. The improved Inception-v3 model further improves the existing Inception-v3 model, so that its feature extraction is optimized and the extracted visual features are more reliable; the semantic description model is trained on a large number of sample images and their corresponding semantic descriptions, so that it captures the correspondence between the visual features of sample images and semantic descriptions, making the semantic description produced for the image to be described more accurate and reliable.

The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the content of this description, and in order to make the above and other objects, features and advantages of the embodiments more comprehensible, the detailed description of the invention is set forth below.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

FIG. 1 is a flow chart illustrating a method for semantic description of an image according to a first embodiment of the present invention;

FIG. 2 is a flowchart illustrating a method for semantic description of an image according to a second embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a first LSTM unit in an image semantic description method according to a second embodiment of the present invention;

FIG. 4 is a functional block diagram of an image semantic description apparatus according to a third embodiment of the present invention;

fig. 5 is a schematic structural diagram of a computing device according to a fourth embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

Fig. 1 shows a flowchart of an image semantic description method according to a first embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:

Step 110: the visual features of the image to be described are extracted through an improved Inception-v3 model, where the improved Inception-v3 model comprises a plurality of nested convolutional layers, the nested convolutional layers are connected in parallel, and the last nested convolutional layer of the nested convolutional layers is connected in a residual structure.

The Inception-v3 model is a deep convolutional neural network architecture. It differs from Inception-v1 and Inception-v2 in that it can factorize convolution kernels; for example, a 3×3 kernel can be factorized into a 1×3 kernel and a 3×1 kernel, and extracting picture features with the factorized kernels improves computation speed. In addition, the Inception-v3 model can split one convolutional layer into two convolutional layers, forming a nested convolutional layer, which further increases the depth of the deep convolutional neural network and the nonlinearity of the network. The embodiment of the invention further improves the Inception-v3 model: the serial structure among the multiple nested convolutional layers of the existing Inception-v3 model is changed into a parallel one, so that picture features can be extracted by scanning the picture with convolution kernels of several sizes, rather than with only a fixed kernel as in the serial arrangement. Moreover, in the existing Inception-v3 model the nested convolutional layers are directly connected, that is, the output of the previous nested convolutional layer serves as the input of the next. The embodiment of the invention changes the last two nested convolutional layers of the existing Inception-v3 model into a residual structure: the output of the previous layer not only serves as the input of the next layer but is also accumulated with that layer's input when entering the final linear layer, to serve as the input of the linear layer, thereby avoiding the vanishing-gradient problem caused by the excessive depth of the Inception-v3 model.
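By way of illustration only, the following PyTorch sketch shows how such a parallel arrangement of nested convolutional layers with a residual connection might be built. The branch count, kernel sizes, channel counts, and fusion by summation are assumptions made for the sketch, not details taken from the patent.

```python
# A minimal sketch of the "improved Inception-v3" block described above.
# Assumptions: three parallel nested branches, equal channel counts, and
# fusion by summation; only the last branch carries the residual connection.
import torch
import torch.nn as nn


class NestedConvBranch(nn.Module):
    """One nested convolutional layer: a 1 x k convolution followed by k x 1."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, (1, k), padding=(0, pad)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (k, 1), padding=(pad, 0)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)


class ImprovedInceptionBlock(nn.Module):
    """Parallel nested branches; the last branch uses a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = NestedConvBranch(channels, k=3)
        self.branch5 = NestedConvBranch(channels, k=5)
        self.branch7 = NestedConvBranch(channels, k=7)

    def forward(self, x):
        b3 = self.branch3(x)
        b5 = self.branch5(x)
        b7 = self.branch7(x) + x  # output accumulated with the branch input
        return b3 + b5 + b7       # parallel branches fused before the next layer


block = ImprovedInceptionBlock(64)
out = block(torch.randn(1, 64, 32, 32))  # same shape in, same shape out
```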

The image to be described is input into the improved Inception-v3 model, where the plurality of convolutional layers extract the features of the image, the features extracted by a previous convolutional layer serving as the input of the next. In some embodiments, each convolutional layer is followed by a pooling layer that reduces the dimensionality of the extracted features. Each convolutional layer comprises a plurality of convolution kernels used to scan the image and obtain its visual features.

Step 120: the visual features of the image to be described are input into a semantic description model to obtain the semantic description of the image to be described.

In this step, the semantic description model is obtained by training a double-layer nested Long Short-Term Memory network (LSTM) with multiple sets of training samples, where each set of training samples includes the visual features of a sample image and the semantic description word vectors corresponding to that sample image. The double-layer nested LSTM comprises two LSTM layers, each of which comprises two nested LSTM units. After training of the semantic description model is completed, the weights connecting the neurons in the model are obtained; these weights act on the visual features of the input image to be described to yield the word vectors of the output semantic description, and the word vectors are converted into the semantic description according to the correspondence between word vectors and semantic descriptions, giving the semantic description of the image to be described. For the specific training process of the semantic description model, refer to the description of the next embodiment, which is not repeated here.
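As a hedged illustration of this inference path, the sketch below generates a description word by word: the visual feature initializes the decoder, each output word is fed back in as the next input, and decoding stops at an end token. The names cnn, decoder, embed, fc, and vocab are hypothetical stand-ins for the trained components described above, not APIs defined by the patent.

```python
# A sketch of greedy decoding with a trained extractor and decoder.
# All objects passed in are hypothetical stand-ins: `cnn` maps an image to a
# visual feature, `decoder` is the two-layer nested LSTM with an init_state
# helper, `embed` is the word-vector table, `fc` the output weighting layer.
import torch


@torch.no_grad()
def describe_image(image, cnn, decoder, embed, fc, vocab, max_len=20):
    feature = cnn(image)                 # visual features of the image
    state = decoder.init_state(feature)  # feature acts as the initial state
    token = vocab.start_id
    words = []
    for _ in range(max_len):
        x = embed(torch.tensor([token]))       # word vector of previous word
        out, state = decoder(x, state)         # one step of the nested LSTM
        token = fc(out).argmax(dim=-1).item()  # class with maximum probability
        if token == vocab.end_id:
            break
        words.append(vocab.id_to_word[token])
    return " ".join(words)
```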

According to the embodiments of the invention, the visual features of the image to be described are extracted through the improved Inception-v3 model, and the semantic description of the image to be described is obtained through the semantic description model, thereby realizing semantic description of the image. The improved Inception-v3 model further improves the existing Inception-v3 model, so that its feature extraction is optimized and the extracted visual features are more reliable; the semantic description model is trained on a large number of sample images and their corresponding semantic descriptions, so that it captures the correspondence between the visual features of sample images and semantic descriptions, making the semantic description produced for the image to be described more accurate and reliable.

Fig. 2 shows a flowchart of an image semantic description method according to a second embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:

step 210: multiple sets of training samples are obtained.

In this step, a set of training samples includes a sample image and the semantic description corresponding to the sample image; therefore, the training samples can be obtained by acquiring sample images and their corresponding semantic descriptions. The visual features of each sample image are extracted through the improved Inception-v3 model; for the specific description of the improved Inception-v3 model, refer to the description of step 110 in the first embodiment, which is not repeated here. The word vectors corresponding to the semantic descriptions are extracted according to the word vector table, each word vector corresponding to one segmented word of the semantic description. In an embodiment, the word vector table stores the correspondence between the one-hot code of each segmented word and its word vector. Thus, when extracting the word vectors corresponding to a semantic description according to the word vector table, the semantic description is first segmented into words, and each word is encoded with a one-hot code. When the vocabulary is large, the one-hot code of each word is a very long vector; therefore, in the embodiment of the invention, a word vector table obtained by training a word2vec model is used to convert the one-hot code of each word into a low-dimensional vector. It should be understood that when the word2vec model is trained, a large number of semantic descriptions are segmented into words, each word is one-hot encoded, and the correspondence between one-hot codes and word vectors is established through the semantic dictionary in word2vec, yielding the word vector table. Compared with one-hot codes, the word vectors in the table have reduced dimensionality and, at the same time, encode the associations between the words of a semantic description.
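A minimal sketch of this conversion is given below. It assumes the word-vector table has already been produced by a trained word2vec model; the five-word vocabulary, the random table, and the whitespace tokenizer are placeholders for illustration only.

```python
# One-hot encoding followed by a word-vector table lookup.
# The table stands in for a trained word2vec output; multiplying a one-hot
# code by the table selects that word's row, i.e. converts the long one-hot
# code into a low-dimensional word vector.
import numpy as np

vocab = {"a": 0, "dog": 1, "runs": 2, "on": 3, "grass": 4}
table = np.random.rand(len(vocab), 128)  # word-vector table (assumed trained)


def one_hot(index: int, size: int) -> np.ndarray:
    v = np.zeros(size)
    v[index] = 1.0
    return v


def to_word_vectors(description: str) -> np.ndarray:
    tokens = description.split()  # whitespace split stands in for segmentation
    codes = [one_hot(vocab[t], len(vocab)) for t in tokens]
    return np.stack([c @ table for c in codes])


vectors = to_word_vectors("a dog runs on grass")  # shape: (5, 128)
```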

Step 220: the double-layer nested LSTM is trained according to the multiple sets of training samples to obtain a semantic description model.

The double-layer nested LSTM is a recurrent neural network model in which the output at the previous moment can serve as the input at the next moment. In the embodiment of the invention, the word vectors of the semantic description corresponding to an image have an order, so the order among the word vectors of each image's semantic description is taken as the time sequence. This time sequence should be understood as time steps, i.e., an order of occurrence, rather than physical time.

During training, only one set of training samples is input in each iteration. The visual features of the training sample are input into the first LSTM layer as its initial input state; then the word vectors are input into the double-layer nested LSTM in the time order among the word vectors of the training sample, so that the dependency relationship between the visual features and the word vectors is learned and the first external state of each set of training samples is output. It is worth noting that the first external state is the output result of the first LSTM layer and represents the dependency between the visual features and the word vectors. The first external state is then input into the second LSTM layer as its initial input state, and the word vectors continue to be input in their time order; since the first external state contains the dependency between the visual features and the word vectors, the second LSTM layer can continue to learn this dependency according to the first external state and output the second external state.
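The sketch below illustrates one such forward pass, with two torch.nn.LSTM layers standing in for the two nested LSTM layers of the patent; the dimensions and the linear projection of the visual feature into the initial hidden state are illustrative assumptions.

```python
# One training forward pass: the visual feature initializes the first LSTM
# layer, the word vectors are fed in time order, and the first external state
# (the first layer's output sequence and final state) drives the second layer.
import torch
import torch.nn as nn

feat_dim, embed_dim, hidden = 2048, 128, 256
proj = nn.Linear(feat_dim, hidden)                    # feature -> initial state
lstm1 = nn.LSTM(embed_dim, hidden, batch_first=True)  # first LSTM layer
lstm2 = nn.LSTM(hidden, hidden, batch_first=True)     # second LSTM layer

visual = torch.randn(1, feat_dim)        # visual features of one sample image
words = torch.randn(1, 12, embed_dim)    # 12 word vectors in time order

h0 = proj(visual).unsqueeze(0)           # shape (num_layers, batch, hidden)
c0 = torch.zeros_like(h0)
first_external, state1 = lstm1(words, (h0, c0))     # first external state
second_external, _ = lstm2(first_external, state1)  # second external state
```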

The last layer of the double-layer nested LSTM is a linear layer, i.e., a fully connected layer, which weights the second external state to obtain the weighting result of each set of training samples; the weighting result is then classified by a normalized exponential function (softmax) classifier, which outputs the class with the maximum probability, giving the output result corresponding to each set of training samples. The output result represents the actually output semantic description, which is a combination of several word vectors; the order of these word vectors may or may not be consistent with the time sequence of the input, and the difference between the two is measured by the loss function value. The embodiment of the invention does not limit the type of the loss function; for example, a squared loss function or a cross-entropy loss function may be used. The weights of the double-layer nested LSTM are updated according to the loss function value until the loss function value reaches its minimum, and the double-layer nested LSTM model with the minimum loss function value is taken as the semantic description model.
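A hedged sketch of this output and update step follows. The vocabulary size, the Adam optimizer, and the learning rate are illustrative choices; nn.CrossEntropyLoss is used because it combines the softmax normalization with the cross-entropy loss mentioned above, and in a full implementation the LSTM parameters would be passed to the optimizer as well.

```python
# Fully connected weighting of the second external state, softmax
# cross-entropy against the ground-truth word ids, and a weight update.
import torch
import torch.nn as nn

vocab_size, hidden = 5000, 256
fc = nn.Linear(hidden, vocab_size)     # full-connection (weighting) layer
criterion = nn.CrossEntropyLoss()      # applies log-softmax internally
optimizer = torch.optim.Adam(fc.parameters(), lr=1e-3)

second_external = torch.randn(1, 12, hidden)     # from the second LSTM layer
targets = torch.randint(0, vocab_size, (1, 12))  # ground-truth word ids

logits = fc(second_external)                     # weighting result
loss = criterion(logits.view(-1, vocab_size), targets.view(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()  # weights updated according to the loss function value
```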

In some embodiments, the double-layer nested LSTM includes an Attention layer arranged before the first LSTM layer. Before the visual features in each set of training samples are input into the first LSTM layer, they are input into the Attention layer to determine the proportion of each visual feature, and the dependency between each visual feature and each word vector is learned according to these proportions. During learning, the dependencies between the visual features with high weights and the word vectors are learned with emphasis. For example, in a picture the background may have a low proportion while a person or animal has a high proportion, so the dependency between the person's or animal's visual features and the word vectors is emphasized during learning. It should be understood that after the Attention layer is added, it belongs to the double-layer nested LSTM; therefore, the proportions and the weights of the double-layer nested LSTM are trained and updated simultaneously.
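The following sketch shows one common way such proportions can be computed, using a learned scoring layer followed by a softmax over the regional features; the scoring layer is an assumption, since the patent does not specify how the Attention layer computes the proportions.

```python
# Attention weighting over regional visual features: each feature gets a
# scalar score, a softmax turns the scores into proportions, and a weighted
# sum yields the attended feature fed to the first LSTM layer.
import torch
import torch.nn as nn

num_regions, feat_dim = 64, 2048
features = torch.randn(1, num_regions, feat_dim)  # regional visual features

score = nn.Linear(feat_dim, 1)                    # hypothetical scoring layer
weights = torch.softmax(score(features), dim=1)   # proportion of each feature
attended = (weights * features).sum(dim=1)        # attention-weighted feature
# The scoring layer is part of the network, so these proportions are trained
# and updated together with the LSTM weights during backpropagation.
```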

In this embodiment, the first LSTM layer and the second LSTM layer each include two nested LSTM units. Taking the first LSTM layer as an example, its training process is further described.

The first LSTM layer comprises a first LSTM unit and a second LSTM unit. The first LSTM unit learns the dependency relationship between the visual features and the word vectors in each set of training samples to output a first state; the second LSTM unit learns the dependency between the first state of each set of training samples and the word vectors to output a second state of each set of training samples; and the first state and the second state are combined to obtain the first external state of each set of training samples. The output of the first LSTM unit is the input of the second LSTM unit. Taking the first LSTM unit as an example, the calculation of each parameter is further described.
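The sketch below illustrates this nesting with two torch.nn.LSTMCell units; combining the two states by concatenation is an assumption, since the patent only states that the first and second states are combined.

```python
# Two nested units inside the first LSTM layer: the first unit's output state
# feeds the second unit, and the two states are combined into the first
# external state for this time step.
import torch
import torch.nn as nn

embed_dim, hidden = 128, 256
unit1 = nn.LSTMCell(embed_dim, hidden)  # first LSTM unit
unit2 = nn.LSTMCell(hidden, hidden)     # second LSTM unit, fed by the first

x = torch.randn(1, embed_dim)           # one word vector at time step t
h1, c1 = torch.zeros(1, hidden), torch.zeros(1, hidden)
h2, c2 = torch.zeros(1, hidden), torch.zeros(1, hidden)

h1, c1 = unit1(x, (h1, c1))             # first state
h2, c2 = unit2(h1, (h2, c2))            # second state, learned from the first
first_external = torch.cat([h1, h2], dim=-1)  # combined (assumed by concat)
```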

Fig. 3 shows a schematic structural diagram of the first LSTM unit. As shown in Fig. 3, the first LSTM unit includes a forget gate, an input gate, and an output gate. The first LSTM unit learns the dependency relationship between the visual features and the word vectors in each set of training samples according to the following formulas to output the first state of each set of training samples:

f_t = σ(W_f x_t + U_f h_{t-1} + b_f)

i_t = σ(W_i x_t + U_i h_{t-1} + b_i)

o_t = σ(W_o x_t + U_o h_{t-1} + b_o)

h_t = o_t ⊙ tanh(c_t)

where f_t denotes the forget gate, i_t the input gate, o_t the output gate, c_t the cell state, and h_t the first state; x_t is the input at time step t and h_{t-1} is the state at the previous time step; W_i and b_i denote the weight matrix and bias term of the input gate, W_f and b_f the weight matrix and bias term of the forget gate, and W_o and b_o the weight matrix and bias term of the output gate. The gate activation function is the sigmoid function σ, whose range is (0, 1); the output activation function is the tanh function; and ⊙ denotes the element-wise product of vectors. h_t is then taken as the initial input of the second LSTM unit, the word vectors continue to be input in their time order, the dependency between the visual features and the word vectors is learned, and the second state is output. For the specific learning process, refer to that of the first LSTM unit, which is not repeated here.
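The formulas above can be transcribed directly, as in the NumPy sketch below. The cell-state update for c_t, which the listed formulas omit, is filled in with the standard LSTM form as an assumption, and all dimensions are illustrative.

```python
# One LSTM time step following the gate formulas above.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


d_in, d_h = 128, 256
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((d_h, d_in)) * 0.01 for g in "fioc"}
U = {g: rng.standard_normal((d_h, d_h)) * 0.01 for g in "fioc"}
b = {g: np.zeros(d_h) for g in "fioc"}


def lstm_step(x_t, h_prev, c_prev):
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])    # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])    # input gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])    # output gate
    c_hat = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_hat  # standard cell update (assumed, see note)
    h_t = o_t * np.tanh(c_t)          # h_t = o_t (x) tanh(c_t), element-wise
    return h_t, c_t


h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h))
```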

Step 230: the visual features of the image to be described are extracted through the improved Inception-v3 model.

Step 240: the visual features of the image to be described are input into the semantic description model to obtain the semantic description of the image to be described.

For the description of steps 230 to 240, refer to the description of steps 110 to 120 in the first embodiment, which is not repeated here.

According to the embodiment of the invention, the semantic description model is obtained by training the double-layer nested LSTM model; through training, the model comes to encode the correspondence between the visual features of the sample images and their semantic descriptions, so that the semantic description of the image to be described extracted through the semantic description model is more reliable.

Fig. 4 shows an image semantic description apparatus according to a third embodiment of the present invention. As shown in Fig. 4, the apparatus includes: an extraction module 310, configured to extract visual features of an image to be described through an improved Inception-v3 model, where the improved Inception-v3 model includes a plurality of nested convolutional layers, the nested convolutional layers are connected in parallel, and the last nested convolutional layer of the nested convolutional layers is connected in a residual structure; and an input module 320, configured to input the visual features of the image to be described into a semantic description model to obtain a semantic description of the image to be described, where the semantic description model is obtained by training a double-layer nested LSTM with multiple sets of training samples, each of which comprises: visual features of a sample image and semantic description word vectors corresponding to the sample image.

In some embodiments, the apparatus further comprises: an obtaining module 330, configured to obtain the multiple sets of training samples; and a training module 340, configured to train the double-layer nested LSTM according to the multiple sets of training samples to obtain the semantic description model.

In some embodiments, the obtaining module 330 is further configured to:

acquiring a sample image and semantic description corresponding to the sample image;

extracting visual features of the sample image through the improved Inception-v3 model;

extracting a word vector corresponding to the semantic description according to a word vector table, wherein the word vector table is obtained by training a word2vec model;

and taking the visual features and the word vectors corresponding to the visual features as a group of training samples to obtain a plurality of groups of training samples.

In some embodiments, the obtaining module 330 is further configured to:

segmenting the semantic description into words and encoding each segmented word with one-hot encoding;

and converting the code into a word vector through a word vector table, wherein the word vector table is obtained by training a word2vec model.

In some embodiments, the training module 340 is further configured to:

generating a time sequence according to the word vectors in each group of training samples;

inputting the visual features in each set of training samples into a first LSTM layer;

sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each group of training samples, and a first external state of each group of training samples is output;

inputting the first external state into a second LSTM layer, so that the second LSTM layer continuously learns the dependency relationship between the visual features and the word vectors in each training sample set according to the first external state to output a second external state of each training sample set;

weighting the second external state through a full connection layer to obtain a weighting result of each group of training samples;

classifying the weighting result through a normalized exponential function (softmax) classifier and outputting the class with the maximum probability, to obtain an output result corresponding to each set of training samples; and calculating a loss function value according to the output result;

updating the weight of the double-layer nested LSTM according to the loss function value until the loss function value is minimum;

and taking the double-layer nested LSTM model with the minimum loss function value as the semantic description model.

In some embodiments, the double-layer nested LSTM includes an Attention layer, and the training module 340 is further configured to:

inputting the visual features into the Attention layer so that the Attention layer determines the proportion of each visual feature among all the visual features;

and sequentially inputting the word vectors into the first LSTM layer according to the time sequence, so that the first LSTM layer learns the dependency relationship between the visual features and the word vectors in each set of training samples according to the proportions, to output a first external state of each set of training samples.

In some embodiments, the first LSTM layer includes a first LSTM unit and a second LSTM unit, and the training module 340 is further configured to:

learning, by the first LSTM unit, a dependency between a visual feature and a word vector in each set of training samples to output a first state;

learning, by the second LSTM unit, a dependency between the first state of each set of training samples and the word vector to output a second state of each set of training samples; and combining the first state and the second state to obtain a first external state of each group of training samples.

In some embodiments, the first LSTM unit includes a forget gate, an input gate, and an output gate; the training module 340 is further configured to:

learning a dependency relationship between the visual features and the word vectors in each set of training samples according to the following formulas to output a first state of each set of training samples:

f_t = σ(W_f x_t + U_f h_{t-1} + b_f)

i_t = σ(W_i x_t + U_i h_{t-1} + b_i)

o_t = σ(W_o x_t + U_o h_{t-1} + b_o)

h_t = o_t ⊙ tanh(c_t)

where f_t denotes the forget gate, i_t the input gate, o_t the output gate, c_t the cell state, and h_t the first state; x_t is the input at time step t and h_{t-1} is the state at the previous time step; W_i and b_i denote the weight matrix and bias term of the input gate, W_f and b_f the weight matrix and bias term of the forget gate, and W_o and b_o the weight matrix and bias term of the output gate. The gate activation function is the sigmoid function σ, whose range is (0, 1); the output activation function is the tanh function; and ⊙ denotes the element-wise product of vectors.

In the embodiment of the invention, the extraction module 310 extracts the visual features of the image to be described through the improved Inception-v3 model, and the semantic description of the image to be described is obtained through the semantic description model, thereby realizing semantic description of the image. The improved Inception-v3 model further improves the existing Inception-v3 model, so that its feature extraction is optimized and the extracted visual features are more reliable; the semantic description model is obtained by the training module 340 from a large number of sample images and their corresponding semantic descriptions, so that it captures the correspondence between the visual features of sample images and semantic descriptions, making the semantic description produced for the image to be described more accurate and reliable.

The embodiment of the invention provides a nonvolatile computer storage medium, wherein at least one executable instruction is stored in the computer storage medium, and the computer executable instruction can execute the operation corresponding to the image semantic description method in any method embodiment.

Fig. 5 is a schematic structural diagram of a computing device according to a fourth embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.

As shown in Fig. 5, the computing device may include: a processor 402, a communication interface 404, a memory 406, and a communication bus 408.

The processor 402, the communication interface 404, and the memory 406 communicate with one another via the communication bus 408. The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically execute the relevant steps in the embodiments of the image semantic description method described above.

In particular, program 410 may include program code comprising computer operating instructions.

The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.

The memory 406 is used for storing the program 410. The memory 406 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.

The program 410 may be specifically configured to enable the processor 402 to execute steps 110 to 120 in fig. 1, steps 210 to 240 in fig. 2, and implement the functions of the modules 310 to 340 in fig. 4.

The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.
