Word vector generation method and device, terminal equipment and computer readable storage medium

Document No.: 852477    Publication date: 2021-03-16

Description: This invention, "Word vector generation method and device, terminal equipment and computer-readable storage medium", was designed and created by Xiong Weixing on 2020-12-07. The application is applicable to the field of terminal technologies, and particularly relates to a word vector generation method and apparatus, a terminal device and a computer-readable storage medium. When a target word vector corresponding to a target word needs to be generated, the method may first determine the initial word vector, the image feature vector, the etymon (radical) feature vector and the pinyin feature vector corresponding to the target word. Then, the target word vector corresponding to the target word may be generated according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and a preset weight matrix corresponding to the target word. That is, the word vector is generated by combining text information, pictographic character image information, etymon information and pinyin information, so that the generated word vector carries rich feature information, fully embodies the attribute features of the word and conforms to the characteristics of Chinese characters. This provides more reliable word vectors for subsequent natural language processing, improves the accuracy of natural language processing, and greatly expands its range of application.

1. A method of generating a word vector, comprising:

acquiring a target word and determining an initial word vector corresponding to the target word;

determining an image feature vector corresponding to the target word, determining an etymon feature vector corresponding to the target word, and determining a pinyin feature vector corresponding to the target word;

and generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and a preset weight matrix corresponding to the target word.

2. The word vector generation method of claim 1, further comprising, before the determining of the image feature vector corresponding to the target word:

constructing a word table, wherein the word table comprises a plurality of preset words;

for each preset word, acquiring a pictographic image corresponding to the preset word;

and constructing an image feature vector corresponding to the preset word according to the pictographic image corresponding to the preset word.

3. The word vector generation method of claim 2, wherein the pictographic image corresponding to the preset word comprises a plurality of pictographic images, and the constructing the image feature vector corresponding to the preset word according to the pictographic image corresponding to the preset word comprises:

respectively inputting the plurality of pictographic images corresponding to the preset word into a preset image recognition model, and acquiring each initial image feature vector extracted by a target network layer of the image recognition model, wherein the target network layer is the last network layer of the image recognition model;

and performing mean value calculation on the initial image feature vectors, and determining the mean value image feature vector obtained by calculation as the image feature vector corresponding to the preset word.

4. The word vector generation method of claim 2, further comprising, after the constructing of the word table:

obtaining a basic etymon and a preset etymon corresponding to each preset word, and constructing an etymon feature vector corresponding to each preset word according to the basic etymon and the preset etymon corresponding to each preset word.

5. The word vector generation method of claim 1, wherein before the generating of the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word, the method comprises:

acquiring a training text, and splitting the training text to obtain each training word;

determining a central training word, and acquiring an initial word vector corresponding to the central training word and a related training word corresponding to the central training word, wherein the central training word is any one of the training words;

acquiring an initial word vector, an image feature vector, an etymon feature vector and a pinyin feature vector corresponding to the related training words, and combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the related training words to obtain a first training vector corresponding to the related training words;

inputting the first training vector into a first word vector model for processing to obtain a first training result output by the first word vector model;

determining a first training error of the first word vector model according to the first training result and an initial word vector corresponding to the central training word;

when the first training error does not meet a first preset condition, adjusting a first model parameter of the first word vector model, and returning to the step of acquiring a training text and splitting the training text to obtain each training word, as well as the subsequent steps, wherein the first model parameter comprises a preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the first word vector model;

and when the first training error meets the first preset condition, obtaining the preset weight matrix.

6. The word vector generation method of claim 1, wherein before the generating of the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word, the method comprises:

acquiring a training text, and splitting the training text to obtain each training word;

determining a central training word, and acquiring a related training word corresponding to the central training word and an initial word vector corresponding to the related training word, wherein the central training word is any one of the training words;

acquiring an initial word vector, an image feature vector, an etymon feature vector and a pinyin feature vector corresponding to the central training word, and combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the central training word to obtain a second training vector corresponding to the central training word;

inputting the second training vector into a second word vector model for processing to obtain a second training result output by the second word vector model;

determining a second training error of the second word vector model according to the second training result and the initial word vector corresponding to the related training word;

when the second training error does not meet a second preset condition, adjusting second model parameters of the second word vector model, and returning to the step of acquiring a training text and splitting the training text to obtain each training word, as well as the subsequent steps, wherein the second model parameters comprise a preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the second word vector model;

and when the second training error meets the second preset condition, obtaining the preset weight matrix.

7. The word vector generation method of any one of claims 1 to 6, wherein the generating of the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word comprises:

combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the target word to obtain a combined vector corresponding to the target word;

and multiplying the combined vector by the preset weight matrix to obtain a target word vector corresponding to the target word.

8. A word vector generation apparatus, comprising:

the target word acquisition module is used for acquiring a target word and determining an initial word vector corresponding to the target word;

the feature vector determining module is used for determining an image feature vector corresponding to the target word, determining an etymon feature vector corresponding to the target word and determining a pinyin feature vector corresponding to the target word;

and the word vector generating module is used for generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word.

9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the word vector generation method according to any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the word vector generation method according to any one of claims 1 to 7.

Technical Field

The present application belongs to the field of terminal technologies, and in particular, to a method and an apparatus for generating a word vector, a terminal device, and a computer-readable storage medium.

Background

In natural language processing, in order for a computer to understand the meaning of a word, the word needs to be converted into a word vector. A word vector is a vectorized representation of a word. Currently, the word vector corresponding to each word is constructed mainly based on English-oriented processing techniques. Due to the natural differences between Chinese and English, word vectors generated by English-based processing techniques cannot well reflect the attribute features of Chinese characters, so the accuracy of natural language processing is low, which limits the wide application of natural language processing technology.

Disclosure of Invention

The embodiment of the application provides a word vector generation method, a word vector generation apparatus, a terminal device and a computer-readable storage medium, which can enrich the feature information of a word vector so that the word vector fully embodies the attribute features of a word, thereby improving the accuracy of natural language processing.

In a first aspect, an embodiment of the present application provides a word vector generation method, including:

acquiring a target word and determining an initial word vector corresponding to the target word;

determining an image feature vector corresponding to the target word, determining an etymon feature vector corresponding to the target word, and determining a pinyin feature vector corresponding to the target word;

and generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and a preset weight matrix corresponding to the target word.

Illustratively, before the determining of the image feature vector corresponding to the target word, the method further includes:

constructing a word table, wherein the word table comprises a plurality of preset words;

for each preset word, acquiring a pictographic image corresponding to the preset word;

and constructing an image feature vector corresponding to the preset word according to the pictographic image corresponding to the preset word.

Specifically, the pictographic image corresponding to the preset word includes a plurality of pictographic images, and the constructing the image feature vector corresponding to the preset word according to the pictographic image corresponding to the preset word includes:

respectively inputting the plurality of pictographic images corresponding to the preset word into a preset image recognition model, and acquiring each initial image feature vector extracted by a target network layer of the image recognition model, wherein the target network layer is the last network layer of the image recognition model;

and performing mean value calculation on the initial image feature vectors, and determining the mean value image feature vector obtained by calculation as the image feature vector corresponding to the preset word.

Optionally, after the building the word table, the method further comprises:

obtaining a basic etymon and a preset etymon corresponding to each preset word, and constructing an etymon feature vector corresponding to each preset word according to the basic etymon and the preset etymon corresponding to each preset word.
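The application does not spell out how the etymon feature vector is encoded from the basic and preset etymons. One plausible reading, shown purely as an assumption (the inventory, function name and encoding below are not the application's specification), is a multi-hot vector over a radical inventory:

```python
# Hypothetical, tiny radical inventory; a real system would use the full
# set of Chinese radicals/components.
RADICALS = ["犭", "氵", "口", "木", "亻"]

def etymon_feature_vector(radicals_of_word):
    """Multi-hot encoding: 1 at each inventory position whose radical
    appears in the word, 0 elsewhere."""
    return [1 if r in radicals_of_word else 0 for r in RADICALS]

# A word containing the radicals 犭 and 口:
print(etymon_feature_vector(["犭", "口"]))  # [1, 0, 1, 0, 0]
```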

In a possible implementation manner of the first aspect, before generating a target word vector corresponding to the target word according to an initial word vector, an image feature vector, an etymon feature vector, a pinyin feature vector, and a preset weight matrix corresponding to the target word, the method includes:

acquiring a training text, and splitting the training text to obtain each training word;

determining a central training word, and acquiring an initial word vector corresponding to the central training word and a related training word corresponding to the central training word, wherein the central training word is any one of the training words;

acquiring an initial word vector, an image feature vector, an etymon feature vector and a pinyin feature vector corresponding to the related training words, and combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the related training words to obtain a first training vector corresponding to the related training words;

inputting the first training vector into a first word vector model for processing to obtain a first training result output by the first word vector model;

determining a first training error of the first word vector model according to the first training result and an initial word vector corresponding to the central training word;

when the first training error does not meet a first preset condition, adjusting a first model parameter of the first word vector model, and returning to the step of acquiring a training text and splitting the training text to obtain each training word, as well as the subsequent steps, wherein the first model parameter comprises a preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the first word vector model;

and when the first training error meets the first preset condition, obtaining the preset weight matrix.
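The training procedure above mirrors a CBOW-style word2vec setup: the combined vectors of the related (context) words go in, and the output is compared with the central training word's initial one-hot vector. The following numpy sketch is a heavily simplified stand-in, under assumed dimensions and a softmax cross-entropy loss (the application does not specify its loss function); only the role of the weight matrix between the input and hidden layers follows the text:

```python
import numpy as np

rng = np.random.default_rng(42)
D_IN, HIDDEN, V = 20, 8, 6                   # assumed combined-input, hidden and vocab sizes
W_in = rng.normal(0, 0.1, (D_IN, HIDDEN))    # preset weight matrix (input -> hidden)
W_out = rng.normal(0, 0.1, (HIDDEN, V))      # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(first_training_vector, center_one_hot, lr=0.1):
    """One forward/backward pass: predict the central training word from
    the combined context vector and adjust both weight matrices."""
    global W_in, W_out
    h = first_training_vector @ W_in         # hidden layer activation
    p = softmax(h @ W_out)                   # predicted distribution over the word table
    loss = -np.log(p[center_one_hot.argmax()])
    dz = p - center_one_hot                  # cross-entropy gradient at the output
    dW_out = np.outer(h, dz)
    dW_in = np.outer(first_training_vector, dz @ W_out.T)
    W_out -= lr * dW_out
    W_in -= lr * dW_in
    return loss

x = rng.normal(size=D_IN)                    # stands in for a first training vector
y = np.eye(V)[2]                             # one-hot vector of the central training word
losses = [train_step(x, y) for _ in range(50)]
```

After the training error meets the preset condition, the trained `W_in` plays the role of the preset weight matrix used to generate target word vectors.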

In another possible implementation manner of the first aspect, before the generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector, and the preset weight matrix corresponding to the target word, the method includes:

acquiring a training text, and splitting the training text to obtain each training word;

determining a central training word, and acquiring a related training word corresponding to the central training word and an initial word vector corresponding to the related training word, wherein the central training word is any one of the training words;

acquiring an initial word vector, an image feature vector, an etymon feature vector and a pinyin feature vector corresponding to the central training word, and combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the central training word to obtain a second training vector corresponding to the central training word;

inputting the second training vector into a second word vector model for processing to obtain a second training result output by the second word vector model;

determining a second training error of the second word vector model according to the second training result and the initial word vector corresponding to the related training word;

when the second training error does not meet a second preset condition, adjusting second model parameters of the second word vector model, and returning to the step of acquiring a training text and splitting the training text to obtain each training word, as well as the subsequent steps, wherein the second model parameters comprise a preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the second word vector model;

and when the second training error meets the second preset condition, obtaining the preset weight matrix.

Optionally, the generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word includes:

combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the target word to obtain a combined vector corresponding to the target word;

and multiplying the combined vector by the preset weight matrix to obtain a target word vector corresponding to the target word.
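The combine-and-multiply step above can be sketched directly; all dimensions except the 1 × 1000 image feature vector mentioned later in the description are hypothetical:

```python
import numpy as np

# Hypothetical dimensions: V-dim one-hot initial vector, 1000-dim image
# features (per the description), illustrative etymon/pinyin sizes.
V, D_IMG, D_ETY, D_PIN, HIDDEN = 6, 1000, 50, 30, 128

def target_word_vector(initial, image, etymon, pinyin, preset_weight_matrix):
    """Concatenate the four vectors into the combined vector, then
    multiply by the preset weight matrix (input layer -> hidden layer)
    to obtain the target word vector."""
    combined = np.concatenate([initial, image, etymon, pinyin])
    return combined @ preset_weight_matrix

rng = np.random.default_rng(0)
W = rng.normal(size=(V + D_IMG + D_ETY + D_PIN, HIDDEN))
vec = target_word_vector(np.eye(V)[2], rng.normal(size=D_IMG),
                         rng.normal(size=D_ETY), rng.normal(size=D_PIN), W)
print(vec.shape)  # (128,)
```

The target word vector's dimensionality is thus set by the hidden-layer width of the trained model.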

In a second aspect, an embodiment of the present application provides a word vector generating apparatus, including:

the target word acquisition module is used for acquiring a target word and determining an initial word vector corresponding to the target word;

the feature vector determining module is used for determining an image feature vector corresponding to the target word, determining an etymon feature vector corresponding to the target word and determining a pinyin feature vector corresponding to the target word;

and the word vector generating module is used for generating a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word.

In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the word vector generation method described in any one of the above first aspects when executing the computer program.

In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the word vector generation method according to any one of the above first aspects.

In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the word vector generation method according to any one of the above first aspects.

It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.

Compared with the prior art, the embodiment of the application has the advantages that:

in the embodiment of the application, when a target word vector corresponding to a target word needs to be generated, the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the target word may be determined first. Then, the target word vector corresponding to the target word may be generated according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word. The embodiment of the application generates the word vector by combining text information, pictographic character image information, etymon information and pinyin information, so that the generated word vector carries rich feature information, fully embodies the attribute features of the word, and conforms to the characteristics of Chinese characters. This provides more reliable word vectors for subsequent natural language processing, improves the accuracy of natural language processing, and greatly expands its range of application.

Drawings

Fig. 1 is a schematic flow chart of a word vector generation method provided by an embodiment of the present application;

Fig. 2 is a block diagram of a first word vector model according to an embodiment of the present application;

Fig. 3 is a schematic flow chart of training a first word vector model according to an embodiment of the present application;

Fig. 4 is a block diagram of a second word vector model according to an embodiment of the present application;

Fig. 5 is a schematic flow chart of training a second word vector model according to another embodiment of the present application;

fig. 6 is a schematic structural diagram of a word vector generating apparatus according to an embodiment of the present application;

fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.

Detailed Description

It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".

Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.

Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.

The word vector generation method provided by the embodiment of the application can be applied to a terminal device, and the terminal device can be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a cloud server, and the like.

Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a word vector generation method according to an embodiment of the present application. As shown in fig. 1, the word vector generation method may include:

s101, obtaining target words and determining initial word vectors corresponding to the target words.

The target word may be any word in an input text, for example, any word in a text to be classified in a text classification task, or any word in a text to be matched in a semantic matching task.

In this embodiment of the present application, the initial word vector corresponding to the target word may be the one-hot encoding corresponding to the target word. The dimension of the one-hot encoding can be determined according to the number of words V in the word table. For example, when V is 6 and the target word is located at position 3 of the word table, the initial word vector corresponding to the target word may be [0, 0, 1, 0, 0, 0].
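The one-hot construction can be sketched as follows; the function name and 1-based position argument are illustrative, not the application's:

```python
def initial_word_vector(position, vocab_size):
    """One-hot initial word vector for the word at the given (1-based)
    position in a word table of vocab_size words."""
    vec = [0] * vocab_size
    vec[position - 1] = 1
    return vec

# Word table of V = 6 words, target word at position 3:
print(initial_word_vector(3, 6))  # [0, 0, 1, 0, 0, 0]
```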

S102, determining an image feature vector corresponding to the target word, determining an etymon feature vector corresponding to the target word, and determining a pinyin feature vector corresponding to the target word.

It should be understood that English is composed of letters while Chinese is composed of Chinese characters; that is, English is a phonographic language and Chinese is an ideographic one, and the ideographic function of Chinese benefits from the pictographic nature of Chinese characters. Therefore, the embodiment of the application can combine the pictographic images of the target word when generating the word vector, so as to ensure that the generated word vector conforms to the characteristics of Chinese characters.

In natural language processing, preprocessing operations such as lemmatization and stemming are required for English because English words have abundant morphological changes, including singular/plural forms, active/passive voice, tense variations, affixes, and the like. Chinese has no such morphological changes, but it does have a concept similar to the word stem: the Chinese radical. For example, the Chinese characters for monkey (猴), dog (狗), pig (猪), cat (猫) and wolf (狼) all share the same radical (犭) and are obviously animal nouns. Therefore, the embodiment of the application can also combine component-based radical information and pinyin information to generate the word vector, so as to ensure that the generated word vector conforms to the characteristics of Chinese.

In the embodiment of the application, the word table can be constructed in advance, and the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to each preset word in the word table can be constructed. The target word may be any preset word in the word table. Therefore, after the target word is obtained, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the target word can be obtained directly from the pre-constructed image feature vectors, etymon feature vectors and pinyin feature vectors.

Specifically, a data source may be acquired from the Wikipedia Chinese open dataset; punctuation and stop words are removed from the acquired data source, and the text is split into individual words. Then, the V preset words whose word frequency is greater than or equal to a preset value are obtained, and the word table is constructed from these V preset words. For example, the word table may be constructed in order of word frequency from high to low, that is, in the word table, a preset word with a higher word frequency is ranked earlier and a preset word with a lower word frequency is ranked later. Assuming that the word frequency of word A is 5 and the word frequency of word B is 6, word B may be the 7th word and word A the 8th word in the word table. Alternatively, the word table may be constructed in order of word frequency from low to high, that is, a preset word with a lower word frequency is ranked earlier and a preset word with a higher word frequency is ranked later.

Here, the word frequency refers to the total number of times the word occurs in all data sources. The preset value may be set according to specific situations; for example, it may be set to 3, that is, a preset word in the word table may be a word whose total number of occurrences in the data source is greater than or equal to 3.
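The word-table construction described above (frequency counting, thresholding at the preset value of 3, high-to-low ordering) might look like this; the toy corpus and function name are illustrative:

```python
from collections import Counter

def build_word_table(chars, min_freq=3):
    """Keep characters whose total frequency across the data source is
    >= min_freq, ordered from highest to lowest frequency."""
    counts = Counter(chars)
    kept = [w for w, c in counts.items() if c >= min_freq]
    return sorted(kept, key=lambda w: -counts[w])

# Toy corpus: "B" occurs 6 times, "A" 5 times, "C" only twice.
corpus = list("ABABABABABBCC")
print(build_word_table(corpus))  # ['B', 'A']  (C falls below the threshold)
```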

The following first describes a process of constructing an image feature vector corresponding to any preset word.

After the word table is constructed, for each preset word in the word table, the pictographic image corresponding to the preset word can be obtained. Then, an image feature vector corresponding to the preset word can be constructed according to the pictographic image corresponding to the preset word. The fonts of the pictographic images may include oracle bone script, bronze script, seal script, clerical script, regular script and the like. A pictographic image is an image of the character written in the corresponding font.

Specifically, the constructing an image feature vector corresponding to the preset word according to the pictographic image corresponding to the preset word may include:

Step a: respectively inputting the plurality of pictographic images corresponding to the preset word into a preset image recognition model, and acquiring each initial image feature vector extracted by a target network layer of the image recognition model, wherein the target network layer is the last network layer of the image recognition model.

in the embodiment of the application, the pictographic image corresponding to the preset word may be obtained through image search, specifically, the preset word and each font may be respectively input into an image search frame of a preset search engine to search out the pictographic image corresponding to the preset word, for example, "jawbone" may be input into the image search frame of the preset search engine to search out an oracle image corresponding to "fish", where the oracle image refers to an image in which the preset word is written in oracle, and subsequent golden text image, small seal character image, clerical script image, and regular script image are similar. Then, the searched pictographic image can be downloaded to obtain at least one of an oracle bone image, a golden character image, a small seal character image, an clerical script image, a regular script image and the like corresponding to the preset character. Here, automated downloading of images may be performed by a crawler. Because the pixels of the downloaded pictographic images are of different sizes, the pictographic images can be cropped to a uniform pixel, for example, to 224 x 224, for the purpose of facilitating the extraction of image features. Finally, each clipped pictographic image can be respectively input into a preset image recognition model, and the characteristic vector extracted by the target network layer of the image recognition model is determined as the initial image characteristic vector corresponding to the pictographic image.

It should be noted that, since not all preset words have pictographic images in all five scripts, the number of initial image feature vectors corresponding to each preset word may be 1, 2, 3, 4, or 5.

The preset image recognition model may be an image recognition model based on the VGG16 network structure. The target network layer of the image recognition model is the last layer of the VGG16 network structure, which may be a fully connected layer with 1000 hidden nodes, so that the initial image feature vector corresponding to each pictographic image may be a 1 x 1000-dimensional vector.

Step b: performing mean value calculation on the initial image feature vectors, and determining the calculated mean image feature vector as the image feature vector corresponding to the preset word.

Specifically, the values in the same dimension of each initial image feature vector corresponding to the preset word may be added, each sum may be divided by the number of initial image feature vectors corresponding to the preset word to obtain an average value for that dimension, and the mean image feature vector formed by the average values of all dimensions may be determined as the image feature vector corresponding to the preset word.

For example, when a preset word has only a seal character image, a clerical script image, and a regular script image, the number of initial image feature vectors corresponding to the preset word is 3. Assuming that the initial image feature vector corresponding to the seal character image is [2, 3, 5, 6, ..., 6], the initial image feature vector corresponding to the clerical script image is [3, 3, 2, 4, ..., 4], and the initial image feature vector corresponding to the regular script image is [1, 3, 2, 2, ..., 5], the image feature vector corresponding to the preset word may be [2, 3, 3, 4, ..., 5].

It is understood that when there is only one initial image feature vector corresponding to the preset word, for example, when the preset word has only a regular script image, that initial image feature vector may be directly determined as the image feature vector corresponding to the preset word.
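The averaging in step b can be sketched with NumPy as follows. This is a minimal illustration, not the patented implementation: toy 4-dimensional vectors stand in for the real 1 x 1000 features, and the function name is hypothetical.

```python
import numpy as np

def mean_image_feature(initial_vectors):
    """Per-dimension mean over however many script images (1 to 5) exist."""
    return np.vstack(initial_vectors).mean(axis=0)

# Toy 4-dimensional vectors standing in for the real 1 x 1000 features,
# matching the seal / clerical / regular example above
seal = np.array([2.0, 3.0, 5.0, 6.0])
clerical = np.array([3.0, 3.0, 2.0, 4.0])
regular = np.array([1.0, 3.0, 2.0, 2.0])

image_vec = mean_image_feature([seal, clerical, regular])  # [2., 3., 3., 4.]
```

With a single input vector, the mean is that vector itself, which matches the single-image case described above.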

It should be noted that the image recognition model described above is only schematically explained based on the network structure of VGG16, and should not be construed as a limitation to the embodiment of the present application, and the embodiment of the present application may also use a network structure with higher precision to construct the image recognition model.

The following describes a process of constructing a radical feature vector corresponding to any preset word.

It should be understood that the five-stroke (Wubi) etymons of Chinese characters consist of 271 basic etymons, that is, any preset character can be composed of these 271 basic etymons. A basic etymon may be a Chinese character, a radical of a Chinese character, part of a radical, or even a stroke. Therefore, a preset character can be decomposed to obtain the preset etymons corresponding to it, where any preset etymon is one of the basic etymons. Then, an etymon feature vector corresponding to the preset word may be constructed according to the basic etymons and the preset etymons corresponding to the preset word.

The etymon feature vector may be a 1 x 271-dimensional vector, each dimension of which represents one basic etymon. In this embodiment of the application, the number of occurrences of each preset etymon corresponding to the preset word may be counted first, and the value of the dimension in which that preset etymon is located is set to this count; the dimensions of basic etymons that do not occur in the preset word may be set to 0. For example, when the preset etymons corresponding to a preset word are the first, third, and sixth basic etymons, occurring 2, 1, and 4 times respectively, the etymon feature vector corresponding to the preset word may be [2, 0, 1, 0, 0, 4, 0, ..., 0].
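The counting scheme above can be sketched in a few lines. The 0-based index list standing in for a decomposed character is invented for the example; the function name is an assumption.

```python
from collections import Counter

N_BASIC_ETYMONS = 271  # dimension count per the scheme described above

def etymon_feature_vector(etymon_indices):
    """etymon_indices: 0-based basic-etymon index for each decomposed occurrence."""
    counts = Counter(etymon_indices)
    return [counts.get(i, 0) for i in range(N_BASIC_ETYMONS)]

# First basic etymon twice, third once, sixth four times (0-based: 0, 2, 5),
# matching the [2, 0, 1, 0, 0, 4, 0, ..., 0] example above
vec = etymon_feature_vector([0, 0, 2, 5, 5, 5, 5])
```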

The following describes a process of constructing a pinyin feature vector corresponding to any preset word.

It should be understood that the pinyin of any preset word may be represented by one or more of the 26 English letters together with one of 4 tones. Thus, the pinyin feature vector may be a 1 x 30-dimensional vector, in which the first 26 dimensions represent the 26 English letters (for example, the 1st may represent a, the 2nd may represent b, the 3rd may represent c, ..., and the 26th may represent z), and the last 4 dimensions represent the tones (for example, the 27th may represent the first tone, the 28th the second tone, the 29th the third tone, and the 30th the fourth tone). Therefore, the pinyin feature vector corresponding to a preset word can be constructed by acquiring the pinyin of the preset word. For example, the pinyin of "fish" is yú (second tone), so the pinyin feature vector corresponding to "fish" may be expressed as: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,1,0,0].
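Under the layout just described (26 letter slots followed by 4 tone slots), the worked "yú" example can be reproduced with a short sketch; the function name and signature are assumptions for illustration.

```python
def pinyin_feature_vector(letters, tone):
    """letters: the pinyin spelling in lowercase a-z; tone: 1-4.
    Slots 0-25 count letters; slots 26-29 mark the tone (one-hot)."""
    vec = [0] * 30
    for ch in letters:
        vec[ord(ch) - ord('a')] += 1  # e.g. 'y' -> slot 24, 'u' -> slot 20
    vec[26 + tone - 1] = 1            # second tone -> slot 27
    return vec

# "yu" with the second tone, matching the worked example for "fish"
vec = pinyin_feature_vector("yu", tone=2)
```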

It should be noted that constructing the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to each preset word in the word table in advance, and then matching the target word against the preset words to obtain the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to the target word, is only a schematic explanation and should not be understood as limiting the embodiments of the present application.

S103, generating a target word vector corresponding to the target word according to the initial word vector, the image characteristic vector, the etymon characteristic vector, the pinyin characteristic vector and the preset weight matrix corresponding to the target word.

In the embodiment of the application, after the initial word vector, the image characteristic vector, the etymon characteristic vector and the pinyin characteristic vector corresponding to the target word are obtained, the target word vector corresponding to the target word can be generated according to the initial word vector, the image characteristic vector, the etymon characteristic vector, the pinyin characteristic vector and the preset weight matrix corresponding to the target word.

The process of acquiring the preset weight matrix is described below.

In one example, the predetermined weight matrix may be derived by training a first word vector model with training text. The first word vector model may be a continuous bag-of-words model (CBOW). Training the first word vector model refers to a process of adjusting and optimizing first model parameters of the first word vector model. The first model parameters of the first word vector model may include a weight matrix between the input layer and the hidden layer, and a weight matrix between the hidden layer and the output layer. The preset weight matrix may be a weight matrix between the input layer and the hidden layer.

Referring to fig. 2, fig. 2 is a schematic diagram illustrating a structure of a first word vector model. As shown in fig. 2, the first word vector model may include an input layer 201, a hidden layer 202, and an output layer 203. The dimension of the weight matrix between the input layer 201 and the hidden layer 202 may be (V +1000+271+30) × N, where N is the number of neurons in the hidden layer 202. N may be determined from the number V of words in the word table. Specifically, when V is large, N may be relatively large, and when V is small, N may be relatively small. The dimension of the weight matrix between the hidden layer 202 and the output layer 203 may be N x V.

The input layer 201 is used to input the first training vectors corresponding to the context words. The hidden layer 202 is configured to process each first training vector to obtain an intermediate training vector and transmit the intermediate training vector to the output layer 203. Specifically, the hidden layer 202 may multiply each first training vector by the preset weight matrix (i.e., the weight matrix between the input layer 201 and the hidden layer 202) and add the multiplied first training vectors to obtain the intermediate training vector, which is transmitted to the output layer 203. The output layer 203 multiplies the intermediate training vector by the weight matrix between the hidden layer 202 and the output layer 203 to obtain the finally output first training result, which is a 1 x V-dimensional vector.

Referring to fig. 3, fig. 3 is a schematic flow chart illustrating training of a first word vector model to obtain a predetermined weight matrix. As shown in fig. 3, before the generating the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector, and the preset weight matrix corresponding to the target word, the method may include:

s301, obtaining a training text, and splitting the training text to obtain each training word.

S302, determining a central training word, and acquiring an initial word vector corresponding to the central training word and a related training word corresponding to the central training word, wherein the central training word is any one of the training words.

For S301 and S302, a training text may be a sentence. The related training words corresponding to the central training word are the words in the context of the central training word in the training text. The number of related training words corresponding to the central training word can be set by the user, specifically by setting the number of words taken on each side of the central training word. For example, when the number of words on each side is set to 2, the two words to the left of the central training word and the two words to its right, four words in total, are determined as the related training words corresponding to the central training word; when the number of words on each side is set to 1, the one word to the left and the one word to the right, two words in total, are determined as the related training words.

For example, each training word in the training text may be determined as a central training word in turn to perform the training of the CBOW; alternatively, for each training text, one or more training words in the training text may be determined as central training words to perform the training of the CBOW.

The initial word vector corresponding to the central training word may be a one-hot code corresponding to the central training word. The dimension of one-hot encoding can be determined according to the word number V of the word table, i.e. the initial word vector corresponding to the central training word can be a vector with dimension of 1 x V.
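A one-hot initial word vector as described can be sketched in a couple of lines; the toy vocabulary size V = 5 and the helper name are hypothetical.

```python
def one_hot(index, vocab_size):
    """1 x V one-hot encoding: a single 1 at the word's index in the word table."""
    vec = [0] * vocab_size
    vec[index] = 1
    return vec

# Toy word table of V = 5 words; the central training word sits at index 2
v = one_hot(2, 5)  # [0, 0, 1, 0, 0]
```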

S303, obtaining the initial word vector, the image characteristic vector, the etymon characteristic vector and the pinyin characteristic vector corresponding to the related training words, and combining the initial word vector, the image characteristic vector, the etymon characteristic vector and the pinyin characteristic vector corresponding to the related training words to obtain the first training vector corresponding to the related training words.

S304, inputting the first training vector into a first word vector model for processing to obtain a first training result output by the first word vector model.

The initial word vector corresponding to a related training word may also be the one-hot encoding corresponding to that related training word. The dimension of the one-hot encoding can be determined according to the word number V of the word table, i.e., the initial word vector corresponding to each related training word can be a 1 x V-dimensional vector. Since each related training word is generally a preset word in the word table, in the embodiment of the application, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to each related training word can be directly obtained.

After the initial word vector, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to any related training word are obtained, they can be spliced to obtain the first training vector corresponding to that related training word, where the first training vector may be a 1 x (V+1000+271+30)-dimensional vector.

It is understood that the CBOW may predict the central training word corresponding to a plurality of related training words from those related training words. In this embodiment, the first training vectors corresponding to the plurality of related training words corresponding to the central training word may be input to the input layer 201 of the first word vector model (i.e., the CBOW) respectively. The input layer 201 of the first word vector model may transmit the first training vectors corresponding to the respective related training words to the hidden layer 202. The hidden layer 202 may multiply each first training vector by the preset weight matrix (i.e., the weight matrix between the input layer 201 and the hidden layer 202), add the multiplied first training vectors to obtain the intermediate training vector, and transmit the intermediate training vector to the output layer 203. The output layer 203 multiplies the intermediate training vector by the weight matrix between the hidden layer 202 and the output layer 203 to obtain the finally output first training result, which is a 1 x V-dimensional vector.
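The CBOW forward pass just described (multiply each context vector by the preset weight matrix, sum into the intermediate training vector, then project through the hidden-to-output matrix) can be sketched with NumPy at toy sizes. All dimensions, names, and random inputs here are illustrative assumptions, not the patent's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)
V, EXTRA, N = 6, 9, 4            # toy sizes; the real EXTRA is 1000 + 271 + 30
D = V + EXTRA                    # dimension of each first training vector

W_in = rng.standard_normal((D, N))    # preset weight matrix (input layer -> hidden layer)
W_out = rng.standard_normal((N, V))   # hidden layer -> output layer weight matrix

def cbow_forward(context_vectors):
    """Multiply each context vector by W_in, sum into the intermediate
    training vector, then project to a 1 x V first training result."""
    hidden = sum(vec @ W_in for vec in context_vectors)
    return hidden @ W_out

contexts = [rng.standard_normal(D) for _ in range(4)]  # 4 related training words
scores = cbow_forward(contexts)                        # shape (V,)
```

Because the forward pass is linear, the result for the summed contexts equals the sum of the per-context results, which is what makes the "add the multiplied vectors" step above well-defined.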

S305, determining a first training error of the first word vector model according to the first training result and the initial word vector corresponding to the central training word.

S306, judging whether the first training error meets a first preset condition.

S307, when the first training error does not meet a first preset condition, adjusting a first model parameter of the first word vector model, returning to execute the step of obtaining the training text, and performing splitting processing on the training text to obtain each training word and subsequent steps, wherein the first model parameter comprises a preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the first word vector model.

S308, when the first training error meets the first preset condition, obtaining the preset weight matrix.

For S305 to S308, after obtaining the first training result output by the first word vector model, the first training result may be compared with the initial word vector corresponding to the central training word, so as to determine the first training error of the first word vector model. In particular, cross entropy may be used as a cost function. That is, the cross entropy between the first training result and the initial word vector corresponding to the center training word can be used as the first training error of the first word vector model.

It should be understood that when the first training error does not satisfy the first preset condition, the first model parameters of the first word vector model, i.e., the weight matrix between the input layer 201 and the hidden layer 202 and the weight matrix between the hidden layer 202 and the output layer 203, may be updated by a gradient descent method. The first preset condition may be that the first training error is less than a specified value. The specified value may be determined on a case-by-case basis. And then, training the first word vector model through the training text until the first training error meets a first preset condition to obtain the trained first word vector model, thereby obtaining a preset weight matrix.
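One training step under this scheme, cross-entropy of a softmax output against the center word's one-hot vector followed by a gradient-descent update of both weight matrices, might look like the toy sketch below. The tiny dimensions, the omission of the extra feature dims, and the learning rate are all assumptions; this illustrates the technique, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
V, N, lr = 5, 3, 0.1                      # toy vocabulary, hidden size, learning rate
W_in = rng.standard_normal((V, N)) * 0.1  # preset weight matrix (extra feature dims omitted)
W_out = rng.standard_normal((N, V)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(context_vecs, target_onehot):
    """One gradient-descent update: cross-entropy of the softmax output
    against the center word's one-hot vector, back-propagated to both matrices."""
    global W_in, W_out
    hidden = sum(v @ W_in for v in context_vecs)   # intermediate training vector
    probs = softmax(hidden @ W_out)                # output scores -> probabilities
    loss = -np.log(probs[target_onehot.argmax()])  # cross-entropy training error
    d_scores = probs - target_onehot               # gradient w.r.t. output scores
    d_W_out = np.outer(hidden, d_scores)
    d_hidden = W_out @ d_scores                    # computed before updating W_out
    W_out -= lr * d_W_out
    for v in context_vecs:
        W_in -= lr * np.outer(v, d_hidden)
    return loss

target = np.eye(V)[2]                 # center training word at index 2
ctx = [np.eye(V)[0], np.eye(V)[4]]    # two related training words
losses = [train_step(ctx, target) for _ in range(20)]
```

Repeating the step on a fixed example drives the cross-entropy down, which is the "first training error meets the first preset condition" stopping criterion in miniature.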

In another example, the preset weight matrix may be derived by training a second word vector model with training text. Wherein the second word vector model may be a Skip-Gram model. Training the second word vector model refers to a process of adjusting and optimizing second model parameters of the second word vector model. The second model parameters of the second word vector model may include a weight matrix between the input layer and the hidden layer, and a weight matrix between the hidden layer and the output layer. The preset weight matrix may be a weight matrix between the input layer and the hidden layer.

Referring to fig. 4, fig. 4 is a schematic diagram illustrating a structure of a second word vector model. As shown in fig. 4, the second word vector model may include an input layer 401, a hidden layer 402, and an output layer 403. The dimension of the weight matrix between the input layer 401 and the hidden layer 402 may be (V +1000+271+30) × N, where N is the number of neurons in the hidden layer 402. N may be determined from the number V of words in the word table. Specifically, when V is large, N may be relatively large, and when V is small, N may be relatively small. The dimension of the weight matrix between the hidden layer 402 and the output layer 403 may be N x V.

The input layer 401 is used to input the second training vector corresponding to the central training word. The hidden layer 402 is configured to process the second training vector to obtain an intermediate training vector and transmit the intermediate training vector to the output layer 403. Specifically, the hidden layer 402 may multiply the second training vector by the preset weight matrix (i.e., the weight matrix between the input layer 401 and the hidden layer 402) to obtain the intermediate training vector and transmit it to the output layer 403. The output layer 403 multiplies the intermediate training vector by the weight matrix between the hidden layer 402 and the output layer 403 to obtain the finally output second training results, each of which is a 1 x V-dimensional vector.

Referring to fig. 5, fig. 5 is a schematic flow chart illustrating training of a second word vector model to obtain a predetermined weight matrix. As shown in fig. 5, before the generating the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector, and the preset weight matrix corresponding to the target word, the method may include:

s501, obtaining a training text, and splitting the training text to obtain each training word.

S502, determining a central training word, and acquiring a related training word corresponding to the central training word and an initial word vector corresponding to the related training word, wherein the central training word is any one of the training words.

S501 is similar to S301, and S502 is similar to S302, and the basic principle is the same, and for brevity, the description is omitted here.

S503, obtaining an initial word vector, an image characteristic vector, an etymon characteristic vector and a pinyin characteristic vector corresponding to the central training word, and combining the initial word vector, the image characteristic vector, the etymon characteristic vector and the pinyin characteristic vector corresponding to the central training word to obtain a second training vector corresponding to the central training word.

S504, inputting the second training vector to a second word vector model for processing to obtain a second training result output by the second word vector model.

The initial word vector corresponding to the central training word may also be the one-hot encoding corresponding to the central training word. Therefore, in the embodiment of the present application, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the central training word may be directly obtained, and then the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the central training word may be spliced to obtain the second training vector corresponding to the central training word, where the second training vector may be a 1 x (V+1000+271+30)-dimensional vector.

It is understood that the Skip-Gram model can predict a plurality of related training words corresponding to a center training word according to the center training word.

In this embodiment, the second training vector corresponding to the center training word may be input to the input layer 401 of the second word vector model (i.e., Skip-Gram model). The input layer 401 of the second word vector model may pass the second training vector corresponding to the center training word to the hidden layer 402. The hidden layer 402 may multiply the second training vector by a preset weight matrix (i.e., a weight matrix between the input layer 401 and the hidden layer 402) to obtain an intermediate training vector, and transmit the intermediate training vector to the output layer 403. The output layer 403 multiplies the intermediate training vector by the weight matrix between the hidden layer 402 and the output layer 403 to obtain each of the second training results that are finally output, and each of the second training results is a 1 x V-dimensional vector.
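The Skip-Gram forward pass differs from the CBOW sketch only in that a single center-word vector goes in and the same 1 x V score vector is read out once per context position. A toy sketch with illustrative sizes and names:

```python
import numpy as np

rng = np.random.default_rng(2)
D, N, V = 10, 4, 6   # toy sizes; the real D is V + 1000 + 271 + 30

W_in = rng.standard_normal((D, N))    # preset weight matrix (input -> hidden)
W_out = rng.standard_normal((N, V))   # hidden -> output weight matrix

def skipgram_forward(center_vector, n_context):
    """One second training vector in; one 1 x V second training result
    per related-training-word position out (identical before the loss)."""
    hidden = center_vector @ W_in
    scores = hidden @ W_out
    return [scores] * n_context

center = rng.standard_normal(D)       # second training vector for the center word
outputs = skipgram_forward(center, n_context=4)
```

The per-position results only differ once each is compared against a different related training word's one-hot vector in the loss.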

S505, determining a second training error of the second word vector model according to the second training result and the initial word vector corresponding to the related training word.

S506, judging whether the second training error meets a second preset condition.

S507, when the second training error does not meet a second preset condition, adjusting second model parameters of the second word vector model, returning to execute the step of obtaining the training text and splitting the training text to obtain each training word and the subsequent steps, wherein the second model parameters comprise the preset weight matrix, and the preset weight matrix is the weight matrix between the input layer and the hidden layer of the second word vector model.

S508, when the second training error meets the second preset condition, obtaining the preset weight matrix.

For S505 to S508, after obtaining each second training result output by the second word vector model, each second training result may be compared with the initial word vector of each related training word corresponding to the central training word, so as to determine a second training error of the second word vector model. When the second training error does not satisfy the second preset condition, the second model parameters of the second word vector model, that is, the weight matrix between the input layer 401 and the hidden layer 402 and the weight matrix between the hidden layer 402 and the output layer 403 may be updated by a gradient descent method. The second preset condition may be that the second training error is less than a specified value. The specified value may be determined on a case-by-case basis. And then, training the second word vector model through the training text until the second training error meets a second preset condition to obtain the trained second word vector model, thereby obtaining a preset weight matrix.

In the embodiment of the present application, generating the target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector, and the preset weight matrix may proceed as follows: first, the initial word vector, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to the target word are combined; then, the combined vector is multiplied by the preset weight matrix to obtain the target word vector corresponding to the target word. Here, combining the vectors refers to splicing the initial word vector, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to the target word to obtain a 1 x (V+1000+271+30)-dimensional combined vector.
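The generation step above reduces to one concatenation and one matrix multiplication; a sketch with stand-in dimensions (the real feature widths are 1000, 271, and 30; the toy widths, names, and random inputs below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
V, N = 6, 4                        # toy word-table size and hidden size
EXTRA = 9                          # stand-in for the real 1000 + 271 + 30 feature dims
D = V + EXTRA

W = rng.standard_normal((D, N))    # trained preset weight matrix (input -> hidden)

def target_word_vector(initial, image, radical, pinyin):
    """Splice the four vectors, then multiply by the preset weight matrix."""
    combined = np.concatenate([initial, image, radical, pinyin])  # 1 x D combined vector
    return combined @ W                                           # 1 x N target word vector

initial = np.eye(V)[1]              # one-hot initial word vector
image = rng.standard_normal(4)      # toy image feature vector (real width: 1000)
radical = rng.standard_normal(3)    # toy etymon feature vector (real width: 271)
pinyin = rng.standard_normal(2)     # toy pinyin feature vector (real width: 30)

wv = target_word_vector(initial, image, radical, pinyin)
```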

In the embodiment of the application, when a target word vector corresponding to a target word needs to be generated, an initial word vector, an image feature vector, an etymon feature vector and a pinyin feature vector corresponding to the target word can be determined first. Then, the target word vector corresponding to the target word can be generated according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector and the preset weight matrix corresponding to the target word. The embodiment of the application generates the word vector by combining the text information, the pictographic character image information, the etymon information and the pinyin information, so that the generated word vector has rich feature information, can fully embody the attribute features of the word, and accords with the characteristics of Chinese characters, thereby providing a more reliable word vector for subsequent natural language processing, improving the accuracy of natural language processing, and greatly expanding the application range of natural language processing.

It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.

Fig. 6 shows a block diagram of a word vector generating apparatus provided in an embodiment of the present application, corresponding to the word vector generating method described in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description.

Referring to fig. 6, the word vector generating apparatus may include:

a target word obtaining module 601, configured to obtain a target word and determine an initial word vector corresponding to the target word;

a feature vector determining module 602, configured to determine an image feature vector corresponding to the target word, determine a radical feature vector corresponding to the target word, and determine a pinyin feature vector corresponding to the target word;

a word vector generating module 603, configured to generate a target word vector corresponding to the target word according to the initial word vector, the image feature vector, the etymon feature vector, the pinyin feature vector, and a preset weight matrix corresponding to the target word.

Illustratively, the word vector generating apparatus may further include:

the word table building module is used for building a word table, and the word table comprises a plurality of preset words;

the image acquisition module is used for acquiring a pictographic image corresponding to each preset character;

and the image characteristic vector construction module is used for constructing the image characteristic vector corresponding to the preset word according to the pictographic image corresponding to the preset word.

Specifically, the pictographic image corresponding to the preset word includes a plurality of pictographic images, and the image feature vector construction module is specifically configured to input the plurality of pictographic images corresponding to the preset word to a preset image recognition model respectively, and acquire each initial image feature vector extracted by a target network layer of the image recognition model, where the target network layer is a last layer of network of the image recognition model; and performing mean value calculation on the initial image feature vectors, and determining the mean value image feature vector obtained by calculation as the image feature vector corresponding to the preset word.

Optionally, the word vector generating apparatus may further include:

and the etymon feature vector construction module is used for acquiring the basic etymons and the preset etymons corresponding to each preset character, and constructing the etymon feature vector corresponding to each preset character according to the basic etymons and the preset etymons corresponding to each preset character.

In a possible implementation manner, the word vector generating apparatus may further include:

the first training text acquisition module is used for acquiring a training text and splitting the training text to obtain each training word;

the first central training character determining module is used for determining a central training character and acquiring an initial character vector corresponding to the central training character and a related training character corresponding to the central training character, wherein the central training character is any one of the training characters;

a first training vector obtaining module, configured to obtain an initial word vector, an image feature vector, an etymon feature vector, and a pinyin feature vector corresponding to the related training word, and combine the initial word vector, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to the related training word to obtain a first training vector corresponding to the related training word;

the first training result acquisition module is used for inputting the first training vector into a first word vector model for processing to obtain a first training result output by the first word vector model;

a first training error determining module, configured to determine a first training error of the first word vector model according to the first training result and an initial word vector corresponding to the central training word;

a first model parameter adjusting module, configured to adjust a first model parameter of the first word vector model when the first training error does not satisfy a first preset condition, and return to execute the step of obtaining a training text, and perform splitting processing on the training text to obtain steps of each training word and subsequent steps, where the first model parameter includes the preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the first word vector model;

and the first preset weight matrix obtaining module is used for obtaining the preset weight matrix when the first training error meets the first preset condition.

In another possible implementation manner, the word vector generating apparatus may further include:

the second training text acquisition module is used for acquiring a training text and splitting the training text to obtain each training word;

a second central training character determining module, configured to determine a central training character, and obtain a related training character corresponding to the central training character and an initial character vector corresponding to the related training character, where the central training character is any one of the training characters;

a second training vector obtaining module, configured to obtain an initial word vector, an image feature vector, an etymon feature vector, and a pinyin feature vector corresponding to the central training word, and combine the initial word vector, the image feature vector, the etymon feature vector, and the pinyin feature vector corresponding to the central training word to obtain a second training vector corresponding to the central training word;

the second training result acquisition module is used for inputting the second training vector into a second word vector model for processing to obtain a second training result output by the second word vector model;

a second training error determining module, configured to determine a second training error of the second word vector model according to the second training result and the initial word vector corresponding to the relevant training word;

a second model parameter adjusting module, configured to adjust a second model parameter of the second word vector model when the second training error does not satisfy a second preset condition, and return to the step of obtaining the training text and splitting the training text to obtain the training words, and to the subsequent steps, where the second model parameter includes the preset weight matrix, and the preset weight matrix is a weight matrix between an input layer and a hidden layer of the second word vector model;

and the second preset weight matrix obtaining module is used for obtaining the preset weight matrix when the second training error meets the second preset condition.
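The second-model flow reverses the prediction direction in the manner of skip-gram: the combined feature vector of the central training character predicts each related training character. A non-authoritative sketch, under the same assumptions as before (concatenation as the combination operation, softmax output):

```python
import numpy as np

def second_model_step(W_in, W_out, center_feats, related_ids, lr=0.1):
    """One skip-gram-style step of the second word vector model: the
    combined feature vector of the central training character predicts
    each related training character. W_in is the preset weight matrix
    between the input layer and the hidden layer; W_out projects the
    hidden layer onto the vocabulary. Both are updated in place."""
    x = np.concatenate(center_feats)          # second training vector
    h = x @ W_in                              # hidden layer
    z = h @ W_out
    p = np.exp(z - z.max())
    p /= p.sum()                              # second training result
    loss, dp = 0.0, np.zeros_like(p)
    for rid in related_ids:                   # second training error summed
        loss += -np.log(p[rid])               # over the related characters
        dp += p
        dp[rid] -= 1.0
    dh = dp @ W_out.T
    W_out -= lr * np.outer(h, dp)             # adjust second model parameters
    W_in -= lr * np.outer(x, dh)
    return float(loss)
```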

Optionally, the word vector generating module 603 may include:

the vector combining unit is used for combining the initial word vector, the image feature vector, the etymon feature vector and the pinyin feature vector corresponding to the target word to obtain a combined vector corresponding to the target word;

and the word vector generating unit is used for multiplying the combined vector by the preset weight matrix to obtain a target word vector corresponding to the target word.
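The generation step performed by these two units can be sketched in a few lines: the four feature vectors are combined (here by concatenation, one possible combination operation; the specification does not fix it) and the combined vector is multiplied by the trained preset weight matrix to yield the target word vector.

```python
import numpy as np

def generate_target_word_vector(initial_vec, image_vec, etymon_vec,
                                pinyin_vec, preset_weight_matrix):
    """Combine the four feature vectors of the target character (here by
    concatenation) and multiply the combined vector by the preset weight
    matrix, i.e. the trained input-to-hidden weights, to obtain the
    target word vector."""
    combined = np.concatenate([initial_vec, image_vec, etymon_vec, pinyin_vec])
    return combined @ preset_weight_matrix
```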

It should be noted that the information interaction between the above devices/units, their execution processes, and other such details are based on the same concept as the method embodiments of the present application; their specific functions and technical effects may be found in the method embodiment section and are not repeated here.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in Fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one is shown in Fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70. The processor 70 implements the steps in any of the word vector generation method embodiments described above when executing the computer program 72.

The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will appreciate that Fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine some components, or use different components, such as input/output devices and network access devices.

The processor 70 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The memory 71 may, in some embodiments, be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program 72. The memory 71 may also be used to temporarily store data that has been output or is to be output.

The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.

The embodiments of the present application further provide a computer program product which, when run on a terminal device, enables the terminal device to implement the steps in the above method embodiments.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable storage media may not include electrical carrier signals or telecommunications signals.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
