Text similarity determination method, device, equipment and medium

Document No.: 1127733  Publication date: 2020-10-02  Views: 12  Language: Chinese

Reading note: This technology, "Text similarity determination method, device, equipment and medium", was designed and created by Yu Xiaofeng, Qu Kang, Han You, and Zheng Litao on 2020-06-12. Its main content is as follows: The application discloses a text similarity determination method, device, equipment and medium, relating to natural language processing technology. The specific implementation scheme is: two feature extraction networks in a twin (Siamese) network structure encode a first input text and a second input text respectively to obtain sentence embeddings of the two texts, where each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text; the similarity of the first input text and the second input text is then determined by calculating the similarity between their sentence embeddings. The embodiments of the application determine text similarity with a twin network architecture and encode the text using the per-layer word representations while distinguishing word importance, so that the sentence embeddings obtained after encoding better conform to the semantic information of the text and are more accurate, thereby improving the accuracy of text similarity judgment.

1. A text similarity determination method comprises the following steps:

respectively encoding a first input text and a second input text by using two feature extraction networks in a twin network structure to obtain sentence embeddings of the first input text and the second input text, wherein each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text;

and determining the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

2. The method of claim 1, wherein the feature extraction network is a Transformer network.

3. The method of claim 1, wherein the two feature extraction networks share network parameters.

4. The method of claim 2, wherein the process of encoding any target text by the feature extraction network to obtain the sentence embedding of the target text comprises:

calculating the contextualized word vector of each word in the target text, and performing a weighted summation of the contextualized word vectors of the words to obtain the sentence embedding of the target text;

wherein the contextualized word vector is a weighted sum of the word's representations at each layer in the Transformer network.

5. The method of claim 4, wherein the weight corresponding to the contextualized word vector of each word is determined by the ratio of the diagonal variance of that word's contribution matrix to the sum of the diagonal variances of the contribution matrices of all the words;

wherein the contribution matrix is defined as follows:

in the contribution matrix G of the jth word w(j), the value of each matrix element gij is the similarity between the representation of w(j) at the ith layer and its representation at the jth layer of the Transformer network, where i and j are both natural numbers.

6. The method of claim 4, wherein the weight corresponding to a word's representation at each layer of the Transformer network is a composite weight, at that layer, of parameters that measure the importance of the word in the text, the parameters comprising at least an alignment similarity and a novelty.

7. A text similarity determination apparatus comprising:

a sentence embedding acquisition module, configured to respectively encode a first input text and a second input text by using two feature extraction networks in a twin network structure to obtain sentence embeddings of the first input text and the second input text, wherein each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text;

and a similarity determination module, configured to determine the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

8. The apparatus of claim 7, wherein the feature extraction network is a Transformer network.

9. The apparatus of claim 7, wherein the two feature extraction networks share network parameters.

10. The apparatus of claim 8, wherein the process of encoding any target text by the feature extraction network to obtain the sentence embedding of the target text comprises:

calculating the contextualized word vector of each word in the target text, and performing a weighted summation of the contextualized word vectors of the words to obtain the sentence embedding of the target text;

wherein the contextualized word vector is a weighted sum of the word's representations at each layer in the Transformer network.

11. The apparatus of claim 10, wherein the weight corresponding to the contextualized word vector of each word is determined by the ratio of the diagonal variance of that word's contribution matrix to the sum of the diagonal variances of the contribution matrices of all the words;

wherein the contribution matrix is defined as follows:

in the contribution matrix G of the jth word w(j), the value of each matrix element gij is the similarity between the representation of w(j) at the ith layer and its representation at the jth layer of the Transformer network, where i and j are both natural numbers.

12. The apparatus of claim 10, wherein the weight corresponding to a word's representation at each layer of the Transformer network is a composite weight, at that layer, of parameters that measure the importance of the word in the text, the parameters comprising at least an alignment similarity and a novelty.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the text similarity determination method of any one of claims 1-6.

14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the text similarity determination method of any one of claims 1-6.

Technical Field

The present application relates to the field of the Internet, and in particular, to a method, an apparatus, a device, and a medium for determining text similarity.

Background

Text similarity refers to the semantic similarity between two pieces of text. Text similarity calculation is a fundamental and critical problem in the field of NLP (natural language processing); it occupies an important position in industry and has rich application scenarios, such as information retrieval, hot-question recommendation, and intelligent customer service.

Therefore, it is important to determine the similarity between two texts accurately.

Disclosure of Invention

The embodiment of the application provides a method, a device, equipment and a medium for determining text similarity, so as to improve the accuracy of text similarity determination.

In a first aspect, an embodiment of the present application provides a text similarity determining method, including:

respectively encoding a first input text and a second input text by using two feature extraction networks in a twin network structure to obtain sentence embeddings of the first input text and the second input text, wherein each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text;

and determining the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

In a second aspect, an embodiment of the present application further provides a text similarity determining apparatus, including:

a sentence embedding acquisition module, configured to respectively encode a first input text and a second input text by using two feature extraction networks in a twin network structure to obtain sentence embeddings of the first input text and the second input text, wherein each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text;

and a similarity determination module, configured to determine the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

In a third aspect, an embodiment of the present application further provides an electronic device, including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a text similarity determination method according to any of the embodiments of the present application.

In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the text similarity determination method according to any embodiment of the present application.

According to the technical solution of the embodiments of the application, text similarity is determined by using a twin network architecture, and the text is encoded by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text, so that the sentence embeddings obtained after encoding better conform to the semantic information of the text and are more accurate, thereby improving the accuracy of text similarity judgment.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become readily apparent from the following description, and other effects of the above alternatives will be described hereinafter in conjunction with specific embodiments.

Drawings

The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:

fig. 1 is a flowchart illustrating a text similarity determining method according to a first embodiment of the present application;

fig. 2 is a flowchart illustrating a text similarity determining method according to a second embodiment of the present application;

fig. 3 is a schematic structural diagram of a text similarity determination apparatus according to a third embodiment of the present application;

fig. 4 is a block diagram of an electronic device for implementing a text similarity determination method according to an embodiment of the present application.

Detailed Description

The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to facilitate understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted in the following description for clarity and conciseness.

Fig. 1 is a flowchart illustrating a text similarity determining method according to a first embodiment of the present application, which is applicable to a case of determining text similarity, for example, in a scenario of intelligent customer service or hot question recommendation, a text similar to an input query (search term) is searched from a search library. The method may be performed by a text similarity determination apparatus, which is implemented in software and/or hardware, and is preferably configured in an electronic device, such as a computer device or a server. As shown in fig. 1, the method specifically includes the following steps:

s101, coding a first input text and a second input text respectively by utilizing two feature extraction networks in a twin network structure to obtain sentence embedding of the first input text and the second input text, wherein the feature extraction networks are used for coding the texts by utilizing word representation in each layer of the network and distinguishing the importance of words in the texts.

The twin network structure (Siamese network) comprises two feature extraction networks that have the same network structure and share network parameters. The two texts are encoded by the same method and mapped into a new space to obtain their sentence embeddings (i.e., vector representations of the texts). Because the twin network structure shares parameters, the model is smaller and easier to train.
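As a rough illustration of the parameter-sharing idea only (not of the Transformer encoder used in the embodiments), the sketch below routes both branches through one encoder object, so the branches share parameters by construction; the toy `SharedEncoder` is a hypothetical stand-in.

```python
# Minimal sketch of a twin (Siamese) structure: both branches reuse the SAME
# encoder instance, so the network parameters are shared by construction.
# The encoder below is an illustrative stand-in (character-code averaging),
# not the Transformer encoder described in the embodiments.

class SharedEncoder:
    def __init__(self, dim=4):
        self.dim = dim  # embedding dimension (illustrative)

    def encode(self, text):
        # Map the text to a fixed-size vector; a real implementation would
        # run a Transformer and pool its per-layer word representations.
        vec = [0.0] * self.dim
        for i, ch in enumerate(text):
            vec[i % self.dim] += ord(ch)
        n = max(len(text), 1)
        return [v / n for v in vec]

class SiameseNetwork:
    def __init__(self, encoder):
        # One encoder instance serves both branches -> shared parameters.
        self.encoder = encoder

    def forward(self, text_a, text_b):
        return self.encoder.encode(text_a), self.encoder.encode(text_b)

net = SiameseNetwork(SharedEncoder())
emb_a, emb_b = net.forward("hello", "hello")
```

Because a single encoder serves both branches, identical inputs necessarily map to identical embeddings, which is the property the twin structure relies on.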

It should be noted that, in a conventional encoding method, after each layer of the network performs feature extraction on the input text, the pooling step usually takes the output of the last layer of the network and applies max pooling or average pooling to obtain the final embedding of the text. However, a text is composed of multiple words, and because the importance of the words in the text is not considered, high-quality sentence embeddings cannot be obtained, and accurate similarity calculation and similar-text search cannot be performed.

Therefore, in the embodiments of the present application, the importance of words in the text is taken into account during encoding. For example, verbs, nouns, and adjectives are obviously more important than function words such as particles and auxiliary words, and words that contribute to the text's semantics are more important than other words. Words in the text can thus be distinguished by importance: larger weights are given to the word vectors of highly important words and smaller weights to those of less important words, so that the final sentence embedding reflects the importance of different words, conforms better to the semantic information of the text, and is more accurate. Specifically, word importance can be distinguished by using the representations of the words at each layer of the network. Each layer's representation is the word-vector (or sentence-vector) output of that network layer, and different layers capture different linguistic attributes of each word. Fusing linguistic information across layers, instead of using only the output of the last layer, therefore yields a more accurate sentence representation (sentence embedding): the semantic information of the words in the text is extracted more fully, word importance is distinguished accurately, and high-quality sentence embeddings are obtained.
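The importance-weighted pooling described above can be sketched as follows; the word vectors and weights are illustrative assumptions, not values produced by the claimed network.

```python
# Sketch of importance-weighted pooling: content words receive larger
# weights than function words, so the sentence embedding is dominated by
# the words that carry the semantics. All numbers are illustrative.

def weighted_sentence_embedding(word_vectors, weights):
    """Weighted sum of word vectors; weights are normalized to sum to 1."""
    total = sum(weights)
    dim = len(word_vectors[0])
    sent = [0.0] * dim
    for vec, w in zip(word_vectors, weights):
        for d in range(dim):
            sent[d] += (w / total) * vec[d]
    return sent

# Toy example for "the cat sleeps": the noun and verb are weighted far
# above the determiner, a function word.
vectors = [[0.1, 0.1], [0.9, 0.2], [0.3, 0.8]]
weights = [0.1, 1.0, 1.0]  # low weight for "the"
emb = weighted_sentence_embedding(vectors, weights)
```

With equal weights this would reduce to average pooling; the unequal weights are what let the embedding reflect word importance.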

S102, determining the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

After the sentence embeddings of the first input text and the second input text are obtained through the feature extraction networks, the similarity between the two sentence embeddings can be calculated (for example, through a softmax classifier), thereby determining the similarity between the first input text and the second input text. The similarity between sentence embeddings may be calculated by cosine similarity, Manhattan distance, Euclidean distance, or the like, which is not limited in the embodiments of the present application.
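Cosine similarity, one of the measures mentioned above, can be computed as in this short sketch (the input vectors are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors point in the same direction, so their similarity is 1.0.
sim = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Manhattan or Euclidean distance could be substituted by the same pattern; the embodiments leave the choice open.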

Thus, the two feature extraction networks and the softmax layer form a complete twin network architecture: the two feature extraction networks share parameters and encode the input texts to obtain their vector representations in a new space, i.e., sentence embeddings, while the softmax layer performs the similarity calculation on the sentence embeddings of the two texts. During training, each pair of training samples can be labeled as similar or dissimilar; the training stage minimizes the loss function value for similar sample pairs and pushes apart dissimilar sample pairs. In other words, through training the network learns a similarity measure from the data, and the learned measure makes similar text sentences produce similar embeddings, which can then be used to compare new, unseen samples. The twin network architecture is therefore also suitable for cases with a large number of classes but few training samples, and it is computationally efficient.
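One common way to realize the pair training described above is a margin-based contrastive objective; this is a hedged sketch under that assumption, since the embodiments do not specify the exact loss, and the margin value and embeddings are illustrative.

```python
import math

# Hedged sketch of a contrastive objective for pair training: similar pairs
# are pulled together (small distance -> small loss), dissimilar pairs are
# pushed apart up to a margin. The margin of 1.0 is an assumption.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    """label=1 for a similar pair, label=0 for a dissimilar pair."""
    d = euclidean(emb_a, emb_b)
    return label * d ** 2 + (1 - label) * max(margin - d, 0.0) ** 2

# Two nearby embeddings: cheap if labeled similar, penalized if dissimilar.
loss_sim = contrastive_loss([0.0, 0.0], [0.1, 0.0], label=1)
loss_dis = contrastive_loss([0.0, 0.0], [0.1, 0.0], label=0)
```

Minimizing this loss over labeled pairs is what drives the shared encoder to map similar texts to nearby embeddings.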

In a specific embodiment, the feature extraction network is a Transformer network. The Transformer is an NLP model with high computation speed and high accuracy. Combining a Transformer network with the twin network structure, that is, using Transformer networks to encode the two input texts and then calculating their similarity, provides high execution efficiency and suits industrial applications such as intelligent customer service and hot-question recommendation, which involve real-time, large-scale semantic similarity judgment as well as search and retrieval based on semantic similarity.

According to the technical solution of this embodiment, text similarity is determined by using a twin network architecture, and the text is encoded by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text, so that the sentence embeddings obtained after encoding better conform to the semantic information of the text and are more accurate, thereby improving the accuracy of text similarity judgment.

Fig. 2 is a flowchart illustrating a text similarity determining method according to a second embodiment of the present application; on the basis of the foregoing embodiments, this embodiment further optimizes the encoding operation, taking a Transformer network as an example. As shown in fig. 2, the method specifically includes the following steps:

s201, coding a first input text and a second input text respectively by using two transform networks in a twin network structure to obtain sentence embedding of the first input text and the second input text, wherein the transform networks are used for coding the texts based on the weighted sum of contextualized word vectors of each word.

Here, a contextualized word vector is defined as the weighted sum of a word's representations at each layer in the Transformer network. That is, since different layers of the Transformer capture different linguistic attributes of each word, fusing linguistic information across layers yields a more accurate sentence representation (sentence embedding): the semantic information of the words in the text is extracted more fully, and the importance of different words can be reflected, thereby producing high-quality sentence embeddings.

Therefore, the process of encoding any target text by a Transformer network to obtain the sentence embedding of the target text includes: calculating the contextualized word vector of each word in the target text, and performing a weighted summation of the contextualized word vectors to obtain the sentence embedding of the target text, where each contextualized word vector is a weighted sum of the word's representations at each layer in the Transformer network.
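The per-word fusion step can be sketched as follows, with toy layer representations and layer weights standing in for the Transformer outputs and the layer-importance parameters:

```python
# Sketch of forming a contextualized word vector as a weighted sum of one
# word's representations across the Transformer layers. The representations
# and layer weights below are illustrative placeholders.

def contextualized_word_vector(layer_reps, layer_weights):
    """Fuse one word's per-layer representations into a single vector.

    layer_reps: list of L vectors, one per network layer.
    layer_weights: one importance weight per layer (normalized here).
    """
    total = sum(layer_weights)
    dim = len(layer_reps[0])
    out = [0.0] * dim
    for rep, w in zip(layer_reps, layer_weights):
        for d in range(dim):
            out[d] += (w / total) * rep[d]
    return out

# Three layers with 2-dimensional representations (toy numbers):
reps = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
cwv = contextualized_word_vector(reps, [0.2, 0.3, 0.5])
```

Using only the last layer would correspond to layer weights of [0, 0, 1]; the cross-layer weighting is what fuses the different linguistic attributes.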

Here, the weight corresponding to the contextualized word vector of each word is determined by the ratio of the offset-1 diagonal variance of that word's contribution matrix to the sum of the offset-1 diagonal variances of the contribution matrices of all the words;

wherein the contribution matrix is defined as follows:

in the contribution matrix G of the jth word w(j), the value of each matrix element gij is the similarity between the representation of w(j) at the ith layer and its representation at the jth layer of the Transformer network, where i and j are both natural numbers, and the diagonal variance is the offset-1 diagonal variance (the variance of the elements one step off the main diagonal).

In one embodiment, the representation at each layer of the Transformer network is the word-vector output of that layer, and the value of each matrix element gij in the contribution matrix is determined by the similarity between the ith-layer representation and the jth-layer representation. After the contribution matrix of each word is determined, the offset-1 diagonal variance of the matrix is calculated, and the ratio of each word's offset-1 diagonal variance to the sum of the offset-1 diagonal variances over all the words is used as the weight corresponding to that word's contextualized word vector. Words with a large offset-1 diagonal variance are mainly important words such as nouns and verbs, which usually carry richer semantic information, while words with a small offset-1 diagonal variance carry less information. Taking the ratio as the weight therefore gives the important words, whose variance is larger, correspondingly larger weights, which distinguishes words of different importance and yields more accurate sentence embeddings.
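Under the definitions above, a minimal sketch of the contribution matrix, its offset-1 diagonal variance, and the resulting per-word weights might look like the following; the toy layer representations are assumptions, and cosine similarity is assumed as the similarity measure, which the embodiment does not fix.

```python
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def contribution_matrix(layer_reps):
    """g[i][j] = similarity of the word's layer-i and layer-j representations."""
    n = len(layer_reps)
    return [[cos(layer_reps[i], layer_reps[j]) for j in range(n)]
            for i in range(n)]

def offset1_diagonal_variance(g):
    """Variance of the elements one step off the main diagonal, i.e. the
    similarities between adjacent layers."""
    diag = [g[i][i + 1] for i in range(len(g) - 1)]
    mean = sum(diag) / len(diag)
    return sum((x - mean) ** 2 for x in diag) / len(diag)

def word_weights(all_layer_reps):
    """Per-word weight = its variance divided by the sum over all words."""
    variances = [offset1_diagonal_variance(contribution_matrix(reps))
                 for reps in all_layer_reps]
    total = sum(variances)
    return [v / total for v in variances]

# Toy data: a content word whose representation drifts across layers, and a
# function word whose representation stays nearly constant.
reps_noun = [[1.0, 0.0], [0.9, 0.1], [0.5, 0.5]]
reps_stop = [[1.0, 0.0], [1.0, 0.01], [1.0, 0.02]]
weights = word_weights([reps_noun, reps_stop])
```

The drifting (content) word gets the larger weight, matching the observation that words with large offset-1 diagonal variance tend to carry richer semantics.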

In addition, the weight corresponding to a word's representation at each layer of the Transformer network is a composite weight, at that layer, of parameters that measure the word's importance in the text; the parameters comprise at least an alignment similarity and a novelty. These parameters are summed in their respective proportions to obtain the composite weight, so that more important words receive higher weights and words of different importance are further distinguished.
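The combination of per-layer importance parameters might be sketched as below; the alignment and novelty scores and the fixed mixing ratio `alpha` are illustrative assumptions, since the embodiment only states that the parameters are summed in their respective proportions.

```python
# Hedged sketch: a per-layer composite weight mixing an "alignment
# similarity" score and a "novelty" score in fixed proportions. The score
# values and the ratio alpha are illustrative, not the claimed formulas.

def composite_layer_weights(alignment, novelty, alpha=0.5):
    """Combine two per-layer importance scores and normalize to sum to 1."""
    raw = [alpha * a + (1 - alpha) * n for a, n in zip(alignment, novelty)]
    total = sum(raw)
    return [r / total for r in raw]

# Three layers with illustrative alignment and novelty scores:
w = composite_layer_weights([0.2, 0.6, 0.9], [0.8, 0.3, 0.1])
```

These normalized weights would then serve as the layer weights when the per-layer representations are fused into a contextualized word vector.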

S202, determining the similarity of the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

In addition, in one embodiment, the model may be pruned and distilled to optimize the prediction speed of the model.

According to the technical solution of this embodiment, text similarity is determined by using a twin network architecture; by fusing linguistic information across layers, the semantic information of the words in the text is fully extracted, and word importance is distinguished on that basis, so that accurate sentence representations and high-quality sentence embeddings are obtained and the accuracy of text similarity judgment is improved.

Fig. 3 is a schematic structural diagram of a text similarity determination apparatus according to a third embodiment of the present application, which is applicable to a case of determining text similarity, for example, in a scenario of intelligent customer service or hot question recommendation, a text similar to an input query is searched from a search library. The device can realize the text similarity determination method in any embodiment of the application. As shown in fig. 3, the apparatus 300 specifically includes:

a sentence embedding acquisition module 301, configured to respectively encode a first input text and a second input text by using two feature extraction networks in a twin network structure to obtain sentence embeddings of the first input text and the second input text, where each feature extraction network encodes a text by using the representations of words at each layer of the network and by distinguishing the importance of the words in the text;

a similarity determining module 302, configured to determine the similarity between the first input text and the second input text by calculating the similarity between the sentence embeddings of the first input text and the second input text.

Optionally, the feature extraction network is a Transformer network.

Optionally, the two feature extraction networks share network parameters.

Optionally, the process of encoding any target text by the feature extraction network to obtain the sentence embedding of the target text includes:

calculating the contextualized word vector of each word in the target text, and performing a weighted summation of the contextualized word vectors of the words to obtain the sentence embedding of the target text;

wherein the contextualized word vector is a weighted sum of the word's representations at each layer in the Transformer network.

Optionally, the weight corresponding to the contextualized word vector of each word is determined by the ratio of the diagonal variance of that word's contribution matrix to the sum of the diagonal variances of the contribution matrices of all the words;

wherein the contribution matrix is defined as follows:

in the contribution matrix G of the jth word w(j), the value of each matrix element gij is the similarity between the representation of w(j) at the ith layer and its representation at the jth layer of the Transformer network, where i and j are both natural numbers.

Optionally, the weight corresponding to a word's representation at each layer of the Transformer network is a composite weight, at that layer, of parameters that measure the importance of the word in the text, where the parameters include at least an alignment similarity and a novelty.

The text similarity determining apparatus 300 provided in the embodiment of the present application may execute the text similarity determining method provided in any embodiment of the present application, and has functional modules and beneficial effects corresponding to the executing method. Reference may be made to the description of any method embodiment of the present application for details not explicitly described in this embodiment.

According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.

Fig. 4 is a block diagram of an electronic device for the text similarity determining method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the present application described and/or claimed herein.

As shown in fig. 4, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 4, one processor 401 is taken as an example.

Memory 402 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the text similarity determination method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the text similarity determination method provided by the present application.

The memory 402, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the text similarity determination method in the embodiment of the present application (for example, the sentence embedding acquisition module 301 and the similarity determination module 302 shown in fig. 3). The processor 401 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 402, that is, implements the text similarity determination method in the above-described method embodiment.

The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of an electronic device that implements the text similarity determination method of the embodiment of the present application, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 402 may optionally include a memory remotely located from the processor 401, and such remote memory may be connected via a network to an electronic device implementing the text similarity determination method of embodiments of the present application. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The electronic device for implementing the text similarity determination method according to the embodiment of the present application may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus.

The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus implementing the text similarity determination method, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output device 404 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

According to the technical solution of the embodiments of the present application, text similarity is determined using a twin network architecture, and the text is encoded using the representations of words at each layer of the network while distinguishing the importance of the words in the text, so that the sentence embeddings obtained after encoding better conform to the semantic information of the text and are more accurate, thereby improving the accuracy of text similarity determination.
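The encoding and comparison flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the per-layer word representations, the importance weights, and the fusion by averaging are all hypothetical stand-ins for whatever feature-extraction network and weighting scheme an embodiment actually uses. The key structural points it shows are that one shared encoder (the "twin" arrangement) processes both inputs, that each word's representations from all layers are combined, that words are pooled according to their importance, and that the final score is the similarity between the two sentence embeddings.

```python
import numpy as np

def encode(layer_reprs, word_weights):
    """Shared feature-extraction encoder (hypothetical sketch).

    layer_reprs: array of shape (num_layers, num_words, dim), the
                 representation of each word at each network layer.
    word_weights: array of shape (num_words,), the importance of each
                  word in the text (e.g., an IDF-style weight).
    Returns a sentence embedding of shape (dim,).
    """
    # Fuse each word's representations across all layers (here: mean).
    word_vecs = layer_reprs.mean(axis=0)
    # Normalize importance weights so they sum to 1.
    w = word_weights / word_weights.sum()
    # Importance-weighted pooling over words -> sentence embedding.
    return (word_vecs * w[:, None]).sum(axis=0)

def cosine_similarity(a, b):
    """Similarity between two sentence embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Twin usage: the SAME encoder is applied to both input texts.
rng = np.random.default_rng(0)
reprs1 = rng.normal(size=(4, 5, 8))      # 4 layers, 5 words, dim 8
reprs2 = rng.normal(size=(4, 6, 8))      # 4 layers, 6 words, dim 8
w1 = rng.uniform(0.1, 1.0, size=5)
w2 = rng.uniform(0.1, 1.0, size=6)

emb1 = encode(reprs1, w1)
emb2 = encode(reprs2, w2)
sim = cosine_similarity(emb1, emb2)       # similarity in [-1, 1]
```

In a real embodiment the two branches would share trained network parameters rather than consume precomputed arrays, but the twin structure guarantees, as in this sketch, that both texts are mapped into the same embedding space before comparison.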

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.

The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
