Comment information display method and device

Document No.: 1937514 | Publication date: 2021-12-07

Note: This technique, "Comment information display method and device" (评论信息展示方法和装置), was designed and created by 张晓辉, 谢奇奇 and 刘朋樟 on 2021-02-02. Abstract: The disclosure provides a comment information display method and device, relating to the field of information processing. The method includes: obtaining a piece of comment information to be processed and segmenting it into a plurality of comment sentences; determining a service evaluation value of each comment sentence according to a preset service evaluation item; determining the similarity between every two comment sentences and, for any two comment sentences whose similarity is greater than a similarity threshold, filtering out the one with the lower service evaluation value; extracting at least one comment sentence from all the retained comment sentences such that the total service evaluation value of the extracted sentences is maximized while their total information length does not exceed the information display length; and displaying all the extracted comment sentences. This meets the information display requirement in scenarios where the information display space is limited, enables users to acquire information quickly and effectively, and gives the displayed information good diversity.

1. A comment information display method is characterized by comprising the following steps:

obtaining a piece of comment information to be processed, and segmenting the comment information into a plurality of comment sentences;

determining a service evaluation value of each comment sentence according to a preset service evaluation item;

determining the similarity between every two comment sentences among the plurality of comment sentences, and, for any two comment sentences whose similarity is greater than a similarity threshold, filtering out the one with the lower service evaluation value;

extracting at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length;

and displaying all extracted comment sentences.

2. The method of claim 1, wherein determining a similarity between each two of the plurality of comment sentences comprises:

inputting any two comment sentences of the plurality of comment sentences into a similarity model, and obtaining the similarity of the two comment sentences output by the similarity model, wherein the similarity model is obtained by training a twin neural network.

3. The method of claim 2, wherein training the twin neural network comprises:

for a plurality of groups of training samples, inputting the first training sentence and the second training sentence of each group of training samples into the first neural network and the second neural network of the twin neural network respectively, to obtain a vector of the first training sentence output by the first neural network and a vector of the second training sentence output by the second neural network;

calculating a loss value according to a loss function, updating the parameters of the twin neural network according to the loss value, and taking the trained twin neural network as the similarity model, wherein the loss function is constructed according to the similarity labeling information between the first training sentence and the second training sentence of each of the plurality of groups of training samples and the distance information between the vector of the first training sentence and the vector of the second training sentence.

4. The method of claim 1, wherein determining the service evaluation value of each comment sentence comprises one or more of:

determining a positive and negative evaluation value of each comment sentence;

determining a service category evaluation value of each comment sentence;

determining a matching degree evaluation value of preset sentence features of each comment sentence.

5. The method of claim 4,

determining the positive and negative evaluation value of each comment sentence comprises: inputting each comment sentence into a positive and negative evaluation model, and acquiring the positive and negative evaluation value of each comment sentence output by the positive and negative evaluation model;

determining the service category evaluation value of each comment sentence comprises: inputting each comment sentence into a service category evaluation model, and acquiring the service category evaluation value of each comment sentence output by the service category evaluation model;

wherein the positive and negative evaluation model and the service category evaluation model are obtained by performing multi-task training on a first convolutional neural network and a second convolutional neural network.

6. The method of claim 5,

during the multi-task training, parameters of the word embedding layers, convolutional layers and pooling layers of the first convolutional neural network and the second convolutional neural network are shared; the loss function of the multi-task training is determined according to a weighted sum of a first loss function of the first convolutional neural network and a second loss function of the second convolutional neural network, where the first loss function is determined according to the total number of classes of the first convolutional neural network and the values of the classes, and the second loss function is determined according to the total number of classes of the second convolutional neural network and the values of the classes.

7. The method of claim 4, wherein determining the matching degree evaluation value of the preset sentence features of each comment sentence comprises:

pre-calculating the co-occurrence probability of n words in the corpus related to the preset sentence features by using a modified Kneser-Ney smoothed n-gram language model;

determining the co-occurrence probability of n words in each comment sentence by matching against the pre-calculated results;

and determining the perplexity of each comment sentence according to the co-occurrence probabilities of all the n words of the comment sentence, the perplexity being used as the matching degree evaluation value.

8. The method of claim 4, wherein determining the service evaluation value of each comment sentence comprises:

when there are a plurality of service evaluation items, performing a weighted summation over the various evaluation values of each comment sentence, and taking the weighted sum as the service evaluation value of the comment sentence.

9. The method of claim 1, wherein extracting at least one comment sentence from all the comment sentences that remain comprises:

using a knapsack problem solving method, with the information display length as the size of the knapsack and the total service evaluation value of the extracted comment sentences as the value of the knapsack, extracting at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length.

10. The method of claim 1, further comprising:

filtering out, by using the preset words for filtering in a filtering word bank, the preset words appearing in the comment sentences, or filtering out the comment sentences containing the preset words.

11. The method of claim 10, wherein the preset words for filtering in the filtering word bank comprise: context transition words, negative words, and marketing words.

12. A comment information display apparatus comprising:

a memory; and

a processor coupled to the memory, the processor configured to execute the comment information display method of any of claims 1-11 based on instructions stored in the memory.

13. A comment information display apparatus characterized by comprising:

an information acquisition and segmentation module configured to acquire a piece of comment information to be processed and segment the comment information into a plurality of comment sentences;

an evaluation value determining module configured to determine a service evaluation value of each comment sentence according to a preset service evaluation item;

a filtering module configured to determine the similarity between every two comment sentences among the plurality of comment sentences, and, for any two comment sentences whose similarity is greater than a similarity threshold, filter out the one with the lower service evaluation value;

a sentence extraction module configured to extract at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length;

and a display module configured to display all the extracted comment sentences.

14. The apparatus of claim 13,

the filtering module is further configured to filter preset words appearing in the comment sentences or filter the comment sentences containing the preset words by using the preset words for filtering in the filtering word bank.

15. A non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the comment information display method of any one of claims 1-11.

Technical Field

The disclosure relates to the field of information processing, in particular to a comment information display method and device.

Background

With the development of technology, many users conduct e-commerce transactions on mobile devices. The screen of a mobile device is relatively small, and the information display space is limited. Some high-quality comments are long and occupy a large display space on a mobile device, so an e-commerce platform typically displays only the first small part of a comment, or selects short comments for display on the commodity display page. As a result, the displayed comments often lack substantial content and fail to reflect commodity characteristics, and users cannot acquire information quickly and effectively. It is therefore increasingly important to deliver more high-quality content within the limited information display space of mobile devices so that users can acquire information quickly and effectively.

Disclosure of Invention

In order to solve the above problem, embodiments of the disclosure provide a comment information display method and device.

Some embodiments of the present disclosure provide a comment information display method, including:

obtaining a piece of comment information to be processed, and segmenting the comment information into a plurality of comment sentences;

determining a service evaluation value of each comment sentence according to a preset service evaluation item;

determining the similarity between every two comment sentences among the plurality of comment sentences, and, for any two comment sentences whose similarity is greater than a similarity threshold, filtering out the one with the lower service evaluation value;

extracting at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length;

and displaying all extracted comment sentences.

In some embodiments, determining a similarity between each two of the plurality of comment sentences comprises:

inputting any two comment sentences of the plurality of comment sentences into a similarity model, and obtaining the similarity of the two comment sentences output by the similarity model, wherein the similarity model is obtained by training a twin neural network.

In some embodiments, training the twin neural network comprises:

for a plurality of groups of training samples, inputting the first training sentence and the second training sentence of each group of training samples into the first neural network and the second neural network of the twin neural network respectively, to obtain a vector of the first training sentence output by the first neural network and a vector of the second training sentence output by the second neural network;

calculating a loss value according to a loss function, updating the parameters of the twin neural network according to the loss value, and taking the trained twin neural network as the similarity model, wherein the loss function is constructed according to the similarity labeling information between the first training sentence and the second training sentence of each of the plurality of groups of training samples and the distance information between the vector of the first training sentence and the vector of the second training sentence.

In some embodiments, determining the service evaluation value of each comment sentence includes one or more of:

determining a positive and negative evaluation value of each comment sentence;

determining a service category evaluation value of each comment sentence;

determining a matching degree evaluation value of preset sentence features of each comment sentence.

In some embodiments, determining the positive and negative evaluation value of each comment sentence includes: inputting each comment sentence into a positive and negative evaluation model, and acquiring the positive and negative evaluation value of each comment sentence output by the model; determining the service category evaluation value of each comment sentence includes: inputting each comment sentence into a service category evaluation model, and acquiring the service category evaluation value of each comment sentence output by the model; wherein the positive and negative evaluation model and the service category evaluation model are obtained by performing multi-task training on a first convolutional neural network and a second convolutional neural network.

In some embodiments, during the multi-task training, parameters of the word embedding layers, convolutional layers and pooling layers of the first and second convolutional neural networks are shared; the loss function of the multi-task training is determined according to a weighted sum of a first loss function of the first convolutional neural network and a second loss function of the second convolutional neural network, where the first loss function is determined according to the total number of classes of the first convolutional neural network and the values of the classes, and the second loss function is determined according to the total number of classes of the second convolutional neural network and the values of the classes.

In some embodiments, determining the matching degree evaluation value of the preset sentence features of each comment sentence includes:

pre-calculating the co-occurrence probability of n words in the corpus related to the preset sentence features by using a modified Kneser-Ney smoothed n-gram language model;

determining the co-occurrence probability of n words in each comment sentence by matching against the pre-calculated results;

and determining the perplexity of each comment sentence according to the co-occurrence probabilities of all the n words of the comment sentence, the perplexity being used as the matching degree evaluation value.

In some embodiments, determining the service evaluation value of each comment sentence includes: when there are a plurality of service evaluation items, performing a weighted summation over the various evaluation values of each comment sentence, and taking the weighted sum as the service evaluation value of the comment sentence.

In some embodiments, extracting at least one comment sentence from all the retained comment sentences includes: using a knapsack problem solving method, with the information display length as the size of the knapsack and the total service evaluation value of the extracted comment sentences as the value of the knapsack, extracting at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length.

In some embodiments, the method further comprises: filtering out, by using the preset words for filtering in a filtering word bank, the preset words appearing in the comment sentences, or filtering out the comment sentences containing the preset words.

In some embodiments, the preset words for filtering in the filtering word bank include: context transition words, negative words, and marketing words.

Some embodiments of the present disclosure provide a comment information display apparatus, including: a memory; and a processor coupled to the memory, the processor configured to execute the comment information display method based on instructions stored in the memory.

Some embodiments of the present disclosure provide a comment information display apparatus, including:

an information acquisition and segmentation module configured to acquire a piece of comment information to be processed and segment the comment information into a plurality of comment sentences;

an evaluation value determining module configured to determine a service evaluation value of each comment sentence according to a preset service evaluation item;

a filtering module configured to determine the similarity between every two comment sentences among the plurality of comment sentences, and, for any two comment sentences whose similarity is greater than a similarity threshold, filter out the one with the lower service evaluation value;

a sentence extraction module configured to extract at least one comment sentence from all the retained comment sentences, such that the total service evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length;

and a display module configured to display all the extracted comment sentences.

In some embodiments, the filtering module is further configured to filter preset words appearing in the comment sentences or filter comment sentences containing the preset words by using the preset words for filtering in the filtering word bank.

Some embodiments of the present disclosure provide a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the comment information display method.

According to the method and device, similar sentences among the original comment sentences are filtered out, and comment sentences meeting the service evaluation requirement are extracted for display. This meets the information display requirement in scenarios with limited display space, enables users to acquire information quickly and effectively, and gives the displayed information good diversity.

Drawings

The drawings that will be used in the description of the embodiments or the related art will be briefly described below. The present disclosure can be understood more clearly from the following detailed description, which proceeds with reference to the accompanying drawings.

It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without undue inventive faculty.

Fig. 1 shows a flow diagram of a review information presentation method of some embodiments of the present disclosure.

Fig. 2 is a schematic diagram illustrating a review information presentation method according to further embodiments of the present disclosure.

Figure 3 illustrates a schematic diagram of a twin neural network of some embodiments of the present disclosure.

Fig. 4a, 4b, 4c, 4d, 4e illustrate the LSTM model and its forget gate f, input gate i, internal memory cell c, and output gate o, respectively, according to some embodiments of the present disclosure.

Fig. 5 illustrates a schematic diagram of multi-task training of the positive and negative evaluation model and the service category evaluation model, according to some embodiments of the present disclosure.

Fig. 6 is a schematic structural diagram of a comment information display apparatus according to some embodiments of the present disclosure.

Fig. 7 is a schematic structural diagram of a comment information display apparatus according to another embodiment of the present disclosure.

Detailed Description

The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.

Unless otherwise specified, "first", "second", and the like in the present disclosure are described to distinguish different objects, and are not intended to mean size, timing, or the like.

Fig. 1 shows a flow diagram of a review information presentation method of some embodiments of the present disclosure.

As shown in fig. 1, the comment information display method of this embodiment includes steps 110-150.

In step 110, a piece of comment information to be processed is acquired, and the comment information (also called a comment long sentence) is segmented into a plurality of comment sentences (also called comment short sentences).

In step 120, a service evaluation value of each comment sentence is determined according to a preset service evaluation item.

There may be one or more service evaluation items, each corresponding to one evaluation value. When there is one service evaluation item, the evaluation value corresponding to that item is the service evaluation value of the comment sentence; when there are a plurality of service evaluation items, a weighted summation is performed over the various evaluation values of each comment sentence, and the weighted sum is taken as the service evaluation value of the comment sentence. The weight of each evaluation value can be set, for example, according to the importance of each service evaluation item.

Determining the service evaluation value of each comment sentence includes one or more of the following. Determining a positive and negative evaluation value of each comment sentence, to characterize whether the comment sentence is positive, negative or neutral. Determining a service category evaluation value of each comment sentence, to characterize the service category to which the comment sentence belongs, such as a service category or a non-service category; the service category can be further subdivided into sub-categories such as price, installation, packaging, delivery service and after-sales service. Determining a matching degree evaluation value of preset sentence features of each comment sentence, to characterize the degree to which the expression of the comment sentence matches the preset sentence features. For comment sentences about a commodity, the preset sentence feature may be, for example, an influencer article feature or a selling point feature. For comment sentences about other articles or other businesses, the preset sentence features may be selected adaptively and are not limited to the foregoing examples.

Determining the positive and negative evaluation value of each comment sentence includes: inputting each comment sentence into a positive and negative evaluation model, and acquiring the positive and negative evaluation value output by the model. Determining the service category evaluation value of each comment sentence includes: inputting each comment sentence into a service category evaluation model, and acquiring the service category evaluation value output by the model. The positive and negative evaluation model and the service category evaluation model are obtained by performing multi-task training on a first convolutional neural network and a second convolutional neural network.

During multitask training, parameters of a word embedding layer, a convolutional layer and a pooling layer of a first convolutional neural network and a second convolutional neural network are shared, a loss function of the multitask training is determined according to a weighted sum of a first loss function of the first convolutional neural network and a second loss function of the second convolutional neural network, the first loss function is determined according to the total number of categories of the first convolutional neural network and values of the categories, and the second loss function is determined according to the total number of categories of the second convolutional neural network and the values of the categories. As will be described in detail later.

Determining the matching degree evaluation value of the preset sentence features of each comment sentence includes: pre-calculating the co-occurrence probability of n words in the corpus related to the preset sentence features using a modified Kneser-Ney smoothed n-gram language model; determining the co-occurrence probability of n words in each comment sentence by matching against the pre-calculated results; and determining the perplexity of each comment sentence from the co-occurrence probabilities of all its n words, the perplexity being used as the matching degree evaluation value.

At step 130, the comment sentences are filtered, for example by at least one of the filtering processes of steps 130a and 130b. No execution order is implied between steps 130a and 130b.

In step 130a, preset words appearing in the comment sentences are filtered out using the preset words for filtering in the filtering word bank, or the comment sentences containing the preset words are filtered out.

The preset words for filtering in the filtering word bank include at least one of: context transition words, negative words, marketing words, and other preset words.

In step 130b, the similarity between every two comment sentences among the plurality of comment sentences is determined, and for two comment sentences whose similarity is greater than the similarity threshold, one of them is filtered out, for example the one with the lower service evaluation value.

Determining the similarity between every two comment sentences includes: inputting any two of the comment sentences into a similarity model, and obtaining the similarity of the two comment sentences output by the similarity model, wherein the similarity model is obtained by training a twin neural network.

Training the twin neural network comprises: for a plurality of groups of training samples, inputting the first training sentence and the second training sentence of each group into the first neural network and the second neural network of the twin neural network respectively, to obtain the vector of the first training sentence output by the first neural network and the vector of the second training sentence output by the second neural network; calculating a loss value according to a loss function, updating the parameters of the twin neural network according to the loss value, and taking the trained twin neural network as the similarity model, wherein the loss function is constructed according to the similarity labeling information between the first and second training sentences of each group of training samples and the distance information between their vectors.

At step 140, at least one comment sentence is extracted from all the remaining comment sentences, so that the total value of the business evaluation values is maximized in the case where the total information length does not exceed the information presentation length.

Extracting at least one comment sentence from all the remaining comment sentences comprises: by utilizing a knapsack problem solving method, the information display length is used as the size of a knapsack, the total value of the service evaluation values of all extracted comment sentences is used as the value of the knapsack, and at least one comment sentence is extracted from all the reserved comment sentences, so that the total value of the service evaluation values (the value of the knapsack) of all the extracted comment sentences is the maximum under the condition that the total information length does not exceed the information display length (the size of the knapsack).

In step 150, all extracted comment sentences are displayed.

Similar sentences among the original comment sentences are filtered out, and comment sentences meeting the service evaluation requirement are extracted for display, which meets the information display requirement in scenarios with limited display space and enables users to acquire information quickly and effectively. Compared with extracting comment sentences by keywords, there is no keyword limitation, and the displayed information has good diversity.

Fig. 2 is a schematic diagram illustrating a review information presentation method according to further embodiments of the present disclosure.

As shown in fig. 2, the comment information display method of this embodiment includes steps 210-240.

At step 210, data is collected and processed.

Five kinds of data are collected: comment sentence pairs with similarity labels for the similarity model; comment sentences with service category labels for the service category evaluation model; comment sentences with positive/negative labels for the positive and negative evaluation model; first preset sentence feature data (such as influencer article data); and second preset sentence feature data (such as selling point data). The manner in which each kind of data is collected and processed is described below.

1) Comment sentence pairs for the similarity model and their similarity labels

Comment information is extracted from the e-commerce platform and segmented into a plurality of comment sentences. Every two comment sentences form a comment sentence pair; if the two comment sentences in a pair are similar, the pair is labeled with similarity 1, otherwise 0.

Example of comment information: "The merchant keeps good faith. The fruit is particularly sweet. Super trusted merchants." The comment sentence pairs and their similarity labels are as follows:

('Merchant is very good-minded', 'fruit is very sweet', 0),

('fruit is particularly sweet', 'super honest merchants', 0),

('Merchant is very good-minded', 'super good-minded Merchant', 1).

2) Comment sentences for the service category evaluation model and their service category labels

Comment information is extracted from the e-commerce platform and segmented into a plurality of comment sentences, and the comment sentences are labeled with service categories; the service category can be subdivided into sub-categories such as price, installation, packaging, delivery service and after-sales service. The division of service categories may be determined according to different application scenarios and information display lengths.

3) Comment sentences for the positive and negative evaluation model and their positive/negative labels (i.e., sentiment labels)

Comment information is extracted from the e-commerce platform and segmented into a plurality of comment sentences, and each comment sentence is labeled as positive, negative, neutral, etc. (sentiment labeling).

4) First preset sentence feature data (influencer article data)

Articles written by influencers about commodities are obtained from the e-commerce platform; data cleaning can be performed as needed, either manually or by other automatic means. Influencer articles describe the characteristics and advantages of commodities very professionally and strongly influence users.

Example: The sock type vamp is skin-friendly and breathable, has a cushioning effect, and is decorated with colored elements, so that visual bright points are increased.

5) Second preset sentence feature data (selling point data)

Selling point data of commodities is acquired from the e-commerce platform. Selling point data describes the important characteristics of a commodity in few words, so that users can acquire information quickly and efficiently.

Example: Waterproof movement, bidirectional zippers, burden-reducing shoulder straps and internal interlayer design.

At step 220, the model is trained.

Three kinds of network models are trained to obtain five service evaluation models. First, a twin neural network is trained to obtain the similarity model for judging whether sentences are similar. Second, a convolutional neural network (such as TextCNN) is trained to obtain the positive and negative evaluation model or the service category evaluation model; to improve the training effect, multi-task training is performed on the first convolutional neural network and the second convolutional neural network to obtain the positive and negative evaluation model and the service category evaluation model simultaneously. Third, the first preset sentence feature matching degree model or the second preset sentence feature matching degree model is obtained based on a modified Kneser-Ney smoothed n-gram language model.

A twin neural network is trained to obtain the similarity model for judging whether sentences are similar. A twin neural network (Siamese neural network) is a type of neural network architecture that includes two or more identical sub-networks whose parameters are shared. The twin neural network drives the distance between two similar sentences toward 0, and drives the distance between two dissimilar sentences above a certain threshold.

Figure 3 illustrates a schematic diagram of a twin neural network of some embodiments of the present disclosure. As shown in fig. 3, the twin neural network includes a first neural network on the left side and a second neural network on the right side. The first neural network and the second neural network both comprise a Bi-directional Long Short-Term Memory (BiLSTM) network, an average pooling layer and a fully connected layer (dense), and parameters such as the weights of these layers are shared. The BiLSTM network combines a forward LSTM (Long Short-Term Memory) and a backward LSTM and is used to model sentence context information.

For a comment sentence pair, for example ('Merchant is very good-minded', 'super good-minded Merchant'), the two comment sentences pass through the word embedding layers of their respective networks, are input into the left and right double-layer BiLSTM networks respectively, then enter the pooling layer (average) of each network, and then the fully connected layer (dense) of each network.

The LSTM model can memorize valuable information and discard redundant memory, which reduces learning difficulty and alleviates the gradient explosion and vanishing gradient problems of the traditional RNN (Recurrent Neural Network) model. LSTM neurons incorporate an input gate i, a forget gate f, an output gate o, and an internal memory cell c.

Fig. 4a, 4b, 4c, 4d, 4e show the LSTM model and its forget gate f, input gate i, internal memory cell c, and output gate o, respectively. The black parts in fig. 4b, 4c, 4d, 4e are the forget gate f, the input gate i, the internal memory cell c, and the output gate o, respectively; the light gray parts are the other parts.

Forget gate f: controls the degree to which the input $x$ and the previous hidden-layer output are forgotten,

$$f_t = \sigma(w_f [x_t, h_{t-1}] + b_f)$$

where $\sigma$ is the sigmoid activation function $\sigma(x) = \frac{1}{1+e^{-x}}$ (as $x$ approaches negative infinity, $\sigma(x)$ approaches 0; as $x$ approaches positive infinity, $\sigma(x)$ approaches 1); $w_f$ is the weight matrix of the forget gate $f$, $x_t$ is the data input at the current time step, $h_{t-1}$ is the output of the previous time step, and $b_f$ is a bias vector.

Input gate i: controls the degree to which the input $x$ and the currently computed state are written into the memory cell,

$$i_t = \sigma(w_i [x_t, h_{t-1}] + b_i)$$

where $\sigma$ is the sigmoid activation function, $w_i$ is the weight matrix of the input gate $i$, $x_t$ is the data input at the current time step, $h_{t-1}$ is the output of the previous time step, and $b_i$ is a bias vector.

Internal memory cell c:

$$c' = \tanh(w_c [x_t, h_{t-1}] + b_c), \qquad c_t = f_t \odot c_{t-1} + i_t \odot c'$$

where $\tanh$ is the hyperbolic tangent activation function, $w_c$ is the weight matrix of the memory cell, $x_t$ is the data input at the current time step, $h_{t-1}$ is the output of the previous time step, $b_c$ is a bias vector, $c_{t-1}$ is the state vector of the previous time step, $c_t$ is the state vector of the current time step, $f_t$ is the forget gate, and $i_t$ is the input gate.

Output gate o: controls the degree to which the current output depends on the current memory cell,

$$o_t = \sigma(w_o x_t + U_o h_{t-1} + b_o), \qquad h_t = o_t \odot \tanh(c_t)$$

where $w_o$ and $U_o$ are weight matrices of the output gate, $x_t$ is the data input at the current time step, $h_{t-1}$ is the output of the previous time step, $b_o$ is a bias vector, and $c_t$ is the state vector of the current time step.
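To make the four gate equations concrete, here is a minimal NumPy sketch of a single LSTM time step; it is an illustration, not part of the disclosure. The concatenated form $w_*[x_t, h_{t-1}]$ is used for all gates, including the output gate, which the text writes with separate matrices $w_o$ and $U_o$; all dimensions are assumptions.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation: approaches 0 as x -> -inf, 1 as x -> +inf.
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w_f, b_f, w_i, b_i, w_c, b_c, w_o, b_o):
    """One LSTM time step following the gate equations above.

    Each weight matrix w_* acts on the concatenation [x_t, h_{t-1}]
    (a simplification of the text's output-gate form with w_o and U_o).
    """
    z = np.concatenate([x_t, h_prev])     # [x_t, h_{t-1}]
    f_t = sigmoid(w_f @ z + b_f)          # forget gate
    i_t = sigmoid(w_i @ z + b_i)          # input gate
    c_hat = np.tanh(w_c @ z + b_c)        # candidate memory c'
    c_t = f_t * c_prev + i_t * c_hat      # internal memory cell update
    o_t = sigmoid(w_o @ z + b_o)          # output gate
    h_t = o_t * np.tanh(c_t)              # hidden output
    return h_t, c_t
```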

As mentioned above, after the double-layer BiLSTM encodes the sentence information, the encodings of all time steps are averaged in the pooling layer (average) and passed through a fully connected layer (dense); the contrastive loss is then computed:

$$L = \frac{1}{2N} \sum_{k=1}^{N} \left[ Y D_w^2 + (1-Y) \max(m - D_w,\ 0)^2 \right]$$

where $Y$ is the similarity label of a comment sentence pair ($Y = 1$ when the two comment sentences are similar and 0 when they are dissimilar), $D_w$ is the distance between the vectors $S_1$ and $S_2$ obtained by encoding the two comment sentences through the dense layer, $N$ is the number of samples, and $m$ is a set threshold (margin), for example 0.75.
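A minimal PyTorch sketch of the twin-network encoder and this contrastive loss follows; the two branches share one encoder instance, which is what parameter sharing amounts to in code. The two-layer BiLSTM, average pooling, dense layer and margin 0.75 follow the text; the vocabulary size, embedding and hidden dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceEncoder(nn.Module):
    """Shared branch of the twin network: embedding -> 2-layer BiLSTM -> average pooling -> dense."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=64, out_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.dense = nn.Linear(2 * hidden, out_dim)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.bilstm(self.emb(token_ids))   # (batch, seq_len, 2*hidden)
        pooled = h.mean(dim=1)                    # average over all time steps
        return self.dense(pooled)                 # sentence vector S

def contrastive_loss(s1, s2, y, m=0.75):
    """y = 1 for similar pairs, 0 for dissimilar; m is the margin threshold."""
    d = F.pairwise_distance(s1, s2)               # D_w
    return 0.5 * (y * d.pow(2) + (1 - y) * F.relu(m - d).pow(2)).mean()

# Both sentences of a pair go through the SAME encoder (shared parameters).
encoder = SentenceEncoder()
a = torch.randint(0, 10000, (8, 20))   # toy batch of token ids
b = torch.randint(0, 10000, (8, 20))
y = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(a), encoder(b), y)
loss.backward()
```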

The positive and negative evaluation model and the service category evaluation model are multi-classification models for comment short sentences. The classes of the positive and negative evaluation model include positive, negative and neutral; the classes of the service category evaluation model include service categories and non-service categories, and the service categories can be further subdivided into sub-categories such as price, installation, packaging, delivery service and after-sales service. The bottom layer of both models is a convolutional neural network (such as TextCNN). Since both models classify comment sentences, to improve training efficiency this embodiment adopts Multi-Task Learning to train the two models jointly. Of course, those skilled in the art will appreciate that the positive and negative evaluation model and the service category evaluation model may also be trained separately.

Fig. 5 illustrates a schematic diagram of multi-task training of the positive and negative evaluation model and the service category evaluation model, according to some embodiments of the present disclosure.

As shown in fig. 5, the underlying models of the multi-task training, the first convolutional neural network on the left and the second convolutional neural network on the right, each include a word embedding layer, a convolutional layer, a max-over-time pooling layer, and a fully connected layer (dense). During multi-task training, the parameters of the word embedding layer, the convolutional layer and the pooling layer of the two networks are shared. The shared parameters include, for example, the word embedding word vectors, the convolutional kernel weight matrices, and the max-over-time pooling layer parameter matrices. The trained first and second convolutional neural networks can be used as the positive and negative evaluation model and the service category evaluation model, respectively.

A comment sentence first passes through the parameter-shared word embedding layer, which uses word vectors trained with Word2Vec. The output of the word embedding layer feeds the parameter-shared convolutional layers; all convolutions are one-dimensional, and 4 convolution kernels are used, with widths 1, 2, 3 and 5 respectively. The processing of the word embedding layer and the convolutional layers is illustrated with the comment phrase "the children and the elderly at home all like to eat it". The phrase is first segmented with the jieba tool into tokens ("at home", "the children", "the elderly", "all", "like", "to eat it"), and after the word embedding layer each token has a k-dimensional vector. A kernel of width 1 then convolves over each token in turn; a kernel of width 2 convolves over each pair of adjacent tokens ("at home + the children", "the children + the elderly", and so on); a kernel of width 3 over each triple of adjacent tokens; and a kernel of width 5 over each run of 5 adjacent tokens. The convolved data is input into the max-over-time pooling layer, which selects the maximum over all time steps of the convolved comment sentence sequence; finally, the softmax multi-classification loss is computed through the fully connected layer.
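The following PyTorch sketch shows the shared TextCNN trunk with two task heads and the weighted multi-task loss described next; kernel widths 1, 2, 3 and 5 and the example weights 0.6/0.4 follow the text, while the channel counts and class counts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTextCNN(nn.Module):
    """Shared word embedding + 1-D convolutions (widths 1, 2, 3, 5) + max-over-time
    pooling, with two task-specific dense heads (sentiment and service category)."""
    def __init__(self, vocab_size=10000, emb_dim=128, channels=64,
                 n_sentiment=3, n_service=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # shared; may be initialized from Word2Vec
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, channels, kernel_size=k) for k in (1, 2, 3, 5)])
        feat = channels * 4
        self.head_sentiment = nn.Linear(feat, n_sentiment)  # positive / negative / neutral
        self.head_service = nn.Linear(feat, n_service)      # service sub-categories etc.

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        # max-over-time pooling on each convolution output
        feats = [conv(x).max(dim=2).values for conv in self.convs]
        z = torch.cat(feats, dim=1)
        return self.head_sentiment(z), self.head_service(z)

model = SharedTextCNN()
ids = torch.randint(0, 10000, (8, 30))
y_sent = torch.randint(0, 3, (8,))
y_serv = torch.randint(0, 6, (8,))
logit_sent, logit_serv = model(ids)
# Multi-task loss: weighted sum of the two softmax cross-entropy losses.
loss = 0.6 * F.cross_entropy(logit_sent, y_sent) + 0.4 * F.cross_entropy(logit_serv, y_serv)
loss.backward()
```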

The loss function of the multitask training is determined according to the weighted sum of a first loss function of the first convolutional neural network and a second loss function of the second convolutional neural network, the first loss function is determined according to the total number of the classes of the first convolutional neural network and the value of each class, and the second loss function is determined according to the total number of the classes of the second convolutional neural network and the value of each class.

The first loss function of the first convolutional neural network is the softmax cross-entropy loss:

$$loss_1 = -\log \frac{e^{f_y}}{\sum_{i=1}^{N_1} e^{f_i}}$$

where $f_i$ is the value of the $i$-th class, $f_y$ is the value of the labeled class, and $N_1$ is the total number of classes classified by the first convolutional neural network.

The second loss function of the second convolutional neural network is likewise:

$$loss_2 = -\log \frac{e^{f_y}}{\sum_{j=1}^{N_2} e^{f_j}}$$

where $f_j$ is the value of the $j$-th class and $N_2$ is the total number of classes classified by the second convolutional neural network.

The loss function for multi-task training is:

$$Loss = \alpha \cdot loss_1 + \beta \cdot loss_2$$

where $\alpha$ and $\beta$ are the weights of the two loss functions, for example $\alpha = 0.6$ and $\beta = 0.4$.

A preset sentence feature matching degree model is obtained based on a modified Kneser-Ney smoothed n-gram language model. The co-occurrence probabilities of n words in the corpus related to the preset sentence features are pre-calculated using the modified Kneser-Ney smoothed n-gram language model, and these pre-calculated results can be regarded as the preset sentence feature matching degree model. Subsequently, when the model is used, the co-occurrence probabilities of the n words in a comment sentence are determined by matching against the pre-calculated results, and the perplexity of the comment sentence is determined from the co-occurrence probabilities of all its n words as the matching degree evaluation value of the comment sentence.

Different preset sentence feature matching degree models can be obtained from different corpora related to the preset sentence features. For example, the first preset sentence feature matching degree model, obtained from the influencer article corpus, can judge the degree of matching between a comment sentence and the first preset sentence feature (e.g., the influencer article feature), for example in terms of professional wording, syntax and fluency. The second preset sentence feature matching degree model, obtained from the selling point data corpus, can judge the degree of matching between a comment sentence and the second preset sentence feature (e.g., the selling point data feature), for example in terms of professional wording.

The n-gram model is a statistical language model used to evaluate whether a sentence is reasonable. Assume a sequence of m words, where $w_i$ is the i-th word. By the chain rule, $p(w_1, w_2, \ldots, w_m) = p(w_1)\, p(w_2 \mid w_1)\, p(w_3 \mid w_1, w_2) \cdots p(w_m \mid w_1, \ldots, w_{m-1})$, but this probability is difficult to compute. Using the Markov chain assumption that each word depends only on the n words before it greatly reduces the computational overhead: the model is a 1-gram when n is 1, a 2-gram when n is 2, and so on. This embodiment uses, for example, a 3-gram, with the formula $p(w_1, \ldots, w_m) \approx \prod_i p(w_i \mid w_{i-1}, w_{i-2}, w_{i-3})$. Since the values $p(w_i \mid w_{i-1}, w_{i-2}, w_{i-3})$ are generally less than 1, multiplying them yields an ever smaller score, so logarithms are taken on both sides, giving $\log p(w_1, \ldots, w_m) \approx \sum_i \log p(w_i \mid w_{i-1}, w_{i-2}, w_{i-3})$, which is convenient to calculate.

However, n-grams have a problem: when a word occurs rarely or not at all, $\log p(w_i \mid w_{i-1}, w_{i-2}, w_{i-3})$ is negative infinity, so the score of the whole sentence becomes negative infinity. It is desirable that the evaluation value remain reasonable even when some words appear rarely or not at all; therefore the n-gram model needs smoothing. The smoothing method used in this embodiment is Modified Kneser-Ney Smoothing, an improvement of Kneser-Ney Smoothing; see the related art for details, which are only briefly described below.

The formula of Kneser-Ney Smoothing before the improvement is:

$$P_{KN}(w_i \mid w_{i-n+1}^{i-1}) = \frac{\max\left(c(w_{i-n+1}^{i}) - d,\ 0\right)}{c(w_{i-n+1}^{i-1})} + \gamma(w_{i-n+1}^{i-1})\, P_{KN}(w_i \mid w_{i-n+2}^{i-1})$$

where $c(w_{i-n+1}^{i})$ is the count (frequency of occurrence) of the phrase $(w_{i-n+1}, \ldots, w_i)$; $\max(c(w_{i-n+1}^{i}) - d,\ 0)$ takes 0 when the discounted count is negative, to prevent negative values; and $\gamma$ is a normalization constant,

$$\gamma(w_{i-n+1}^{i-1}) = \frac{d \cdot N_{1+}(w_{i-n+1}^{i-1}\ \bullet)}{c(w_{i-n+1}^{i-1})}$$

where $N_{1+}(w_{i-n+1}^{i-1}\ \bullet)$ denotes the number of distinct words observed after the prefix $w_{i-n+1}^{i-1}$.

When $c(w_{i-n+1}^{i-1}) = 0$, a back-off method or an interpolation method may be used, in which the estimate falls back to, or is interpolated with, the lower-order probability $P_{KN}(w_i \mid w_{i-n+2}^{i-1})$.

the Modified Kneser Ney smoothening mainly lies in the improvement of d, and the final improved formula is as follows:

wherein the content of the first and second substances,

wherein n is1Denotes the total number of occurrences of n-grams of 1, n2Denotes the total number of occurrences of n-grams of 2, n3Denotes the total number of occurrences of n-grams of 3, n4Representing the total number of occurrences of the n-gram of 4, and so on, c representing the number of occurrences of the phrase.

Based on the co-occurrence probabilities $P_{KN}$ of the n words in a comment sentence, the perplexity of the sentence is finally obtained:

$$PP(sentence) = P_{KN}(sentence)^{-\frac{1}{n}}$$

where n is the number of n-grams (e.g., the number of 3-grams) of the comment sentence. The perplexity is used as the matching degree evaluation value of the comment sentence.
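As a minimal sketch of this scoring step, assume the modified Kneser-Ney n-gram probabilities have already been pre-calculated into a lookup table; the table contents and the floor value for unseen n-grams below are hypothetical placeholders.

```python
import math

# Hypothetical pre-computed table: (w1, w2, w3) -> P_KN(w3 | w1, w2)
kn_prob = {("fruit", "is", "sweet"): 0.02}
UNSEEN = 1e-7   # assumed probability floor for n-grams missing from the table

def perplexity(tokens, n=3):
    """Perplexity of a comment sentence from pre-computed n-gram probabilities:
    PP = P(sentence)^(-1/num_ngrams), computed in log space for stability."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return float("inf")
    log_p = sum(math.log(kn_prob.get(g, UNSEEN)) for g in grams)
    return math.exp(-log_p / len(grams))

# Lower perplexity means the sentence matches the corpus style more closely.
mar_score = perplexity(["fruit", "is", "sweet", "today"])
```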

Depending on the training corpus, the matching degree evaluation value of a comment sentence is, for example, an evaluation value of the degree of matching between the comment sentence and the first preset sentence feature (such as the influencer article feature), or between the comment sentence and the second preset sentence feature (such as the selling point data feature).

At step 230, the models are deployed.

The various service evaluation models are obtained by training before deployment, including one or more of: the similarity model, the positive and negative evaluation model, the service category evaluation model, the first preset sentence feature (e.g., influencer article feature) matching degree model, and the second preset sentence feature (e.g., selling point data feature) matching degree model.

At step 240, comment sentences are extracted and displayed, including, for example, steps 241-249.

In step 241, a piece of comment information to be processed is obtained, and the piece of comment information (also called a comment long sentence) is segmented into a plurality of comment sentences (also called comment short sentences).

The plurality of comment sentences form a set C = (s1, s2, s3, …, sn), where si denotes each comment sentence obtained by segmentation.
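A minimal sketch of the segmentation step follows; the delimiter set (sentence-ending and clause punctuation) is an assumption, as the disclosure does not specify how sentences are cut.

```python
import re

def split_comment(comment: str):
    """Segment one piece of comment information (a comment long sentence)
    into comment sentences (short clauses), splitting on assumed punctuation."""
    parts = re.split(r"[。！？!?；;，,\.\n]+", comment)
    return [p.strip() for p in parts if p.strip()]

C = split_comment("The merchant keeps good faith. The fruit is particularly sweet. Super trusted merchants.")
# C == ['The merchant keeps good faith', 'The fruit is particularly sweet', 'Super trusted merchants']
```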

In step 242, the positive and negative evaluation model is called to determine a positive and negative evaluation value (denoted emotion_score) for each comment sentence, to characterize whether the comment sentence is positive, negative or neutral.

Each comment sentence is input into the positive and negative evaluation model, and the positive and negative evaluation value output by the model is acquired.

Among them, the evaluation value of the positive comment sentence is, for example, 0.2 point, the evaluation value of the negative comment sentence is, for example, -0.2 point, and the evaluation value of the neutral comment sentence is, for example, 0 point.

In step 243, the service category evaluation model is called to determine a service category evaluation value (denoted service_score) of each comment sentence, to characterize the service category to which the comment sentence belongs, such as a service category or a non-service category; the service category may be further subdivided into sub-categories of price, installation, packaging, delivery service, after-sales service, and the like.

Each comment sentence is input into the service category evaluation model, and the service category evaluation value output by the model is acquired.

The evaluation value of the comment sentence of the non-service category is, for example, 0.2 point, and the evaluation value of the comment sentence of the service category is, for example, -0.2 point.

In step 244, the first preset sentence feature matching degree model is called, and a matching degree evaluation value (denoted mar_score) of the first preset sentence feature (such as the influencer article feature) of each comment sentence is determined.

During training, the modified Kneser-Ney smoothed n-gram language model is used to pre-calculate the co-occurrence probabilities of n words in the corpus related to the first preset sentence feature (such as influencer articles), and the pre-calculated results can be regarded as the first preset sentence feature matching degree model. Subsequently, the first preset sentence feature matching degree model is called: the n words in a comment sentence are matched against the n words in the pre-calculated results to determine their co-occurrence probabilities, and the perplexity of the comment sentence is determined from the co-occurrence probabilities of all its n words as the matching degree evaluation value of the first preset sentence feature (such as the influencer article feature) of the comment sentence.

In step 245, the second preset sentence feature matching degree model is called, and a matching degree evaluation value (denoted sellpoint_score) of the second preset sentence feature (such as the selling point data feature) of each comment sentence is determined.

During training, the modified Kneser-Ney smoothed n-gram language model is used to pre-calculate the co-occurrence probabilities of n words in the corpus related to the second preset sentence feature (such as selling point data), and the pre-calculated results can be regarded as the second preset sentence feature matching degree model. Subsequently, the second preset sentence feature matching degree model is called: the n words in a comment sentence are matched against the n words in the pre-calculated results to determine their co-occurrence probabilities, and the perplexity of the comment sentence is determined from the co-occurrence probabilities of all its n words as the matching degree evaluation value of the second preset sentence feature (such as the selling point data feature) of the comment sentence.

At step 246, for each comment sentence, the above-described various evaluation values of the comment sentence are subjected to weighted sum calculation, and the weighted sum calculation result is set as the final evaluation value (final _ score) of the comment sentence.

final_score = a × emotion_score + b × service_score + c × mar_score + d × sellpoint_score, where a, b, c and d are the weights of the four evaluation values; for example, a = 0.1, b = 0.1, c = 0.3 and d = 0.5, but the weights are not limited to this example.
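A one-line sketch of this weighted summation, using the example weights from the text:

```python
def final_score(emotion_score, service_score, mar_score, sellpoint_score,
                a=0.1, b=0.1, c=0.3, d=0.5):
    """Weighted sum of the four evaluation values; defaults are the text's example weights."""
    return a * emotion_score + b * service_score + c * mar_score + d * sellpoint_score
```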

In step 247, the preset words for filtering in the filtering word bank are used to filter out preset words appearing in the comment sentences, or to filter out the comment sentences containing the preset words, so as to improve the fluency of the subsequently extracted comment sentences.

The preset words for filtering in the filtering word bank include at least one of: context transition words, negative words, marketing words, and other special preset words.

Context transition words include, for example, but are not limited to: "whether," "because," "if," "desired," "in any case," "also," "but," "however," "nevertheless," "though," "only," "not," "at all," "biased," "resulting," "unfortunately," "yet," "so," "in fact," "all," "at the same time," "critical," and the like.

Negative words include, for example, but are not limited to: nausea, nonuse, disuse, bad feeling, etc.

Marketing words and other special preset words include, but are not limited to: "self-operation", "self-selection", "special operation", "special use", "crowd sourcing", "lifelong", "quality inspection", "quality assurance", "direct operation", "direct marketing", "direct descending", "direct supply", "genuine product", "normal form", "discount", "pre-sale", "special offer", "quick rob", etc.
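
A minimal sketch of both filtering modes described in step 247 (the word bank below is an abbreviated, hypothetical sample):

    # Abbreviated, hypothetical sample of the filtering word bank.
    FILTER_WORDS = {"but", "however", "useless", "discount", "pre-sale"}

    def remove_filter_words(tokens):
        """Mode 1: delete the preset words from a tokenized sentence."""
        return [t for t in tokens if t not in FILTER_WORDS]

    def drop_sentences_with_filter_words(sentences):
        """Mode 2: discard any sentence that contains a preset word."""
        return [s for s in sentences if not (FILTER_WORDS & set(s))]

    # Sentences are represented as token lists.
    sentences = [["good", "quality"], ["useless", "item"]]
    print(drop_sentences_with_filter_words(sentences))  # [['good', 'quality']]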

In step 248, the similarity model is called to determine the similarity between every two comment sentences, and for any two comment sentences whose similarity is greater than the similarity threshold, the one with the lower evaluation value is filtered out.
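
A minimal sketch of this de-duplication step, with a hypothetical similarity(a, b) callable standing in for the similarity model:

    SIM_THRESHOLD = 0.8  # hypothetical similarity threshold

    def dedup(sentences, scores, similarity):
        """For every near-duplicate pair, keep only the sentence with the
        higher evaluation value."""
        keep = [True] * len(sentences)
        for i in range(len(sentences)):
            for j in range(i + 1, len(sentences)):
                if (keep[i] and keep[j]
                        and similarity(sentences[i], sentences[j]) > SIM_THRESHOLD):
                    keep[j if scores[j] <= scores[i] else i] = False
        return ([s for s, k in zip(sentences, keep) if k],
                [f for f, k in zip(scores, keep) if k])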

In step 249, a knapsack-problem solution is used: the information display length on a display device (such as a mobile device) is taken as the knapsack capacity, and the total business evaluation value of the extracted comment sentences is taken as the knapsack value. At least one comment sentence is extracted from all the remaining comment sentences, so that the total business evaluation value of all the extracted comment sentences is maximized while the total information length does not exceed the information display length.

All retained comment sentences li are arranged according to their positions in the original comment information to obtain the comment sentence set L = (l1, l2, ..., li, ..., ln), and their final evaluation values fi form the set F = (f1, f2, ..., fi, ..., fn).

Define the sub-problem S(i, W) as: from the first i comment sentences, select a sentence set whose total length does not exceed W, each comment sentence being selectable at most once, such that the total score of the set is maximized; the optimal value is denoted S(i, W), where 1 ≤ i ≤ n, n is the number of retained comment sentences, 1 ≤ W ≤ C, C is the information display length (i.e., the maximum total length of the extracted comment sentences), and W is the current knapsack capacity.

Considering the ith comment sentence, there are only two possibilities: either it is selected or it is not.

If the ith comment sentence is not selected, the problem becomes: select a sentence set with total length not exceeding W from the first i-1 comment sentences; the state is S(i-1, W).

If the ith comment sentence is selected, the problem becomes: select a sentence set with total length not exceeding W - li from the first i-1 comment sentences (where li here denotes the length of the ith comment sentence); the state is S(i-1, W - li).

For the above two possibilities, on the premise of not exceeding the knapsack capacity, whether to select the ith comment sentence is decided with the goal of maximizing the knapsack value, according to the following recurrence:

S(i, W) = max{ S(i-1, W), fi + S(i-1, W - li) }
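
A minimal sketch of this dynamic program, including backtracking to recover the chosen sentences (lengths are in characters; the sentences, scores and capacity are hypothetical):

    def extract_sentences(sentences, lengths, scores, display_length):
        """0/1 knapsack: maximize the total score subject to the constraint
        that the total length does not exceed display_length."""
        n = len(sentences)
        # S[i][W] = best total score using the first i sentences within capacity W.
        S = [[0.0] * (display_length + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for W in range(display_length + 1):
                S[i][W] = S[i - 1][W]  # case 1: do not select sentence i
                if lengths[i - 1] <= W:  # case 2: select sentence i if it fits
                    S[i][W] = max(S[i][W],
                                  scores[i - 1] + S[i - 1][W - lengths[i - 1]])
        # Backtrack to find which sentences were selected.
        chosen, W = [], display_length
        for i in range(n, 0, -1):
            if S[i][W] != S[i - 1][W]:
                chosen.append(i - 1)
                W -= lengths[i - 1]
        # Return the selected sentences in their original order.
        return [sentences[i] for i in sorted(chosen)]

    sents = ["soft fabric", "fast delivery", "true to size"]
    print(extract_sentences(sents, [len(s) for s in sents],
                            [0.9, 0.6, 0.8], 25))  # ['soft fabric', 'true to size']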

According to this embodiment, similar sentences are filtered out from the original comment sentences, and the subset of comment sentences that best meets the business evaluation requirement is extracted for display. This satisfies the information display requirement in scenarios with limited display space and enables users to obtain information quickly and effectively; moreover, compared with extracting comment sentences by keywords, the method is not constrained by a keyword list, so the displayed information is well diversified.

Fig. 6 is a schematic structural diagram of a comment information display apparatus according to some embodiments of the present disclosure.

As shown in fig. 6, the comment information presentation apparatus 600 of this embodiment includes: a memory 610 and a processor 620 coupled to the memory 610, wherein the processor 620 is configured to execute the comment information presentation method in any of the foregoing embodiments based on instructions stored in the memory 610.

For example, a piece of comment information to be processed is obtained, and the comment information is segmented into a plurality of comment sentences; determining a service evaluation value of each comment statement according to a preset service evaluation item; determining the similarity between every two comment sentences in the comment sentences, and filtering one comment sentence with a lower service evaluation value for the two comment sentences with the similarity greater than a similarity threshold value; extracting at least one comment statement from all the reserved comment statements, so that the total value of the business evaluation value of all the extracted comment statements is the largest under the condition that the total information length does not exceed the information display length; and displaying all extracted comment sentences.

Memory 610 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.

The apparatus 600 may also include an input-output interface 630, a network interface 640, a storage interface 650, and the like. These interfaces 630, 640, and 650, together with the memory 610 and the processor 620, may be connected, for example, via a bus 660. The input-output interface 630 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 640 provides a connection interface for various networking devices. The storage interface 650 provides a connection interface for external storage devices such as an SD card or a USB flash drive.

Fig. 7 is a schematic structural diagram of a comment information display apparatus according to another embodiment of the present disclosure.

As shown in fig. 7, the comment information presentation apparatus 700 of this embodiment includes:

an information obtaining and segmenting module 710 configured to obtain a piece of comment information to be processed and segment the comment information into a plurality of comment sentences;

an evaluation value determining module 720 configured to determine a business evaluation value of each comment sentence according to a preset business evaluation item;

a filtering module 730 configured to determine the similarity between every two comment sentences among the plurality of comment sentences and, for any two comment sentences whose similarity is greater than a similarity threshold, filter out the one with the lower business evaluation value;

a sentence extraction module 740 configured to extract at least one comment sentence from all the retained comment sentences, such that the total business evaluation value of all the extracted comment sentences is maximized while their total information length does not exceed the information display length;

and a presentation module 750 configured to present all extracted comment sentences.

In some embodiments, the filtering module 730 is further configured to filter the preset words appearing in the comment sentences or filter the comment sentences containing the preset words by using the preset words for filtering in the filtering word bank.

The filtering module 730 determines the similarity between every two of the plurality of comment sentences as follows: any two of the comment sentences are input into a similarity model, and the similarity of those two comment sentences output by the similarity model is obtained, wherein the similarity model is obtained by training a twin (Siamese) neural network.

The comment information presentation apparatus 700 further includes: a training module 760.

The training module 760 trains the twin neural network as follows: for multiple groups of training samples, the first training sentence and the second training sentence in each group are input into the first neural network and the second neural network of the twin network, respectively, to obtain the vector of the first training sentence output by the first neural network and the vector of the second training sentence output by the second neural network; a loss value is calculated according to a loss function, the parameters of the twin neural network are updated according to the loss value, and the trained twin neural network is taken as the similarity model. The loss function is constructed from the similarity labels between the first and second training sentences of each group of training samples and the distance between the vector of the first training sentence and the vector of the second training sentence.
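
A minimal PyTorch sketch of such a twin (Siamese) similarity model is given below, assuming a contrastive loss built from the distance between the two sentence vectors and binary similarity labels; the encoder architecture, the use of shared weights for both branches, and all hyperparameters are illustrative assumptions rather than the disclosure's exact design:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SentenceEncoder(nn.Module):
        """Encodes a token-id sequence into a single sentence vector."""
        def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, token_ids):
            _, h = self.rnn(self.emb(token_ids))
            return h[-1]  # (batch, hid_dim)

    def contrastive_loss(v1, v2, label, margin=1.0):
        """label = 1 for similar pairs, 0 for dissimilar pairs; the loss pulls
        similar pairs together and pushes dissimilar pairs beyond the margin."""
        d = F.pairwise_distance(v1, v2)
        return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

    # One shared encoder serves as both branches of the twin network.
    encoder = SentenceEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    s1 = torch.randint(0, 10000, (32, 20))       # hypothetical first sentences
    s2 = torch.randint(0, 10000, (32, 20))       # hypothetical second sentences
    labels = torch.randint(0, 2, (32,)).float()  # similarity labels

    loss = contrastive_loss(encoder(s1), encoder(s2), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # At inference time a similarity can be derived from the distance,
    # e.g. sim = 1 / (1 + distance).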

The evaluation value determining module 720 is configured to determine the business evaluation value of each comment sentence using one or more of: a positive and negative evaluation value of each comment sentence; a service category evaluation value of each comment sentence; and a matching degree evaluation value of a preset sentence characteristic of each comment sentence.

The evaluation value determining module 720 determines the positive and negative evaluation value of each comment sentence by inputting each comment sentence into a positive and negative evaluation model and obtaining the positive and negative evaluation value of each comment sentence output by the model.

The evaluation value determining module 720 determines the service category evaluation value of each comment sentence by inputting each comment sentence into a service category evaluation model and obtaining the service category evaluation value of each comment sentence output by the model.

The positive and negative evaluation model and the service category evaluation model are obtained by performing multi-task training on a first convolutional neural network and a second convolutional neural network.

When performing the multi-task training, the training module 760 shares the parameters of the word embedding layer, the convolution layer, and the pooling layer between the first and second convolutional neural networks. The loss function of the multi-task training is a weighted sum of a first loss function of the first convolutional neural network and a second loss function of the second convolutional neural network, where the first loss function is determined from the total number of categories of the first convolutional neural network and the values of those categories, and the second loss function is determined likewise for the second convolutional neural network.
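
A minimal PyTorch sketch of this multi-task setup, with a shared word-embedding/convolution/pooling trunk and two classification heads (layer sizes, the cross-entropy losses, and the task weights are illustrative assumptions):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiTaskTextCNN(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=128,
                     n_sentiment=2, n_category=10):
            super().__init__()
            # Shared trunk: word embedding + convolution (+ pooling in forward).
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.conv = nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1)
            # One head per task: positive/negative evaluation and service category.
            self.sentiment_head = nn.Linear(64, n_sentiment)
            self.category_head = nn.Linear(64, n_category)

        def forward(self, token_ids):
            x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
            x = F.relu(self.conv(x))
            x = F.max_pool1d(x, x.size(-1)).squeeze(-1)  # global max pooling
            return self.sentiment_head(x), self.category_head(x)

    model = MultiTaskTextCNN()
    tokens = torch.randint(0, 10000, (32, 40))  # hypothetical token-id batch
    y_sentiment = torch.randint(0, 2, (32,))
    y_category = torch.randint(0, 10, (32,))

    logits_sent, logits_cat = model(tokens)
    # Multi-task loss: weighted sum of the two per-task losses.
    w1, w2 = 0.5, 0.5  # hypothetical weights
    loss = (w1 * F.cross_entropy(logits_sent, y_sentiment)
            + w2 * F.cross_entropy(logits_cat, y_category))
    loss.backward()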

The evaluation value determination module 720 determines the matching degree evaluation value of the preset sentence characteristic of each comment sentence by: pre-calculating, using the improved Kneser-Ney smoothed n-gram language model, the co-occurrence probabilities of the n-grams in a corpus related to the preset sentence characteristic; determining the co-occurrence probability of each n-gram in each comment sentence by matching against the pre-calculated results; and determining the perplexity of each comment sentence from the co-occurrence probabilities of all its n-grams, the perplexity being used as the matching degree evaluation value.

The evaluation value determination module 720 determines the business evaluation value of each comment sentence as follows: when there are multiple business evaluation items, a weighted sum of the various evaluation values of each comment sentence is calculated, and the result is taken as the business evaluation value of that comment sentence.

The sentence extraction module 740 extracts at least one comment sentence from all the retained comment sentences using a knapsack-problem solution: the information display length is taken as the knapsack capacity and the total business evaluation value of the extracted comment sentences as the knapsack value, so that the total business evaluation value is maximized while the total information length does not exceed the information display length.


Some embodiments of the present disclosure also provide a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the comment information presentation method.

As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer program code embodied therein.

The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
