Selecting answer spans from electronic documents using machine learning

Document No.: 1618502    Publication date: 2020-01-10

Note: This technique, "Selecting answer spans from electronic documents using machine learning," was created by T.M. Kwiatkowski, A.P. Parikh, and S. Swayamdipta on 2018-10-29. Abstract: A method, system, and apparatus, including computer programs encoded on a computer storage medium, for selecting a text span from an input electronic document that answers an input question. One of the methods includes obtaining a respective first numerical representation of each of a plurality of text spans in an input document; for each of the text spans: for a segment containing the text span, determining a question-aware segment vector, for the question, determining a segment-aware question vector, and processing the first numerical representation of the text span, the question-aware segment vector, and the segment-aware question vector using a second feed-forward neural network to generate a second numerical representation of the text span; for each unique text span of the plurality of text spans: determining an aggregate representation of the unique text span, and determining a final score for the unique text span from the aggregate representation; and selecting a unique text span.

1. A computer-implemented method of selecting a text span from an input electronic document that answers an input question that includes a plurality of question tokens, the method comprising:

obtaining a respective first numerical representation of each of a plurality of text spans in the input document;

for each of the plurality of text spans:

for a segment of the input document that contains the text span, determining a question-aware segment vector based on a similarity between the question tokens in the question and segment tokens in the segment that contains the text span,

for the question, determining a segment-aware question vector for the question, also based on a similarity between the question tokens in the question and segment tokens in the segment containing the text span, and

processing the first numerical representation of the text span, the question-aware segment vector, and the segment-aware question vector using a second feed-forward neural network to generate a second numerical representation of the text span;

for each unique text span of the plurality of text spans:

determining an aggregate representation of the unique text span from the second numerical representations of the text spans corresponding to the unique text span, and

determining a final score for the unique text span from the aggregate representation, the final score measuring how well the unique text span answers the question; and

selecting the unique text span with the highest final score as the answer to the question.

2. The method of claim 1, further comprising:

in response to the question, outputting the selected unique text span.

3. The method of claim 2, wherein the question is received as a speech input, and wherein outputting the unique text span comprises:

outputting a spoken utterance of the text span as part of a response to the question.

4. The method of any of claims 1-3, wherein determining the aggregate representation of the unique text span comprises:

processing the second numerical representation of each of the text spans corresponding to the unique text span using a third feed-forward neural network to generate a respective converted numerical representation of each of the text spans; and

determining the aggregate representation by summing the converted numerical representations.

5. The method of any of claims 1-4, wherein determining a final score for the unique text span comprises:

processing the aggregate representation of the unique text span using a linear prediction layer to generate the final score.

6. The method of any of claims 1-5, wherein determining the question-aware segment vector for the segment of the input document that includes the text span comprises:

determining a respective attended vector for each segment token, the attended vector measuring a similarity of the segment token and the question tokens; and

determining the question-aware segment vector from the attended vectors of the segment tokens.

7. The method of any of claims 1-6, wherein determining the segment-aware question vector for the question comprises:

determining a respective attended vector for each question token, the attended vector measuring the similarity of the question token and the segment tokens; and

determining the segment-aware question vector from the attended vectors of the question tokens.

8. The method of any of claims 1-7, wherein the second numerical representation is an output of a last hidden layer in the second feed-forward neural network.

9. The method of any of claims 1-8, wherein obtaining a first numerical representation for each of the plurality of text spans in the input document comprises, for each text span:

obtaining an initial representation of the text span based on the tokens in the text span;

obtaining an initial representation of the question based on the question tokens; and

determining a question-span representation of the text span from the initial representation of the text span and the initial representation of the question.

10. The method of claim 9, wherein obtaining a respective first numerical representation for each of the plurality of text spans in the input document comprises, for each text span:

obtaining an initial representation of a left context of the text span in the document based on the tokens in the left context of the text span;

obtaining an initial representation of a right context of the text span in the document based on the tokens in the right context of the text span; and

determining a span-context representation of the text span from the initial representation of the text span and the initial representations of the left context and the right context.

11. The method of claim 10, wherein the first numerical representation is a concatenation of the question-span representation and the span-context representation.

12. The method of any of claims 9-11, wherein the initial representation of the text span is a bag of word embeddings of the tokens in the text span.

13. The method of any of claims 9-11, wherein the initial representation of the text span is a concatenation of a bag of word embeddings of the tokens in the text span and a question-word feature indicating whether the text span contains any of the question tokens.

14. A system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform the operations of the respective methods in any one of claims 1-13.

15. A computer storage medium storing instructions that, when executed by one or more computers, cause the one or more computers to perform the operations of the respective method in any one of claims 1-13.

Background

The present description relates to processing electronic documents using machine learning models such as neural networks.

An electronic document may be any of a variety of documents that may be saved in electronic form and viewed by a user on a computer, such as a web page, word processing document, text document, spreadsheet, and the like.

Neural networks are machine learning models that employ one or more layers of nonlinear units to predict an output for a received input. Some neural networks include one or more hidden layers in addition to an output layer. The output of each hidden layer serves as input to the next layer in the network (i.e., the next hidden layer or output layer). Each layer of the network generates an output from the received input in accordance with current values of the respective set of parameters.

Some neural networks are recurrent neural networks. A recurrent neural network is a neural network that receives an input sequence and generates an output sequence from the input sequence. In particular, the recurrent neural network may use some or all of the internal state of the network from the previous time step for calculating the output at the current time step.

Disclosure of Invention

This specification describes a system, implemented as a computer program on one or more computers in one or more locations, that selects a text span from an input electronic document that answers an input question, the input question including a plurality of question tokens.

The subject matter described in this specification can be implemented in particular embodiments to realize one or more of the following advantages.

By employing lightweight (i.e., computationally efficient) models combined in a cascade to find answers to input questions, the described system can efficiently locate text in an input document that answers the input question. In particular, the described system can be competitive with more complex, less computationally efficient architectures. Thus, the described system can efficiently answer received questions while consuming fewer computing resources, e.g., less memory and less processing power, than conventional approaches, which is particularly advantageous when the system is implemented in resource-constrained environments (e.g., on mobile devices). In particular, the system can obtain state-of-the-art results on many question-answering tasks that process document tokens, question tokens, or both, while consuming fewer computing resources than previous state-of-the-art systems (e.g., systems that use computationally intensive recurrent neural networks).

The details of one or more embodiments of the subject matter in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and potential advantages of the subject matter will become apparent from the description, the drawings, and the claims.

Drawings

FIG. 1A illustrates an example question-answering system.

FIG. 1B illustrates an example architecture of a cascaded machine learning system.

FIG. 2 is a flow diagram of an example process of training a cascaded machine learning system.

FIG. 3 is a flow diagram of an example process for selecting an answer span from an input document.

Like reference numbers and designations in the various drawings indicate like elements.

Detailed Description

This specification generally describes a system for selecting a text span from an electronic document that answers a received question. A text span is a sequence of one or more consecutive words in an electronic document.

Once the system has selected a text span as an answer to the question, the system (or another system) may output the selected text span as part of a response to the question.

For example, the input question may have been submitted as a voice query, and the system may provide a spoken utterance of the selected text span as part of a response to the query. As a particular example, a mobile device, smart speaker, or other computing device that interacts with a user using voice input may receive a voice query spoken by the user and transmit the received query to the system, such as over a data communications network. The system may then identify candidate electronic documents that may contain answers to the received query, select a text span from the documents using the techniques described in this specification, and then transmit the text span to the computing device as part of a response to the voice query, i.e., as data representing a spoken utterance of the text span or as text converted to speech at the computing device. In some cases, the user may explicitly or implicitly identify candidate documents. For example, if a user has submitted a voice query while viewing a given document using a computing device, the system may identify the given document as a candidate electronic document. In some other cases, an external system (such as an internet search engine) identifies candidate electronic documents in response to a query and provides the candidate electronic documents to the system.

As another example, the system may receive the question as a text query and may provide the text span for presentation on a user device as part of a response to the text query. For example, an internet search engine may receive a text query and may include the text span identified by the system as part of a response to the search query, e.g., in a formatted presentation of the content together with search results identified by the internet search engine as responsive to the query.

FIG. 1A illustrates an example question-answering system 100. The question-answering system 100 is an example of a system implemented as a computer program on one or more computers in one or more locations in which the systems, components, and techniques described below are implemented.

As described above, the system 100 receives an input question 102 and an input electronic document 104, and identifies a text span 152 from the electronic document 104 that the system has determined answers the input question 102. In particular, both the input question 102 and the electronic document 104 are tokenized, i.e., such that the text of both the input question 102 and the electronic document 104 is represented as a respective set of tokens. The tokens may be, for example, words, phrases, or other n-grams selected from a vocabulary of possible tokens.

When an electronic document 104 is received, the system 100 identifies candidate text spans in the document. For example, the system 100 may identify as a candidate text span each possible contiguous sequence of one or more tokens in the document that includes fewer than a threshold number of tokens.
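As a concrete illustration, a minimal sketch of this enumeration is shown below; the example document tokens and the maximum span length of 3 are assumptions made for the example, not values prescribed by the system.

```python
from typing import List, Tuple

def enumerate_candidate_spans(doc_tokens: List[str], max_span_len: int = 3) -> List[Tuple[int, int]]:
    """Return (start, end) index pairs (end exclusive) for every contiguous
    sequence of one or more tokens that is no longer than max_span_len."""
    spans = []
    n = len(doc_tokens)
    for start in range(n):
        for end in range(start + 1, min(start + max_span_len, n) + 1):
            spans.append((start, end))
    return spans

doc = "the treaty was signed in paris in 1898".split()
candidates = enumerate_candidate_spans(doc, max_span_len=3)
print(len(candidates))   # 21 candidate spans for this 8-token document
print(candidates[:4])    # [(0, 1), (0, 2), (0, 3), (1, 2)]
```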

Because the same candidate text span may occur multiple times throughout the electronic document, the system 100 also identifies a set of unique text spans from the candidate text spans in the document, i.e., such that no text span in the set of unique text spans corresponds to any other text span in the set. As one example, the system 100 may consider one text span to correspond to another text span if the two text spans are within a threshold edit distance of each other. As another example, the system 100 may consider two text spans to correspond if they are determined by a named entity recognition system to refer to the same entity.
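A rough sketch of the edit-distance variant of this grouping follows; the character-level Levenshtein distance and the threshold of 2 are illustrative assumptions (a token-level distance or a named entity recognition system could be substituted).

```python
from typing import List

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def unique_spans(span_texts: List[str], threshold: int = 2) -> List[List[str]]:
    """Greedily group spans whose edit distance to a group representative is
    within the threshold; each group is treated as one unique text span."""
    groups: List[List[str]] = []
    for text in span_texts:
        for group in groups:
            if edit_distance(text, group[0]) <= threshold:
                group.append(text)
                break
        else:
            groups.append([text])
    return groups

print(unique_spans(["Barack Obama", "Barack Obama's", "the senator"]))
# [['Barack Obama', "Barack Obama's"], ['the senator']]
```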

The system 100 then uses the cascaded machine learning system 110 (i.e., a machine learning system having a cascaded model architecture) to select a text span from the set of unique text spans as the text span 152 that answers the input question.

The cascaded model architecture has three levels of machine learning models: level 1 120, level 2 130, and level 3 140. The architecture is referred to as a "cascade" because the model(s) in each level of the cascade receive as input the outputs of the model(s) in the previous level of the cascade. The model(s) in the final level of the cascade (i.e., level 3) generate the final prediction of the machine learning system 110 from the output of the model(s) in the previous level (i.e., level 2).

More specifically, level 1 of the cascade operates on simple features of the question and the candidate text spans to generate a respective first numerical representation 122 for each text span. A numerical representation is an ordered collection of numerical values, e.g., a vector, matrix, or higher-order tensor of floating point values or quantized floating point values.

In particular, the model(s) in level 1 operate only on embeddings from a pre-trained dictionary of token embeddings and, optionally, a binary question-word feature indicating whether a given span contains tokens from the question. An embedding is a vector of values in a fixed-dimensional space. Because the embeddings have been pre-trained, the positions of the embeddings in the fixed-dimensional space reflect similarities, e.g., semantic similarities, between the tokens they represent. As one example, in the fixed-dimensional space, the embedding of the word "king" may be closer to the embedding of the word "queen" than to the embedding of the word "pawn". Examples of such pre-trained embeddings that may be used by system 100 include word2vec embeddings and GloVe embeddings.
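The toy sketch below illustrates these two kinds of level-1 inputs; the three-dimensional embedding table, the cosine helper, and the question_word_feature function are fabricated for the example (in practice, pre-trained word2vec or GloVe vectors would be loaded from disk).

```python
import numpy as np

# A made-up stand-in for a pre-trained embedding dictionary (e.g., GloVe).
EMBEDDINGS = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "pawn":  np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pre-trained positions reflect semantic similarity.
print(cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"]))  # high
print(cosine(EMBEDDINGS["king"], EMBEDDINGS["pawn"]))   # lower

def question_word_feature(span_tokens, question_tokens):
    """Binary feature: 1.0 if the span contains any question token, else 0.0."""
    return 1.0 if set(span_tokens) & set(question_tokens) else 0.0

print(question_word_feature(["the", "queen"], ["who", "is", "the", "queen"]))  # 1.0
```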

The model in level 2 of the cascade uses the first numerical representations 122 generated by level 1 and an attention mechanism that, for each candidate span, aligns the question tokens with tokens in the document segment containing the candidate span (e.g., a sentence, paragraph, or other group of tokens in the electronic document that contains the candidate span) to generate a respective second numerical representation 132 for each candidate answer span.

The model in level 3 receives the second numerical representations 132 of the candidate text spans and aggregates information across all mentions of candidate answer spans that are mentioned multiple times in the document (i.e., that occur multiple times throughout the document) to determine a respective final score 142 for each unique text span. The final score 142 for a given unique text span measures how well the unique text span answers the question.

The operation of the cascaded machine learning system 110 will be described in more detail below with reference to FIGS. 1B and 3.

The system 100 then selects the text span 152 from the unique text spans based on the final scores. For example, the system 100 may select the unique text span with the highest final score as the answer to the question.

To allow the cascaded machine learning system 110 to effectively score answer spans, i.e., so that the final scores generated by level 3 of the cascade can be used to accurately identify answers to input questions, the system 100 trains the machine learning models in the cascade on training data that includes labeled training examples. In other words, each labeled training example includes a question-electronic document pair labeled with data identifying the correct text span, i.e., the text span from the electronic document that best answers the question. Training the machine learning models in the cascade on this training data is described in more detail below with reference to FIGS. 1B and 2.

FIG. 1B illustrates an example architecture of the cascaded machine learning system 110.

As shown in FIG. 1B, level 1 of the cascade includes two models that together generate the first numerical representation: the span + short context model 160 and the question + span model 170.

For any given text span, the model 160 operates on (i) an initial representation 154 of the left context of the text span in the input document, (ii) an initial representation 156 of the text span, and (iii) an initial representation 158 of the right context of the text span in the input document to generate as output a span-context representation 162 of the text span.

The initial representation of the text span is generated based on the pre-trained embeddings of the tokens in the text span. In some implementations, the initial representation of the text span is a bag of word embeddings of the tokens in the text span, i.e., an average of the embeddings of the tokens in the text span. In some other implementations, the initial representation of the text span is a concatenation of the bag of word embeddings of the tokens in the text span and a question-word feature indicating whether the text span includes any of the question tokens. The question-word feature may be a binary feature, e.g., having a value of 1 when the text span includes one or more question tokens and a value of 0 when the text span does not include any question tokens.

The initial representation of the left context is the bag of word embeddings of the tokens in the left context of the text span, i.e., the average of the embeddings of the K tokens to the left of the text span in the input document.

Similarly, the initial representation of the right context is the bag of word embeddings of the tokens in the right context of the text span, i.e., the average of the embeddings of the K tokens to the right of the text span in the input document.
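A sketch of how the three initial representations consumed by the model 160 could be assembled is shown below; the deterministic pseudo-embeddings and the context width K=3 are stand-ins for the pre-trained embeddings and for whatever K an implementation actually uses.

```python
import hashlib
import numpy as np

EMBED_DIM = 3

def embed(token: str) -> np.ndarray:
    """Stand-in for a pre-trained embedding lookup (deterministic pseudo-vectors)."""
    seed = int(hashlib.md5(token.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def bag_of_embeddings(tokens):
    """Average of the embeddings of the tokens (zeros if the context is empty)."""
    if not tokens:
        return np.zeros(EMBED_DIM)
    return np.mean([embed(t) for t in tokens], axis=0)

def initial_representations(doc_tokens, start, end, K=3):
    """Initial left-context, span, and right-context representations for the
    candidate span doc_tokens[start:end]."""
    left_rep  = bag_of_embeddings(doc_tokens[max(0, start - K):start])
    span_rep  = bag_of_embeddings(doc_tokens[start:end])
    right_rep = bag_of_embeddings(doc_tokens[end:end + K])
    return left_rep, span_rep, right_rep

doc = "the treaty was signed in paris in 1898".split()
left, span, right = initial_representations(doc, start=5, end=6, K=3)
print(np.concatenate([left, span, right]).shape)   # (9,) -- input to the span+context FFNN
```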

To generate the span-context representation of the text span, the model 160 uses a feed-forward neural network to process the concatenation of (i) the initial representation of the left context of the text span in the input document, (ii) the initial representation of the text span, and (iii) the initial representation of the right context of the text span in the input document. In some implementations, the neural network is a two-layer feed-forward neural network with rectified linear unit (ReLU) activations. In particular, in these implementations, the operations performed by the feed-forward neural network to generate a representation h from an input x may be expressed as:

h = ffnn(x) = ReLU(U ReLU(Vx + a) + b),

where U and V are parameter matrices and a and b are bias parameters of the feed-forward network.
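A direct numpy transcription of this two-layer network is sketched below; the layer sizes and the random initialization are placeholder assumptions, since they are not specified above.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class FFNN:
    """Two-layer feed-forward network: h = ReLU(U ReLU(Vx + a) + b)."""
    def __init__(self, in_dim, hidden_dim, out_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.V = rng.standard_normal((hidden_dim, in_dim)) * 0.1   # first-layer weights
        self.a = np.zeros(hidden_dim)                              # first-layer bias
        self.U = rng.standard_normal((out_dim, hidden_dim)) * 0.1  # second-layer weights
        self.b = np.zeros(out_dim)                                 # second-layer bias

    def __call__(self, x):
        return relu(self.U @ relu(self.V @ x + self.a) + self.b)

ffnn = FFNN(in_dim=9, hidden_dim=16, out_dim=8)
print(ffnn(np.ones(9)).shape)   # (8,) -- e.g., a span-context representation
```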

Although not used during inference, during training the model 160 is further configured to generate a score (e.g., a final score) for a text span that measures how well the text span answers the question (shown in FIG. 1B as the input to the loss term l2). In particular, the model 160 may generate the score by processing the span-context representation 162 of the text span with a linear prediction layer that maps a vector to a single value. In particular, the operation performed by the linear prediction layer to generate a score ŝ from an input representation h can be expressed as:

ŝ = w^T h + z,

where w and z are parameters of the linear prediction layer.

The use of scores generated by the model 160 for training will be described in detail below.

For any given text span, the model 170 operates on (i) the initial representation 156 of the text span and (ii) an initial representation 164 of the question to generate a question-span representation 172 of the text span.

In particular, the model 170 first generates a weight for each of the question tokens based on the embedding of each question token.

The model 170 may generate the weight for a question token by first applying another feed-forward neural network (i.e., applying the ffnn operations described above) to the embedding of the question token to generate an initial representation of the question token, and then applying another linear prediction layer to the initial representation of the question token.

The model 170 may then generate the initial representation of the question by computing a weighted average of the embeddings of the question tokens, where the embedding of each question token is weighted by a normalized version of its computed weight.

Once the initial representation of the question has been generated, the model 170 generates the question-span representation of the text span by applying another feed-forward neural network (i.e., applying the ffnn described above) to the concatenation of the initial representation of the text span and the initial representation of the question.
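Putting these steps together, a rough sketch of the model 170 computation might look as follows; the ffnn helper mirrors the two-layer network described earlier, the softmax normalization of the token weights is an assumption, and all dimensions and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # illustrative embedding size

def ffnn(x, params):
    """Two-layer ReLU feed-forward net: ReLU(U ReLU(Vx + a) + b)."""
    V, a, U, b = params
    return np.maximum(U @ np.maximum(V @ x + a, 0.0) + b, 0.0)

def make_ffnn(in_dim, hid, out):
    return (rng.standard_normal((hid, in_dim)) * 0.1, np.zeros(hid),
            rng.standard_normal((out, hid)) * 0.1, np.zeros(out))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical parameter sets for the question-token scorer and the question+span FFNN.
token_net = make_ffnn(DIM, 8, 8)
token_w   = rng.standard_normal(8) * 0.1   # linear prediction layer producing token weights
qspan_net = make_ffnn(2 * DIM, 8, 8)

def question_span_representation(question_embs, span_rep):
    """Model 170: weight each question token, average, then run an FFNN over the
    concatenation of the span and question representations."""
    weights = softmax(np.array([token_w @ ffnn(q, token_net) for q in question_embs]))
    question_rep = np.sum(weights[:, None] * question_embs, axis=0)
    return ffnn(np.concatenate([span_rep, question_rep]), qspan_net)

question_embs = rng.standard_normal((5, DIM))   # embeddings of 5 question tokens
span_rep      = rng.standard_normal(DIM)        # initial span representation
print(question_span_representation(question_embs, span_rep).shape)  # (8,)
```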

Although not used during inference, during training the model 170 is also configured to generate a score (such as a final score) for a text span that measures how well the text span answers the question. In particular, the model 170 may generate the score by processing the question-span representation of the text span with another linear prediction layer.

The first numerical representation 122 of the text span is a concatenation of the question-span representation and the span-context representation and is provided as input to level 2 of the cascade.

Level 2 of the cascade includes a contextual attention model 180 that operates on the first numerical representation 122 of a given text span to generate the second numerical representation 132 of the text span.

For a given text span, the model 180 (i) generates, for the segment of the input document containing the text span, a question-aware segment vector 166 based on the similarity between the question tokens in the question and the segment tokens in the segment containing the text span, and (ii) generates, for the question, a segment-aware question vector 168, also based on the similarity between the question tokens in the question and the segment tokens in the segment containing the text span.

To generate these two vectors, the model 180 measures the similarity between each pair of question and segment embeddings, i.e., generates a respective similarity score between each question embedding and each segment embedding. To generate the similarity score η_ij for a given pair of a question embedding q_i and a segment embedding d_j, the model 180 performs the following operation:

η_ij = ffnn(q_i)^T ffnn(d_j).

To generate the question-aware segment vector for the segment of the input document that contains the text span, the model 180 then determines, for each segment token, a corresponding attended vector that measures the similarity of the segment token to the question tokens as reflected by the similarity scores, and determines the question-aware segment vector from the attended vectors of the segment tokens.

To determine the segment-aware question vector for the question, the model 180 determines, for each question token, a corresponding attended vector that measures the similarity of the question token and the segment tokens as reflected by the similarity scores, and determines the segment-aware question vector from the attended vectors of the question tokens.

In particular, to generate the question-aware segment vector, each original segment embedding and its corresponding attended vector are concatenated and passed through another feed-forward network, and the representations generated by the network are summed to obtain the question-aware segment vector. Similarly, each original question embedding and its corresponding attended vector are concatenated and passed through a feed-forward network, and the representations generated by the network are summed to obtain the segment-aware question vector.

The model 180 then processes the concatenation of the first numerical representation of the text span, the question-aware segment vector, the segment-aware question vector, and, optionally, the question-span features using another feed-forward neural network to generate the second numerical representation of the text span.
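A sketch of the full level-2 computation is given below. How exactly the attended vectors are formed from the similarity scores is not spelled out above, so the softmax-weighted sums used here (in the style of decomposable attention) are an assumption, as are all dimensions and the use of separate parameter sets for each feed-forward network.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, FIRST_DIM, HID = 4, 8, 8

def ffnn(x, params):
    """Two-layer ReLU feed-forward net, applied row-wise when x is a matrix."""
    V, a, U, b = params
    return np.maximum(np.maximum(x @ V.T + a, 0.0) @ U.T + b, 0.0)

def make_ffnn(in_dim, hid, out):
    return (rng.standard_normal((hid, in_dim)) * 0.1, np.zeros(hid),
            rng.standard_normal((out, hid)) * 0.1, np.zeros(out))

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

sim_net    = make_ffnn(EMB_DIM, HID, HID)        # used on both sides of eta_ij
seg_net    = make_ffnn(2 * EMB_DIM, HID, HID)    # [segment embedding; attended vector]
ques_net   = make_ffnn(2 * EMB_DIM, HID, HID)    # [question embedding; attended vector]
second_net = make_ffnn(FIRST_DIM + 2 * HID, 2 * HID, HID)

def second_representation(question_embs, segment_embs, first_rep):
    """Model 180: similarity scores, attended vectors, and the second representation."""
    # eta_ij = ffnn(q_i)^T ffnn(d_j), for every question/segment token pair.
    eta = ffnn(question_embs, sim_net) @ ffnn(segment_embs, sim_net).T   # (Q, D)

    # Attended vectors (assumed: softmax-weighted sums over the other side's embeddings).
    seg_attended = softmax(eta, axis=0).T @ question_embs                # (D, EMB_DIM)
    q_attended   = softmax(eta, axis=1) @ segment_embs                   # (Q, EMB_DIM)

    # Concatenate each embedding with its attended vector, run an FFNN, then sum.
    q_aware_segment    = ffnn(np.hstack([segment_embs, seg_attended]), seg_net).sum(axis=0)
    seg_aware_question = ffnn(np.hstack([question_embs, q_attended]), ques_net).sum(axis=0)

    # Second numerical representation of the span.
    return ffnn(np.concatenate([first_rep, q_aware_segment, seg_aware_question]), second_net)

q_embs = rng.standard_normal((5, EMB_DIM))    # 5 question tokens
d_embs = rng.standard_normal((12, EMB_DIM))   # 12 tokens in the containing segment
first  = rng.standard_normal(FIRST_DIM)       # first numerical representation of the span
print(second_representation(q_embs, d_embs, first).shape)   # (8,)
```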

Although not used during inference, during training the model 180 is also configured to generate a score (such as a final score) for a text span that measures how well the text span answers the question. In particular, the model 180 may generate the score by processing the second numerical representation of the text span with another linear prediction layer.

Level 3 includes an aggregate-multiple-mentions model 190, which receives the second numerical representations 132 of the candidate answer spans and, based on the second numerical representations 132, aggregates information across all mentions of candidate answer spans that occur multiple times throughout the document.

In particular, for each unique text span, the model 190 processes the second numerical representation of each text span corresponding to the unique text span using another feed-forward neural network to generate a corresponding converted numerical representation of each text span. The model 190 then determines an aggregate representation of the unique text span by summing the converted numerical representations of the candidate text spans corresponding to the unique text span.

The model 190 then generates the final score 142 for the unique text span by processing the aggregate representation of the unique text span with another linear prediction layer.
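A minimal sketch of the level-3 aggregation and final scoring, under the same kinds of assumptions (illustrative dimensions, randomly initialized parameters, made-up mention representations):

```python
import numpy as np

rng = np.random.default_rng(0)
REP_DIM = 8

def ffnn(x, params):
    """Two-layer ReLU feed-forward net, applied row-wise when x is a matrix."""
    V, a, U, b = params
    return np.maximum(np.maximum(x @ V.T + a, 0.0) @ U.T + b, 0.0)

convert_net = (rng.standard_normal((16, REP_DIM)) * 0.1, np.zeros(16),
               rng.standard_normal((REP_DIM, 16)) * 0.1, np.zeros(REP_DIM))
w, z = rng.standard_normal(REP_DIM) * 0.1, 0.0     # final linear prediction layer

def final_score(mention_reps):
    """Model 190: convert each mention's second representation, sum the results,
    and map the aggregate to a single final score."""
    converted = ffnn(np.stack(mention_reps), convert_net)   # one row per mention
    aggregate = converted.sum(axis=0)
    return float(w @ aggregate + z)

# A unique span mentioned three times vs. one mentioned once (representations are made up).
unique_span_reps = {
    "span_a": [rng.standard_normal(REP_DIM) for _ in range(3)],
    "span_b": [rng.standard_normal(REP_DIM)],
}
best = max(unique_span_reps, key=lambda s: final_score(unique_span_reps[s]))
print(best)   # the unique span with the highest final score is selected as the answer
```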

Although the various feed-forward neural networks and the various linear prediction layers employed by the models 160-190 generally share the same architecture, each feed-forward neural network and linear prediction layer generally has different parameter values than the other networks or prediction layers. To determine these parameter values, the system 100 trains the cascaded machine learning system 110 on training data.

FIG. 2 is a flow diagram of an example process 200 for training a cascaded neural network system. For convenience, process 200 will be described as being performed by a system of one or more computers located at one or more locations. For example, a suitably programmed question-answering system (such as the question-answering system 100 of FIG. 1A) may perform process 200.

The system may repeatedly perform the process 200 for multiple training examples to repeatedly update parameter values for the cascaded neural network system.

The system obtains a training example (step 202). The training example includes a training question and a training document, and identifies the correct text span, i.e., the text span from the training document that best answers the training question.

The system processes the training question and the training document using the cascaded neural network system to generate (i) a final score for the unique text span corresponding to the correct text span, and (ii) for each of the models 160-180, a respective score for each mention of the correct text span in the training document (step 204).

In particular, as described above, although only the final score is used to select the best answer to the input question after training, during training each of the models 160-180 is also configured to generate a respective score for each of the candidate text spans in the training document.

The system determines updates to the parameters of the cascaded machine learning system by determining gradients of a loss function with respect to the parameters (step 206). As can be seen in the example of FIG. 1B, the loss function l includes terms l1, l2, l3, and l4 that each depend on a score generated by a corresponding one of the models 160-190. In particular, the loss function includes, for each of the models 160-180, a respective loss term that depends on the scores assigned to the mentions of the correct text span in the training document, and, for the model 190, a loss term that depends on the final score assigned by the model 190 to the unique text span corresponding to the correct text span.

In particular, the loss function may be the total negative log likelihood of correct answer spans under all sub-models 160-190. For example, the loss function may be expressed as:

l(q, d) = -Σ_{k=1}^{3} λ_k log Σ_{s∈S} p^(k)(s | q, d) - λ_4 log p^(4)(u | q, d),

where each λ_k is a hyper-parameter, with the λ_k chosen so that they sum to 1, S is the set of all mentions in the training document of the correct answer span, p^(k)(s | q, d) is the score assigned to the mention s in the set S by the k-th of the models 160-180, and p^(4)(u | q, d) is the final score assigned by the model 190 to the unique answer span u corresponding to the correct answer span.
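The sketch below shows one way this loss could be computed. Turning the raw scores into probabilities with a softmax over candidate spans (and over unique spans for model 190) is an assumption, as are the example scores and the equal λ values.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def cascade_loss(level_scores, final_scores, correct_mentions, correct_unique, lambdas):
    """Total negative log likelihood of the correct answer span.

    level_scores:     list of 3 arrays of raw scores over all candidate spans,
                      one array per model 160, 170, 180.
    final_scores:     array of raw final scores over unique spans (model 190).
    correct_mentions: indices of candidate spans that are mentions of the correct answer.
    correct_unique:   index of the unique span corresponding to the correct answer.
    lambdas:          4 hyper-parameters assumed to sum to 1.
    """
    loss = 0.0
    for lam, scores in zip(lambdas[:3], level_scores):
        p = softmax(scores)                               # p^(k)(s | q, d)
        loss -= lam * np.log(p[correct_mentions].sum())   # marginalize over mentions
    p_final = softmax(final_scores)                       # p^(4)(u | q, d)
    loss -= lambdas[3] * np.log(p_final[correct_unique])
    return loss

rng = np.random.default_rng(0)
level_scores = [rng.standard_normal(10) for _ in range(3)]   # 10 candidate spans
final_scores = rng.standard_normal(4)                        # 4 unique spans
print(cascade_loss(level_scores, final_scores, correct_mentions=[2, 7],
                   correct_unique=1, lambdas=[0.25, 0.25, 0.25, 0.25]))
```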

The system can determine the gradient with respect to each parameter using machine learning training techniques (e.g., backpropagation), and can then determine the updates from the gradients by applying an update rule to the gradients (e.g., an Adam update rule, an rmsprop update rule, or a stochastic gradient descent learning rate).

FIG. 3 is a flow diagram of an example process 300 for selecting a text span from an electronic document in response to a question. For convenience, process 300 will be described as being performed by a system of one or more computers located at one or more locations. For example, a suitably programmed question-answering system (such as the question-answering system 100 of FIG. 1A) may perform process 300.

The system obtains a respective first numerical representation for each of a plurality of text spans in the input document (step 302). For example, the system may generate the respective first numerical representations using level 1 of the cascaded machine learning system as described above.

For each of the plurality of text spans, the system determines a respective second numerical representation (step 304). For example, the system may generate the respective second numerical representations using level 2 of the cascaded machine learning system as described above. In particular, for each of the plurality of text spans, the system may: for the segment of the input document containing the text span, determine a question-aware segment vector based on a similarity between the question tokens in the question and the segment tokens in the segment containing the text span; for the question, determine a segment-aware question vector, also based on a similarity between the question tokens in the question and the segment tokens in the segment containing the text span; and process the first numerical representation of the text span, the question-aware segment vector, and the segment-aware question vector using a second feed-forward neural network to generate the second numerical representation of the text span.

For each unique text span in the plurality of text spans, the system determines an aggregate representation of the unique text span from the second numerical representations of the text spans corresponding to the unique text span (step 306), and determines, from the aggregate representation, a final score for the unique text span that measures how well the unique text span answers the question (step 308).

The system selects the unique text span with the highest final score as the answer to the question (step 310).

This specification uses the term "configured" in connection with system and computer program components. By a system of one or more computers configured to perform a particular operation or action, it is meant that the system has installed thereon software, firmware, hardware, or a combination thereof that in operation causes the system to perform the operation or action. For one or more computer programs configured to perform particular operations or actions, it is meant that the one or more programs include instructions, which when executed by a data processing apparatus, cause the apparatus to perform the operations or actions.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Alternatively or additionally, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.

The term "data processing apparatus" refers to data processing hardware and encompasses all types of apparatus, devices, and machines that process data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). In addition to hardware, the apparatus can optionally include code that creates an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program (also known as or described as a program, software application, app, module, software module, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, such as one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, such as files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a data communication network.

In this specification, the term "database" is used broadly to refer to any collection of data: the data need not be structured in any particular way, or at all, and may be stored on storage devices in one or more locations. Thus, for example, an index database may include multiple collections of data, each of which may be organized and accessed differently.

Similarly, in this specification, the term "engine" is used broadly to refer to a software-based system, subsystem, or process that is programmed to perform one or more particular functions. In general, an engine will be implemented as one or more software modules or components installed on one or more computers in one or more locations. In some cases, one or more computers will be dedicated to a particular engine; in other cases, multiple engines may be installed and run on the same computer or computers.

The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and in special purpose logic circuitry, e.g., an FPGA or an ASIC, or by special purpose logic circuitry and one or more programmed computers.

A computer suitable for executing a computer program may be based on a general purpose or special purpose microprocessor or both, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit that performs or executes instructions and one or more memory devices that store instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Furthermore, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a Universal Serial Bus (USB) flash drive), to name a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD-ROM and DVD-ROM disks.

To provide for interaction with a user, embodiments of the invention can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other types of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. Further, the computer may interact with the user by sending documents to or receiving documents from a device used by the user, such as by sending web pages to a web browser of the user device in response to requests received from the web browser. In addition, the computer may interact with the user by sending a text message or other form of message to a personal device (e.g., a smartphone that is running a messaging application) and receiving a response message from the user in return.

The data processing apparatus implementing the machine learning model may also include, for example, a dedicated hardware accelerator unit that processes common and computationally intensive portions of the machine learning training or production (i.e., inference) load.

The machine learning model may be implemented and deployed using a machine learning framework, such as a TensorFlow framework, a Microsoft Cognitive Toolkit (Microsoft Cognitive Toolkit) framework, an Apache Singa framework, or an Apache MXNet framework.

Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet.

A computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server transmits data, such as an HTML page, to a user device, e.g., for the purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, such as a result of the user interaction, may be received at the server from the device.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in various suitable subcombinations. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings and are recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated in a single software product or packaged into multiple software products.

Specific embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the acts recited in the claims can occur in a different order and still achieve desirable results. As one example, the steps depicted in the accompanying figures are not necessarily required to be in the particular order shown, or in sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
