Special neural network training chip

Document No.: 1661754    Publication date: 2019-12-27

Abstract: This technology, "Special neural network training chip," was created by Thomas Norrie, Olivier Temam, Andrew Everett Phelps, and Norman Paul Jouppi on 2018-05-17. Methods, systems, and apparatus are described that include a dedicated hardware chip for training a neural network. The dedicated hardware chip may include a scalar processor configured to control computing operations of the dedicated hardware chip. The chip also includes a vector processor configured as a two-dimensional array of vector processing units that all execute the same instruction in a single instruction, multiple data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiplication unit coupled to the vector processor, the matrix multiplication unit configured to multiply at least one two-dimensional matrix with another one-dimensional vector or two-dimensional matrix to obtain a multiplication result.

1. A dedicated hardware chip for training a neural network, the dedicated hardware chip comprising:

a scalar processor configured to control computational operations of the dedicated hardware chip;

a vector processor configured as a two-dimensional array having vector processing units that all execute the same instructions in a single instruction, multiple data manner, and communicate with each other through load and store instructions of the vector processor; and

a matrix multiplication unit coupled to the vector processor, the matrix multiplication unit configured to multiply at least one two-dimensional matrix with another one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.

2. The dedicated hardware chip according to claim 1, further comprising:

a vector memory configured to provide fast dedicated memory to the vector processor.

3. The dedicated hardware chip according to claim 1, further comprising:

a scalar memory configured to provide fast dedicated memory to the scalar processor.

4. The dedicated hardware chip according to claim 1, further comprising:

a transposing unit configured to perform a transposing operation of a matrix.

5. The dedicated hardware chip according to claim 1, further comprising:

a reduction and permutation unit configured to perform reductions on rows of numbers and to permute numbers between different lanes of the vector array.

6. The dedicated hardware chip according to claim 1, further comprising:

a high bandwidth memory configured to store data of the dedicated hardware chip.

7. The dedicated hardware chip according to claim 1, further comprising a sparse computation core.

8. The dedicated hardware chip according to claim 1, further comprising:

an interface; and

an inter-chip interconnect connecting the interface or resource on the dedicated hardware chip to other dedicated hardware chips or resources.

9. The dedicated hardware chip according to claim 8, further comprising:

a plurality of high bandwidth memories; wherein the inter-chip interconnect connects the interface and the high bandwidth memory to other dedicated hardware chips.

10. The dedicated hardware chip according to claim 8, wherein the interface is a host interface of a host computer.

11. The dedicated hardware chip according to claim 8, wherein the interface is a standard network interface of a network of host computers.

12. The dedicated hardware chip according to claim 8, comprising:

scalar memory (304), vector memory (308), the scalar processor (303), the vector processor (306), and the matrix multiplication unit, wherein the scalar processor performs VLIW instruction fetch/execution cycles and controls the dedicated hardware chip, wherein after fetching and decoding an instruction bundle, the scalar processor itself executes only instructions found in a scalar slot of the bundle using a plurality of multi-bit registers of the scalar processor and scalar memory, wherein a scalar instruction set includes arithmetic operations used in address calculation, load/store instructions, and branch instructions, and wherein remaining instruction slots encode instructions for the vector processor (306) and the matrix multiplication unit.

Background

This specification relates to performing neural network computations in hardware. Neural networks are machine learning models, each model employing one or more model layers to generate outputs, e.g., classifications, for received inputs. In addition to the output layer, some neural networks include one or more hidden layers. The output of each hidden layer serves as the input to the next layer in the network (i.e., the next hidden layer or output layer of the network). Each layer of the network generates an output from the received input in accordance with the current values of the respective set of parameters.

Disclosure of Invention

This specification describes technologies relating to a dedicated hardware chip that is a programmable linear algebraic accelerator optimized for machine learning workloads, particularly for training phases.

In general, one innovative aspect of the subject matter described in this specification can be embodied in dedicated hardware chips.

Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. For a system of one or more computers to be configured to perform a particular operation or action, it is meant that the system has installed thereon software, firmware, hardware or a combination thereof that in operation causes the system to perform the operation or action. For one or more computer programs to be configured to perform particular operations or actions, it is meant that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.

The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. In particular, one embodiment includes all of the following features in combination.

A dedicated hardware chip for training a neural network, the dedicated hardware chip comprising: a scalar processor configured to control computational operations of the dedicated hardware chip; a vector processor configured as a two-dimensional array having vector processing units that all execute the same instructions in a single instruction, multiple data manner, and communicate with each other through load and store instructions of the vector processor; and a matrix multiplication unit coupled to the vector processor, the matrix multiplication unit being configured to multiply at least one two-dimensional matrix with another one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.

A vector memory configured to provide fast dedicated memory to the vector processor. A scalar memory configured to provide fast dedicated memory to the scalar processor. A transposing unit configured to perform a transposing operation of a matrix. A reduction and permutation unit configured to perform reduction of the number of rows and permute the numbers between different channels of the vector array. A high bandwidth memory configured to store data of the dedicated hardware chip. The dedicated hardware chip also includes a sparse compute core.

The dedicated hardware chip further comprises: an interface; and an inter-chip interconnect to connect the interface or resources on the dedicated hardware chip to other dedicated hardware chips or resources.

The dedicated hardware chip also includes high bandwidth memory. The inter-chip interconnect connects the interface and the high bandwidth memory to other dedicated hardware chips. The interface may be a host interface of a host computer. The interface may be a standard network interface of a network of host computers.

The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. The dedicated hardware chip contains a processor that natively supports higher-dimensional tensors (i.e., dimension 2 and higher) in addition to traditional 0- and 1-dimensional tensor computations, while also being optimized for the 32-bit and lower-precision computations of machine learning.

The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

Drawings

Fig. 1 shows an example topology of high-speed connections connecting an example combination of dedicated hardware chips, which are connected in a ring topology on a board.

FIG. 2 illustrates a high-level diagram of an example dedicated hardware chip for training a neural network.

FIG. 3 illustrates a high-level example of a compute core.

FIG. 4 shows a more detailed diagram of a chip performing neural network training.

Like reference numbers and designations in the various drawings indicate like elements.

Detailed Description

A neural network having multiple layers may be trained and then used to compute inferences. Typically, some or all of the layers of the neural network have parameters that are adjusted during training of the neural network. For example, some or all of the layers may multiply the input of the layer by a matrix of parameters, also referred to as weights, of the layer as part of generating the layer output. The parameter values in the matrix are adjusted during training of the neural network.

In particular, during training, the training system performs a neural network training procedure to adjust the parameter values of the neural network, e.g., to determine trained parameter values from initial values of the parameters. The training system uses backward propagation of errors, referred to as backpropagation, in conjunction with an optimization method to compute the gradient of an objective function with respect to each parameter of the neural network, and uses the gradients to adjust the parameter values.
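
For illustration only (not part of the claimed hardware), the following minimal NumPy sketch shows the training loop just described for a single fully connected layer: a forward pass, a gradient computed by backpropagation, and a parameter update. The layer sizes and learning rate are arbitrary assumptions.

    import numpy as np

    # Hypothetical sizes; the chip itself operates on much larger tiles.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 16))          # batch of activation inputs
    y = rng.standard_normal((8, 4))           # labeled targets
    W = rng.standard_normal((16, 4)) * 0.1    # parameter matrix (weights)
    lr = 0.01                                 # learning rate of the optimization method

    for step in range(100):
        out = x @ W                           # forward pass: multiply input by weights
        err = out - y                         # error against the labeled training data
        loss = 0.5 * np.mean(err ** 2)        # objective function
        grad = x.T @ err / len(x)             # gradient of the objective w.r.t. W
        W -= lr * grad                        # adjust parameter values using the gradient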

The trained neural network may then use forward propagation to compute an inference, i.e., process the inputs through the layers of the neural network to generate neural network outputs for the inputs.

For example, given an input, a neural network may compute an inference of the input. The neural network computes the inference by processing the inputs through each layer of the neural network. In some embodiments, the layers of the neural network are arranged in a sequence.

Thus, to compute an inference from a received input, the neural network receives the input and processes it through each of the neural network layers in the sequence to produce the inference, with the output from one neural network layer being provided as input to the next neural network layer. The data input to a neural network layer, e.g., either the input to the neural network or the output of the layer below the layer in the sequence, may be referred to as an activation input to the layer.

In some embodiments, the layers of the neural network are arranged in a directed graph. That is, any particular layer may receive multiple inputs, multiple outputs, or both. The layers of the neural network may also be arranged such that the output of a layer may be sent back as input to the previous layer.

An example system is a high performance multi-chip tensor computation system that is optimized for matrix multiplication and other computations for multi-dimensional arrays. These operations are important for training neural networks and, optionally, for using neural networks to compute inferences.

In an example system, multiple specialized chips are arranged to distribute operations so that the system efficiently performs training and inference calculations. In one embodiment, there are four chips on a board, and in larger systems many boards are adjacent to each other in a rack or otherwise in data communication with each other.

FIG. 1 shows an example topology of high-speed connections connecting an example combination of dedicated hardware chips 101a-101d, the dedicated hardware chips 101a-101d being connected in a ring topology on a board. Each chip contains two processors (102a-h). The topology is a one-dimensional (1D) torus; in a 1D torus, each chip is directly connected to two adjacent chips. As shown, in some embodiments the chips contain microprocessor cores that have been programmed with software or firmware instructions to operate. In FIG. 1, all chips are on a single module 100. The lines between the processors in the figure represent high-speed data communication links. The processors are advantageously fabricated on one integrated circuit board, but they may also be fabricated on multiple boards. Across chip boundaries, the links are inter-chip network links; processors on the same chip communicate over intra-chip interface links. A link may be a half-duplex link, on which only one processor may transmit data at a time, or a full-duplex link, on which data may be transmitted in both directions simultaneously. Parallel processing using this example topology, among others, is described in detail in U.S. patent application No. 62/461,758, entitled "PARALLEL PROCESSING OF REDUCTION AND BROADCAST OPERATIONS ON LARGE DATASETS OF NON-SCALAR DATA," filed on February 21, 2017, and incorporated herein by reference.
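
As an illustrative sketch of the 1D-torus property that each chip is directly connected to exactly two neighbors, the following Python snippet (not part of the specification) computes the neighbor indices for four chips such as 101a-101d and passes a token once around the ring. It models the topology only, not the actual link protocol.

    NUM_CHIPS = 4  # e.g., chips 101a-101d arranged in a 1D torus (ring) on the board

    def ring_neighbors(chip: int, num_chips: int = NUM_CHIPS):
        # Each chip is directly connected to the two adjacent chips.
        return (chip - 1) % num_chips, (chip + 1) % num_chips

    # Pass a token once around the ring over the high-speed links.
    token, chip = "data", 0
    for _ in range(NUM_CHIPS):
        _, right = ring_neighbors(chip)
        print(f"chip {chip} -> chip {right}: {token}")
        chip = right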

FIG. 2 illustrates a high-level diagram of an example dedicated hardware chip for training a neural network. As shown, a single dedicated hardware chip includes two independent processors (202a, 202b). Each processor (202a, 202b) contains two distinct cores: (1) a compute core, e.g., a very long instruction word (VLIW) machine, (203a, 203b) and (2) a sparse compute core, i.e., an embedding layer accelerator, (205a, 205b).

Each compute core (203a, b) is optimized for dense linear algebra problems. A single very long instruction word controls several compute units in parallel. The compute core is described in more detail with reference to FIGS. 3 and 4.

An example sparse compute core (205a, b) maps very sparse, high-dimensional data into dense, low-dimensional data so that the remaining layers process densely packed input data. For example, the sparse compute core may perform the computation of any embedding layers in the neural network being trained.

To perform this sparse-to-dense mapping, the sparse compute core uses a pre-built lookup table, i.e., an embedding table. For example, when there is a series of query words as user input, each query word is converted into a hash identifier or a one-hot encoded vector. Using the identifier as a table index, the embedding table returns the corresponding dense vector, which may be the input activation vector for the next layer. The sparse compute core may also perform reduction operations across the search query words to create one dense activation vector. The sparse compute core performs efficient sparse, distributed lookups because the embedding table can be huge and not fit in the limited-capacity high bandwidth memory of one dedicated hardware chip. More details regarding sparse compute core functionality can be found in U.S. patent application No. 15/016,486, entitled "MATRIX PROCESSING APPARATUS," filed on February 5, 2016, which is hereby incorporated by reference.
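
The sparse-to-dense mapping just described can be pictured with the following NumPy sketch, which is an illustration rather than the sparse compute core's actual implementation: query-term identifiers index an embedding table, and a reduction across the query terms produces one dense activation vector for the next layer. The table shape and identifiers are made up.

    import numpy as np

    vocab_size, embed_dim = 1000, 8          # hypothetical embedding table shape
    rng = np.random.default_rng(0)
    embedding_table = rng.standard_normal((vocab_size, embed_dim))

    # A series of query terms already converted to hash identifiers.
    query_ids = np.array([17, 503, 999, 17])

    dense_rows = embedding_table[query_ids]   # sparse, distributed lookup
    activation = dense_rows.sum(axis=0)       # reduction across the query terms
    print(activation.shape)                   # one dense activation vector: (8,)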

FIG. 3 shows a high-level example of a compute core (300). The compute core may be a machine, i.e., a VLIW machine, that controls several compute units in parallel. Each compute core (300) includes: a scalar memory (304), a vector memory (308), a scalar processor (303), a vector processor (306), and extended vector units, i.e., a matrix multiplication unit (MXU) (313), a transpose unit (XU) (314), and a reduction and permutation unit (RPU) (316).

The example scalar processor performs the VLIW instruction fetch/execute cycle and controls the compute core. After fetching and decoding an instruction bundle, the scalar processor itself executes only the instructions found in the scalar slot of the bundle, using a plurality of multi-bit registers, i.e., 32-bit registers, of the scalar processor (303) and the scalar memory (304). The scalar instruction set includes, for example, normal arithmetic operations as used in address calculation, load/store instructions, and branch instructions. The remaining instruction slots encode instructions for the vector processor (306) or other extended vector units (313, 314, 316). The decoded vector instructions are forwarded to the vector processor (306).
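
A rough, purely illustrative model of this fetch/decode/dispatch cycle is sketched below: one bundle holds a scalar slot executed by the scalar processor plus the remaining slots forwarded to the vector processor and extended vector units. The slot names and the dataclass are assumptions made for the sketch, not the chip's instruction format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InstructionBundle:
        scalar_slot: str         # e.g. address arithmetic, load/store, branch
        vector_slots: List[str]  # instructions for the vector processor / MXU / XU / RPU

    def dispatch(bundle: InstructionBundle):
        # The scalar processor itself executes only the scalar slot ...
        print(f"scalar processor executes: {bundle.scalar_slot}")
        # ... and forwards the decoded vector instructions to the vector units.
        for instr in bundle.vector_slots:
            print(f"forwarded to vector processor: {instr}")

    dispatch(InstructionBundle("add r1, r2, r3", ["vload v0, [r1]", "matmul v1, v0, w0"]))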

The scalar processor (303) may forward up to three scalar register values along with the vector instructions to the other processors and units for operation. The scalar processor may also retrieve computation results directly from the vector processor. However, in some embodiments, the example chip has a low-bandwidth communication path from the vector processor to the scalar processor.

A vector instruction scheduler is located between the scalar processor and the vector processor. The scheduler receives decoded instructions from the non-scalar VLIW slots and broadcasts those instructions to the vector processor (306). The vector processor (306) consists of a two-dimensional array, i.e., a 128 x 8 array, of vector processing units that execute the same instruction in a single instruction, multiple data (SIMD) manner. The vector processing units are described in detail with reference to FIG. 4.

The example scalar processor (303) accesses a small, fast private scalar memory (304), which is backed by a much larger but slower high bandwidth memory (HBM) (310). Similarly, the example vector processor (306) accesses a small, fast private vector memory (308), which is also backed by the HBM (310). Word-granularity accesses occur between the scalar processor (303) and the scalar memory (304), or between the vector processor (306) and the vector memory (308). The granularity of loads and stores between the vector processor and the vector memory is a vector of 128 32-bit words. Direct memory accesses occur between the scalar memory (304) and the HBM (310), and between the vector memory (308) and the HBM (310). In some embodiments, memory transfers from the HBM (310) to the processors (303, 306) may only be done through the scalar or vector memories. In addition, there may be no direct memory transfers between the scalar memory and the vector memory.

An instruction may specify extended vector unit operations. With each executed vector unit instruction, there are two-dimensional, i.e., 128 by 8, vector units, each of which may send one register value to an extended vector unit as an input operand. Each extended vector unit takes the input operands, performs the corresponding operation, and returns the results to the vector processor (306). The extended vector units are described below with reference to FIG. 4.

FIG. 4 shows a more detailed diagram of a chip performing neural network training. As shown and described above, the chip contains two compute cores (480a, 480b) and two sparse compute cores (452a, 452 b).

The chip has a shared area that includes an interface to a host computer (450) or to multiple host computers. The interface may be a host interface to a host computer or a standard network interface to a network of host computers. The shared area may also have high bandwidth memory stacks (456a-456d) along the bottom, as well as an inter-chip interconnect (448) that connects the interface and the memories together and carries data to and from other chips. The interconnect may also connect the interface to the compute resources on the hardware chip. Two stacks of high bandwidth memory (456a-b, 456c-d) are associated with each compute core (480a, 480b).

The chip stores data in the high bandwidth memory (456c-d), reads data into and out of the vector memory (446), and processes the data. The compute core (480b) itself includes a vector memory (446), which is on-chip SRAM divided into two dimensions. The vector memory has an address space in which each address holds floating point numbers, i.e., 128 numbers of 32 bits each. The compute core (480b) further includes a compute unit that computes values and a scalar unit that controls the compute unit. The compute unit may include a vector processor, and the scalar unit may include a scalar processor. The compute core, which may form part of the dedicated chip, may also include a matrix multiplication unit or another extended operation unit, such as a transpose unit (422), which performs transpose operations on matrices, i.e., 128 x 128 matrices, and a reduction and permutation unit.

The vector processor (306) consists of a two-dimensional array, i.e., 128 x 8, of vector processing units that all execute the same instruction in a single instruction, multiple data (SIMD) manner. The vector processor has lanes and sublanes, i.e., 128 lanes and 8 sublanes. Within a lane, the vector units communicate with each other through load and store instructions. Each vector unit can access one 4-byte value at a time. Vector units that do not belong to the same lane cannot communicate directly; they must use the reduction/permutation unit described below.

The compute unit includes vector registers, i.e., 32 registers, in the vector processing unit (440) that may be used for both floating point and integer operations. The compute unit includes two arithmetic logic units (ALUs) (406c-d) to perform computations. One ALU (406c) performs floating point addition and the other ALU (406d) performs floating point multiplication. Both ALUs (406c-d) may perform various other operations, such as shifts, masks, and compares. For example, the compute core (480b) may want to add vector register V1 to a second vector register V2 and put the result in a third vector register V3. To compute the addition, the compute core (480b) performs multiple operations in one clock cycle. Using these registers as operands, each vector unit can execute two ALU instructions and one load and one store instruction concurrently in each clock cycle. The base address of a load or store instruction may be computed in the scalar processor and forwarded to the vector processor. Each vector unit in each sublane may compute its own offset address using various methods, such as strides and a dedicated index address register.
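
The register-file example above (V1 plus V2 into V3, carried out by every vector unit at once) can be emulated with NumPy, where one array axis stands for the 128 lanes and the other for the 8 sublanes. This is a behavioral sketch under those assumed dimensions, not a cycle-accurate model of the ALUs.

    import numpy as np

    LANES, SUBLANES = 128, 8                     # the 128 x 8 array of vector units
    rng = np.random.default_rng(0)

    # Vector registers V1 and V2: one 32-bit value per vector unit.
    V1 = rng.standard_normal((LANES, SUBLANES)).astype(np.float32)
    V2 = rng.standard_normal((LANES, SUBLANES)).astype(np.float32)

    # One SIMD instruction: every vector unit performs the same add on its own data.
    V3 = V1 + V2
    assert V3.shape == (LANES, SUBLANES)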

The compute unit also contains an extended unary pipeline (EUP) (416) that performs operations such as square root and reciprocal. The compute core (480b) requires three clock cycles to perform these operations because they are computationally more complex. Since EUP processing takes more than one clock cycle, a first-in-first-out data store is used to hold the results. After an operation is finished, the result is stored in the FIFO. The compute core may later use a separate instruction to pull the data out of the FIFO and place it in a vector register. A random number generator (420) allows the compute core (480b) to generate multiple random numbers per cycle, i.e., 128 random numbers per cycle.
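
The FIFO mechanism just described, in which results of multi-cycle EUP operations are buffered and later pulled into vector registers by a separate instruction, can be sketched as a plain queue. The function names below are invented for the illustration; the chip uses hardware FIFOs, not software calls.

    from collections import deque
    import math

    eup_fifo = deque()                      # first-in-first-out store for EUP results

    def eup_issue(value: float):
        # Multi-cycle operation, e.g. square root; the result lands in the FIFO when done.
        eup_fifo.append(math.sqrt(value))

    def pull_to_vector_register():
        # Separate, later instruction pulls the oldest result out of the FIFO.
        return eup_fifo.popleft()

    eup_issue(2.0)
    eup_issue(9.0)
    v0 = pull_to_vector_register()          # 1.414...
    v1 = pull_to_vector_register()          # 3.0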

As described above, each processor, which may be implemented as part of the dedicated hardware chip, has three extended operation units: a matrix multiplication unit (438), which performs matrix multiplications; a transpose unit (422), which performs the transpose operation on a matrix, i.e., a 128 x 128 matrix; and a reduction and permutation unit (424 and 426, shown as separate units in FIG. 4).

The matrix multiplication unit performs matrix multiplications between two matrices. The matrix multiplication unit (438) receives data because the compute core needs to load in the set of numbers that is the matrix to be multiplied. As shown, the data comes from the vector registers (440). Each vector register contains 128 x 8 numbers, i.e., 32-bit numbers. However, floating point conversion may occur as the data is sent to the matrix multiplication unit (438) to change the numbers to a smaller bit size, i.e., from 32 bits to 16 bits. The serializer (440) ensures that when numbers are read out of the vector registers, the two-dimensional array, i.e., the 128 x 8 matrix, is read as sets of 128 numbers that are sent to the matrix multiplication unit (438) over each of the next eight clock cycles. After the matrix multiplication has completed its computation, the results are deserialized (442a, b), which means that the result matrix is held for a number of clock cycles. For example, for a 128 x 8 array, 128 numbers are held for each of 8 clock cycles and then pushed to a FIFO, so that a two-dimensional array of 128 x 8 numbers can be grabbed and stored in the vector registers (440) in one clock cycle.
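
The serialization just described can be pictured with the following NumPy sketch, which is illustrative only and uses assumed shapes and names: a 128 x 8 block is read out of the vector registers as eight groups of 128 numbers, converted to a 16-bit floating point format, multiplied by a 128 x 128 matrix one group per cycle, and the eight result groups are reassembled into a 128 x 8 block.

    import numpy as np

    LANES, SUBLANES, DIM = 128, 8, 128
    rng = np.random.default_rng(0)

    block = rng.standard_normal((LANES, SUBLANES)).astype(np.float32)  # from vector registers
    matrix = rng.standard_normal((DIM, LANES)).astype(np.float32)      # loaded into the MXU

    results = []
    for cycle in range(SUBLANES):                    # serializer: one set of 128 numbers per cycle
        group = block[:, cycle].astype(np.float16)   # reduced-precision conversion (illustrative)
        results.append(matrix @ group.astype(np.float32))  # 128 results for this cycle
    out = np.stack(results, axis=1)                  # deserializer: reassemble the 128 x 8 block
    assert out.shape == (LANES, SUBLANES)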

Over a portion of the cycles, the weights are shifted into the matrix multiplication unit (438) as the 128 numbers by which the matrix will be multiplied. Once the matrix and the weights have been loaded, the compute core (480) can send sets of numbers, i.e., 128 x 8 numbers, to the matrix multiplication unit (438). Each line of the set can be multiplied by the matrix to produce a number of results, i.e., 128 results, per clock cycle. While the compute core is performing matrix multiplications, the compute core also shifts, in the background, the new set of numbers that will be the next matrix by which the compute core multiplies, so that the next matrix is available when the computational process for the previous matrix has completed. The matrix multiplication unit (438) is described in more detail in application Nos. 16113-8251001, entitled "LOW LATENCY MATRIX MULTIPLY UNIT COMPOSED OF MULTI-BIT CELLS," and 16113-8252001, entitled "MATRIX MULTIPLY UNIT WITH NUMERICS OPTIMIZED FOR NEURAL NETWORK APPLICATIONS," both of which are incorporated herein by reference.

The transpose unit transposes a matrix. The transpose unit (422) receives numbers and transposes them such that the numbers across a lane are transposed with the numbers in the other dimension. In some embodiments, the vector processor includes 128 x 8 vector units. Thus, to transpose a 128 x 128 matrix, sixteen separate transpose instructions are required for the full matrix transpose. Once the transposing is complete, the transposed matrix is available. However, an explicit instruction is needed to move the transposed matrix into the vector register file.
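
Because the vector units form a 128 x 8 array, a full 128 x 128 transpose is carried out as sixteen separate 128 x 8 transposes. The NumPy sketch below mimics that decomposition and checks it against a direct transpose; it is an illustration of the arithmetic, not of the transpose unit's hardware data path.

    import numpy as np

    DIM, CHUNK = 128, 8                       # 128 x 128 matrix, handled 128 x 8 at a time
    rng = np.random.default_rng(0)
    m = rng.standard_normal((DIM, DIM)).astype(np.float32)

    out = np.empty_like(m)
    for i in range(DIM // CHUNK):             # 16 separate transpose instructions
        cols = slice(i * CHUNK, (i + 1) * CHUNK)
        out[cols, :] = m[:, cols].T           # transpose one 128 x 8 slice

    assert np.array_equal(out, m.T)           # matches a full matrix transpose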

The reduction/permutation unit (or units 424, 426) addresses the problem of cross-lane communication by supporting operations such as permute, lane rotate, rotating permute, lane reduction, permuted lane reduction, and segmented permuted lane reduction. As shown, these computations are separate units; however, the compute core may use one or the other, or one chained to the other. The reduction unit (424) adds up all the numbers in each line of numbers and feeds the numbers into the permutation unit (426). The permutation unit moves data between different lanes. The transpose unit, the reduction unit, the permutation unit, and the matrix multiplication unit each take more than one clock cycle to complete. Therefore, each unit has an associated FIFO, so that the results of computations can be pushed to the FIFO and a separate instruction can be executed at a later time to pull the data out of the FIFO and into a vector register. By using FIFOs, the compute core does not need to reserve multiple vector registers for the duration of lengthy operations. As shown, each of the units takes data from the vector registers (440).
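
The division of labor just described, a reduction unit that sums each line of numbers feeding a permutation unit that moves data between lanes, can be sketched as follows. The rotation amount and array shapes are assumptions for the illustration.

    import numpy as np

    LANES, SUBLANES = 128, 8
    rng = np.random.default_rng(0)
    values = rng.standard_normal((LANES, SUBLANES)).astype(np.float32)

    row_sums = values.sum(axis=1)             # reduction unit: add all numbers in each line
    rotated = np.roll(row_sums, shift=1)      # permutation unit: e.g. a lane rotation
    assert rotated.shape == (LANES,)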

The compute core uses a scalar unit to control the compute unit. The scalar unit has two primary functions: (1) performing loop counting and addressing and (2) generating direct memory access (DMA) requests so that the DMA controller moves data in the background between the high bandwidth memory (456c-d) and the vector memory (446), and then through the inter-chip interconnect (448) to other chips in an example system. The scalar unit contains an instruction memory (404), an instruction decode and issue unit (402), a scalar processing unit (408) that contains scalar registers, i.e., 32-bit registers, a scalar memory (410), and two ALUs (406a, b) for performing two operations per clock cycle. The scalar unit can feed operands and immediate values into the vector operations. Each instruction can be issued from the instruction decode and issue unit (402) as an instruction bundle that contains the instructions that execute on the vector registers (440). Each instruction bundle is a very long instruction word (VLIW), with each instruction a number of bits wide and divided into a number of instruction fields.

The chip 400 may be used to perform at least a portion of the training of a neural network. In particular, when training a neural network, the system receives labeled training data from a host computer using the host interface (450). The host interface may also receive instructions that include parameters for a neural network computation. The parameters may include at least one or more of the following: how many layers should be processed, corresponding sets of weight inputs for each layer, an initial set of activation inputs, i.e., the training data that is the input to the neural network from which the inference is computed or on which training is done, corresponding input and output sizes of each layer, a stride value for the neural network computation, and the type of layer to be processed, e.g., a convolutional layer or a fully connected layer.

The sets of weight inputs and the sets of activation inputs may be sent to the matrix multiplication unit of the compute core. Before the weight inputs and activation inputs are sent to the matrix multiplication unit, there may be other computations performed on the inputs by other components in the system. In some embodiments, there are two ways to send activations from the sparse compute core to the compute core. First, the sparse compute core can send a communication through the high bandwidth memory. For a large amount of data, the sparse compute core can store the activations in the high bandwidth memory using a direct memory access (DMA) instruction, which updates a target synchronization flag in the compute core. The compute core can wait for this synchronization flag using a sync instruction. Once the synchronization flag is set, the compute core uses a DMA instruction to copy the activations from the high bandwidth memory into the corresponding vector memory.

Second, the sparse compute core can send a communication directly to the compute core's vector memory. If the amount of data is not large (i.e., it fits in the compute core's vector memory), the sparse compute core can store the activations directly in the compute core's vector memory using a DMA instruction while notifying the compute core with a synchronization flag. The compute core can wait for this synchronization flag before performing the computation that depends on the activations.
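
Both hand-off paths above share the same pattern: a DMA copy paired with a synchronization flag that the consumer waits on. The following Python sketch, with invented names and software threads standing in for hardware flags, illustrates that producer/consumer protocol only.

    import threading

    sync_flag = threading.Event()          # stands in for the hardware synchronization flag
    vector_memory = {}                     # stands in for the compute core's vector memory

    def sparse_compute_core(activations):
        vector_memory["activations"] = activations  # DMA store of the activations
        sync_flag.set()                             # notify the compute core via the flag

    def compute_core():
        sync_flag.wait()                            # sync instruction: wait for the flag
        acts = vector_memory["activations"]         # safe to use the activations now
        print("compute core received", acts)

    t = threading.Thread(target=compute_core)
    t.start()
    sparse_compute_core([1.0, 2.0, 3.0])
    t.join()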

The matrix multiplication unit may process the weight inputs and the activation inputs and provide a vector or matrix of outputs to the vector processing unit. The vector processing unit may store a vector or matrix of processed outputs. For example, the vector processing unit may apply a non-linear function to the output of the matrix multiplication unit to generate the activation values. In some embodiments, the vector processing unit generates a normalized value, a combined value, or both. The vector of processed outputs may be used as activation inputs for a matrix multiplication unit for subsequent layers in the neural network.
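
A minimal sketch of the data path just described, under assumed shapes: the matrix multiplication unit produces an output matrix from the weight and activation inputs, and the vector processing unit applies a non-linear function to produce activation values for the next layer. The choice of ReLU as the non-linear function is an assumption for the illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((128, 64)).astype(np.float32)      # weight inputs
    activations = rng.standard_normal((64, 8)).astype(np.float32)    # activation inputs

    matmul_out = weights @ activations              # matrix multiplication unit output
    next_activations = np.maximum(matmul_out, 0.0)  # vector unit applies a non-linearity (ReLU here)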

Once a vector of processed outputs for a batch of training data has been computed, the outputs can be compared to the expected outputs for the labeled training data to determine an error. The system then performs backpropagation to propagate the error through the neural network in order to train the network. The gradient of the loss function is computed using the arithmetic logic units of the on-chip vector processing units.

The example system requires activation gradients in order to backpropagate through the neural network. To send activation gradients from the compute core to the sparse compute core, the example system can use a compute-core DMA instruction to store the activation gradients in the high bandwidth memory while notifying the target sparse compute core with a synchronization flag. The sparse compute core can wait for the synchronization flag before performing the computation that depends on the activation gradients.

The matrix multiplication unit performs two matrix multiplication operations for backpropagation. One matrix multiplication applies the back-propagated error from the preceding layer in the network, along the backward path through the network, to the weights in order to adjust the weights and determine new weights for the neural network. A second matrix multiplication applies the error to the original activations as feedback to the preceding layer in the neural network. The original activations are generated during the forward pass and may be stored for use during the backward pass. For the computations, general-purpose instructions in the vector processing unit may be used, including floating point addition, subtraction, and multiplication. The general-purpose instructions may also include compares, shifts, masks, and logical operations. While matrix multiplications can be accelerated exceptionally well, the arithmetic logic units of the vector processing unit perform general computations at a rate of 128 × 8 × 2 operations per core per cycle.
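
The two backward-pass matrix multiplications can be written out explicitly. The sketch below uses plain NumPy with assumed shapes and a fixed learning rate: one multiplication produces the weight update from the error and the stored original activations, and a second propagates the error back to the preceding layer.

    import numpy as np

    rng = np.random.default_rng(0)
    batch, in_dim, out_dim = 8, 64, 32
    original_activation = rng.standard_normal((batch, in_dim)).astype(np.float32)  # saved in forward pass
    W = rng.standard_normal((in_dim, out_dim)).astype(np.float32)
    error = rng.standard_normal((batch, out_dim)).astype(np.float32)  # back-propagated error

    # Matrix multiplication 1: gradient used to adjust the weights.
    weight_gradient = original_activation.T @ error
    W_new = W - 0.01 * weight_gradient

    # Matrix multiplication 2: error fed back to the preceding layer.
    previous_layer_error = error @ W.T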

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier, for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access storage device, or a combination of one or more of them. Alternatively, or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by data processing apparatus.

The term "data processing apparatus" refers to data processing hardware and includes all types of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also, or further, comprise special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program, also known as a program, software application, script, or code, can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application-specific integrated circuit), or a GPGPU (general purpose graphics processing unit).

Computers suitable for executing computer programs include, for example, central processing units that can be based on general or special purpose microprocessors or both, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for executing or performing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Furthermore, the computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a Universal Serial Bus (USB) flash drive, to name a few.

Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other types of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. In addition, the computer may interact with the user by sending and receiving documents to and from the device used by the user; for example, by sending a web page to a web browser on the user device of the user in response to a request received from the web browser. Also, the computer may interact with the user by sending a text message or other form of message to a personal device (e.g., a smartphone running a messaging application) and receiving a response message from the user.

Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.

The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server sends data, e.g., an HTML page, to the user device, e.g., for the purpose of displaying data to and receiving user input from a user interacting with the device acting as a client. Data generated at the user device, e.g., a result of the user interaction, may be received at the server from the device.
