Programmable multiply-add array hardware

Document No.: 1205465    Publication date: 2020-09-01

Abstract: This technology, "Programmable multiply-add array hardware," was designed and created by 韩亮 (Han Liang) and 蒋晓维 (Jiang Xiaowei) on 2018-12-21. Its main content is as follows: An integrated circuit comprising a data architecture including N adders and N multipliers configured to receive operands. The data architecture receives instructions for selecting data flows between the N multipliers and the N adders of the data architecture. The selected data flow includes the following options: (1) a first data flow using the N multipliers and the N adders, for providing a multiply-accumulate mode; and (2) a second data flow, for providing a multiply-reduce mode.

1. A method for specifying a function to be performed on a data architecture comprising N adders and N multipliers configured to receive operands, the method comprising:

receiving an instruction for the data architecture to operate in one of a multiply-reduce mode or a multiply-accumulate mode; and

selecting, based on the instruction, a data flow between the N multipliers and at least some of the N adders of the data architecture.

2. The method of claim 1, wherein selecting the data flow comprises: in response to receiving an instruction corresponding to the multiply-reduce mode, selecting a first data flow using the N multipliers and N-1 adders, wherein one of the N adders is not used.

3. The method of claim 2, wherein the first data flow comprises the N-1 adders receiving inputs from the N multipliers.

4. The method of claim 1, wherein selecting the data flow comprises: selecting a second data flow using the N multipliers and the N adders in response to receiving an instruction corresponding to the multiply-accumulate mode.

5. The method of claim 4, wherein the second data flow includes each of the N adders receiving input operands from a respective one of the N multipliers.

6. An integrated circuit, the integrated circuit comprising:

a data architecture comprising N adders and N multipliers configured to receive operands, wherein the data architecture receives instructions to select a data flow between the N multipliers and at least some of the N adders of the data architecture, the selected data flow comprising the following options:

a first data flow using the N multipliers and the N adders, for providing a multiply-accumulate mode; and

a second data flow for providing a multiply-reduce mode.

7. The integrated circuit of claim 6, wherein the first data flow includes each of the N adders receiving input operands from a respective one of the N multipliers.

8. The integrated circuit of claim 6, wherein the second data flow uses the N multipliers and N-1 adders, wherein one of the N adders is not used.

9. The integrated circuit of claim 8, wherein the second data flow includes the N-1 adders receiving inputs derived from the N multipliers.

10. A non-transitory computer-readable storage medium storing a set of instructions executable by at least one processor of a device to cause the device to perform a method for specifying a function to be performed on a data architecture comprising N adders and N multipliers configured to receive operands, the method comprising:

receiving an instruction for the data architecture to operate in one of a multiply-reduce mode or a multiply-accumulate mode; and

selecting, based on the instruction, a data flow between the N multipliers and at least some of the N adders of the data architecture.

11. The non-transitory computer-readable storage medium of claim 10, wherein selecting the data flow comprises: in response to receiving an instruction corresponding to the multiply-reduce mode, selecting a first data flow using the N multipliers and N-1 adders, wherein one of the N adders is not used.

12. The non-transitory computer-readable storage medium of claim 11, wherein the first data flow includes the N-1 adders receiving inputs derived from the N multipliers.

13. The non-transitory computer-readable storage medium of claim 10, wherein selecting the data flow comprises: selecting a second data flow using the N multipliers and the N adders in response to receiving an instruction corresponding to the multiply-accumulate mode.

14. The non-transitory computer-readable storage medium of claim 13, wherein the second data flow includes each of the N adders receiving input operands from a respective one of the N multipliers.

Background

As neural network-based deep learning applications grow exponentially across business sectors, commodity central processing unit/graphics processing unit (CPU/GPU) based platforms are no longer a suitable computing substrate to support the ever-increasing computing demands in terms of performance, power efficiency, and economic scalability. Developing neural network processors to accelerate neural network-based deep learning applications has therefore received significant attention from established chip manufacturers, pioneering start-ups, and large internet companies alike. Single Instruction Multiple Data (SIMD) architectures can be applied to such chips to accelerate the computation of deep learning applications.

Neural network algorithms typically require large matrix multiply-accumulate operations. Acceleration hardware therefore typically requires massively parallel multiply-accumulate structures to speed up the computation. However, the area and power cost of such structures must be controlled in order to optimize the computational speed of the hardware, limit chip size, and reduce power consumption.

Disclosure of Invention

Embodiments of the present disclosure provide an architecture with a software-programmable connection between a multiplier array and an adder array, which enables the adders to be reused to perform either multiply-accumulate or multiply-reduce operations. This architecture is more area and power efficient than conventional solutions, which is important for neural network processing units that implement a large number of data channels.

Embodiments of the present disclosure provide a method for specifying a function to be performed on a data architecture including N adders and N multipliers configured to receive operands. The method includes: receiving an instruction for the data architecture to operate in one of a multiply-reduce mode or a multiply-accumulate mode; and selecting, based on the instruction, a data flow between the N multipliers and at least some of the N adders of the data architecture.

Further, embodiments of the present disclosure include an integrated circuit. The integrated circuit includes a data architecture including N adders and N multipliers configured to receive operands. The data architecture receives instructions for selecting data flows between the N multipliers and the N adders of the data architecture. The selected data flow includes the following options: (1) a first data flow using the N multipliers and the N adders, for providing a multiply-accumulate mode; and (2) a second data flow, for providing a multiply-reduce mode.

Furthermore, embodiments of the present disclosure include a non-transitory computer-readable storage medium storing a set of instructions executable by at least one processor of a device to cause the device to perform the above-described method.

Drawings

Fig. 1 illustrates an exemplary neural network processing unit chip architecture consistent with embodiments of the present disclosure.

Fig. 2 shows an exemplary architecture of a multiply-add array with 4 parallel channels.

FIG. 3 illustrates an exemplary architecture of a Multiply Accumulator (MAC) unit design.

Fig. 4 shows an exemplary architecture of parallel multipliers followed by a reduction adder tree.

Fig. 5 illustrates an exemplary architecture for mapping algorithms that typically require accumulation capability both within a single data channel and across channels.

Fig. 6A and 6B illustrate exemplary architectures of multiply-add arrays consistent with embodiments of the present disclosure.

FIG. 7 illustrates an exemplary method for specifying a function to be performed on a data architecture consistent with embodiments of the present disclosure.

Detailed Description

Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which like numerals in different drawings represent the same or similar elements, unless otherwise specified. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with the relevant aspects of the invention as set forth in the claims below.

Embodiments of the present disclosure may be implemented in a neural network processing unit (NPU) architecture, such as the exemplary NPU architecture 100 shown in fig. 1, to accelerate deep learning algorithms.

Fig. 1 illustrates an exemplary architecture 100 according to an embodiment of the present disclosure. As shown in FIG. 1, the architecture 100 may include an on-chip communication system 102, an off-chip memory 104, a memory controller 106, a Direct Memory Access (DMA) unit 108, a Joint Test Action Group (JTAG)/Test Access Port (TAP) controller 110, a bus 112, a peripheral interface 114, and the like. It should be appreciated that the on-chip communication system 102 may perform arithmetic operations based on the communicated data packets.

The on-chip communication system 102 may include a global manager 105 and a plurality of tiles 1024. Global manager 105 may include at least one cluster manager to coordinate with tiles 1024. For example, each cluster manager may be associated with an array of tiles that provides the synapse/neuron circuitry of a neural network. For example, the top layer of tiles of fig. 1 may provide circuitry representing an input layer of the neural network, while the second layer of tiles may provide circuitry representing a hidden layer of the neural network. As shown in FIG. 1, global manager 105 may include two cluster managers to coordinate with two tile arrays. Tile 1024 may include a SIMD architecture including one or more multipliers, adders, and multiply-accumulators, and may be configured to perform one or more operations (e.g., arithmetic calculations) on transmitted data packets under the control of global manager 105. To perform operations on transmitted data packets, tile 1024 may include at least one core to process the data packets and at least one buffer to store the data packets.

The off-chip memory 104 may include Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), and the like. The off-chip memory 104 may be configured to store large amounts of data at a slower access speed compared to on-chip memory integrated with one or more processors.

The memory controller 106 may read, write, or refresh one or more memory devices. The memory device may include on-chip memory and off-chip memory. For example, the memory device may be implemented as any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.

The DMA unit 108 may generate a memory address and initiate a memory read or write cycle. The DMA unit 108 may contain several hardware registers that can be written to and read from by one or more processors. The registers may include a memory address register, a byte count register, and one or more control registers. These registers may specify some combination of source, destination, transfer direction (read from or write to an input/output (I/O) device), size of transfer unit, and/or number of bytes transferred in a burst.

JTAG/TAP controller 110 may specify a dedicated debug port that implements a serial communication interface (e.g., JTAG interface) for low overhead access without requiring direct external access to the system address and data buses. JTAG/TAP controller 110 may also specify an on-chip test access interface (e.g., a TAP interface) that implements a protocol to access a set of test registers that provide various portions of the chip logic level and device capabilities.

The bus 112 may include an on-chip bus and an inter-chip bus. The on-chip bus may connect all internal components of the NPU architecture 100, such as the on-chip communication system 102, the off-chip memory 104, the memory controller 106, the DMA unit 108, the JTAG/TAP controller 110, and the peripheral interface 114, to each other.

The peripheral interface 114 (e.g., a PCIe interface) may support full-duplex communications between any two endpoints without inherent limitations on concurrent access across multiple endpoints.

In a computer with a Single Instruction Multiple Data (SIMD) architecture, multiple processing units (for example, Arithmetic Logic Units (ALUs) or small CPUs) compute simultaneously in parallel, each using its own data, typically 2 or 3 input operands and 1 output result. For example, multiply-add arrays are common in SIMD architectures, where each data lane may have a private multiplier and adder. The private multipliers and adders enable parallel processing of different data streams. FIG. 2 shows an exemplary architecture of a multiply-add array with 4 parallel lanes, where the array includes four multipliers M1-M4 and four adders A1-A4. It should be noted that the figures in this disclosure are shown with 4-way SIMD, but the concept can be extended to SIMD widths narrower or wider than 4 lanes.

As shown in FIG. 2, two operands are input to each multiplier, M1-M4. For simplicity, the operands op1 and op2 are input into the multiplier M1, and the multiplier M1 generates the result R1. The result R1 of the multiplication of operands op1 and op2 is input to adder a1, adder a1 also receiving operand op3 as input to generate result R2. The result R2 of the addition of operand op3 and result R1 may continue for further processing (not shown). The above operations can be summarized as follows:

R1 = op1 * op2
R2 = R1 + op3

Simultaneously with the above operations, other operands are input to the other multipliers shown in fig. 2, and the result of each of the other multipliers is input to one of the other adders along with a further operand; those results continue for further processing (not shown).
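To make the per-lane behavior concrete, the following C sketch models the FIG. 2 array as four independent lanes, each multiplying two operands and adding a third. This is only a behavioral illustration under stated assumptions; the operand values, lane-count constant, and printing are placeholders introduced here for demonstration and are not part of the disclosed hardware.

/* Minimal behavioral sketch (not RTL) of the FIG. 2 array: four
 * independent lanes, each with a private multiplier and adder.
 * Operand values are illustrative only. */
#include <stdio.h>

#define LANES 4

int main(void) {
    int op1[LANES] = {1, 2, 3, 4};    /* first multiplier operand per lane  */
    int op2[LANES] = {5, 6, 7, 8};    /* second multiplier operand per lane */
    int op3[LANES] = {9, 10, 11, 12}; /* adder operand per lane             */
    int r2[LANES];

    for (int lane = 0; lane < LANES; ++lane) {
        int r1 = op1[lane] * op2[lane]; /* multiplier Mn */
        r2[lane] = r1 + op3[lane];      /* adder An      */
        printf("lane %d: R2 = %d\n", lane, r2[lane]);
    }
    return 0;
}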

Some optimized designs may merge a multiplier and an adder into one Multiply Accumulator (MAC) unit to save area. FIG. 3 illustrates an exemplary architecture of a MAC unit design that includes four MAC units Mc1-Mc4. As shown in fig. 3, three operands are input to each MAC unit. For simplicity, operands op1, op2, and op3 are shown in FIG. 3, where operand op1 is multiplied by operand op2 and the result is added to operand op3. The result (e.g., R3) continues for further processing (not shown).

Simultaneously with the above operations, other operands are input to the other MAC units shown in fig. 3, and the result of each of the other MAC units continues for further processing (not shown). The operation of FIG. 3 is similar to that of FIG. 2, except that there is only one layer of components, namely MAC units Mc1-Mc4, instead of the two-layer arrangement shown in FIG. 2, where the first layer includes multipliers M1-M4 and the second layer includes adders A1-A4.

It should be noted, however, that the implementations shown in figs. 2 and 3 can only process data within private channels in parallel; that is, they have no cross-channel data processing capability. Furthermore, in some neural networks, large matrix multiply-add operations are very common, and such operations need to be mapped in an efficient manner onto parallel hardware that is large but not especially wide. Accumulation across multiple SIMD lanes can therefore become important to performance. To achieve faster reduction (add-accumulate) of results from different SIMD lanes, an adder tree is typically introduced after the multiplier array.

Fig. 4 shows an exemplary architecture of parallel multipliers followed by a reduction adder tree. In operation, a pair of operands is input to each multiplier, e.g., M1-M4. For simplicity, operands op1 and op2 are shown as inputs to multiplier M1 and also as inputs to multiplier M2, although it will be readily understood that other operand pairs may be input to the other multipliers M3 and M4 at the same time. Furthermore, even though operands op1 and op2 are shown as inputs to both multipliers M1 and M2, the operands supplied to M1 may differ from those supplied to M2 (and likewise for the operands input to multipliers M3 and M4); they may differ in identity and in the type and kind of data flowing through the inputs.

The result R4 of multiplying operands op1 and op2 at multiplier M1 is added, at adder A1, to the result R5 of multiplying operands op1 and op2 at multiplier M2, generating the result R6. The result R6 is added to the result R7 (from adder A2) at adder A3 to generate the result R8. The result R8 continues for further processing (not shown).

Adders A1-A3 form a reduction adder tree. The tree uses one fewer adder than the architecture shown in FIG. 2 but adds a level: FIG. 4 has three levels (level 1 including M1-M4, level 2 including A1-A2, and level 3 including A3), whereas FIG. 2 has two levels (level 1 including M1-M4 and level 2 including A1-A4). Although FIG. 4 has more levels, this architecture produces a single result (e.g., result R8) from the multipliers and the adder tree, whereas the architecture shown in FIG. 2 produces four separate, parallel results.
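The reduction data flow of FIG. 4 can likewise be modeled behaviorally. In the following C sketch, the products of the four multipliers are summed pairwise by two first-level adders and then combined by a third adder into a single result; the operand values are illustrative placeholders only, not values from the disclosure.

/* Minimal behavioral sketch (not RTL) of the FIG. 4 structure: four
 * parallel multipliers followed by a two-level reduction adder tree
 * (A1 and A2 in level 2, A3 in level 3) producing a single result. */
#include <stdio.h>

int main(void) {
    int op1[4] = {1, 2, 3, 4};  /* first operand of each multiplier  */
    int op2[4] = {5, 6, 7, 8};  /* second operand of each multiplier */
    int p[4];

    for (int lane = 0; lane < 4; ++lane)
        p[lane] = op1[lane] * op2[lane];      /* multipliers M1-M4 */

    int r6 = p[0] + p[1];                     /* adder A1 */
    int r7 = p[2] + p[3];                     /* adder A2 */
    int r8 = r6 + r7;                         /* adder A3 */

    printf("reduced result R8 = %d\n", r8);
    return 0;
}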

Indeed, algorithms mapped onto the architectures shown in figs. 2, 3, and 4 typically require accumulation capability both within a single data channel and across channels. For example, fig. 5 shows an example of an architecture that provides a parallel MAC layer followed by a reduction adder tree. In operation, a triple of operands is input to each MAC unit in layer 1, and the result from each MAC unit is input to the reduction adder tree. For example, operands op1, op2, and op3 are input into MAC unit Mc1 to generate a result R9. Other triples of operands are input to each of the MAC units Mc2-Mc4, generating results R10, R11, and R12, respectively. The result R9 (from MAC unit Mc1) is input to adder A1 along with result R10 (from MAC unit Mc2) to generate R13. Similarly, result R11 (from MAC unit Mc3) is input to adder A2 along with result R12 (from MAC unit Mc4) to generate R14. Adder A3 receives results R13 and R14 as operands to generate result R15, which continues for further processing (not shown).

Embodiments of the present disclosure provide programmable multiply-add array hardware. For example, embodiments describe the ability to select a data flow between a multiplier array and an adder array so that the adders can be reused to perform either multiply-accumulate or multiply-reduce operations. The architecture thereby provides higher area and power efficiency than alternative solutions.

Further, while the embodiments are directed to neural network processing units, it should be understood that the embodiments described herein may be implemented by any SIMD architecture hardware with cross-channel data processing capability, particularly accelerators for deep learning. This includes SIMD architecture hardware dedicated to neural network processing units and FPGAs, as well as GPUs and DSPs that have been upgraded to address the deep learning market.

Fig. 6A and 6B illustrate an exemplary architecture of a multiply-add array 600 that is programmable to perform multiply-accumulate and multiply-reduce modes consistent with embodiments of the present disclosure. As shown in FIGS. 6A and 6B, adders A1-A4 are reused in the multiply-reduce mode (FIG. 6A) and multiply-accumulate mode (FIG. 6B).

In operation, and as shown in FIG. 6A, adder A4 is disconnected from the data flow, while adders A1-A3 are connected to perform the multiply-reduce operation. In the illustration, multiplier M1 accepts two operands op1 and op2 to generate an output operand to adder A1. Multipliers M2, M3, and M4 operate in the same way as M1 and provide output operands to their respective adders. For example, multipliers M1 and M2 provide output operands to adder A1, while multipliers M3 and M4 provide output operands to adder A2. Adders A1 and A2 may add their input operands and provide output operands to adder A3.

To perform parallel multiply-accumulate operations, the data flow connects each adder A1-A4 to a corresponding multiplier, as shown in FIG. 6B. Multiplier M1 accepts two operands op1 and op2 to generate a result operand R16. The result operand R16 and operand op3 are provided as operands to adder A1, and the result from A1 may proceed to another array (not shown). Similarly, multipliers M2-M4 each accept a set of operands, and the result operands from M2, M3, and M4 are input to adders A3, A2, and A4, respectively, as their first operands. Each of A2-A4 accepts a second operand, and the result operands may proceed to other arrays (not shown).
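The following C sketch is a behavioral model, under stated assumptions, of the programmable selection between the two data flows: the same multipliers and adders are used either as a cross-lane reduction tree (FIG. 6A, with adder A4 unused) or as four parallel multiply-accumulate lanes (FIG. 6B). The exact adder-to-multiplier pairing of FIG. 6B is abstracted into a simple per-lane mapping, and the operand values are illustrative only.

/* Behavioral sketch (not the disclosed RTL) of the programmable array of
 * FIGS. 6A/6B: one mode flag selects between a reduction tree and four
 * independent multiply-accumulate lanes built from the same units. */
#include <stdio.h>

#define LANES 4

typedef enum { MULTIPLY_REDUCE, MULTIPLY_ACCUMULATE } array_mode_t;

/* out holds LANES results; in multiply-reduce mode only out[0] is meaningful. */
static void multiply_add_array(array_mode_t mode,
                               const int op1[LANES], const int op2[LANES],
                               const int op3[LANES], int out[LANES]) {
    int p[LANES];
    for (int i = 0; i < LANES; ++i)
        p[i] = op1[i] * op2[i];               /* multipliers M1-M4 */

    if (mode == MULTIPLY_REDUCE) {
        /* FIG. 6A: A1 and A2 sum pairs of products, A3 sums their outputs,
         * and A4 is disconnected from the data flow. */
        int a1 = p[0] + p[1];
        int a2 = p[2] + p[3];
        out[0] = a1 + a2;                     /* single cross-lane result */
        for (int i = 1; i < LANES; ++i)
            out[i] = 0;                       /* unused lanes in this mode */
    } else {
        /* FIG. 6B: each adder adds op3 of its lane to the lane's product,
         * giving four parallel multiply-accumulate results. */
        for (int i = 0; i < LANES; ++i)
            out[i] = p[i] + op3[i];
    }
}

int main(void) {
    int op1[LANES] = {1, 2, 3, 4}, op2[LANES] = {5, 6, 7, 8};
    int op3[LANES] = {9, 10, 11, 12}, out[LANES];

    multiply_add_array(MULTIPLY_REDUCE, op1, op2, op3, out);
    printf("multiply-reduce result: %d\n", out[0]);

    multiply_add_array(MULTIPLY_ACCUMULATE, op1, op2, op3, out);
    for (int i = 0; i < LANES; ++i)
        printf("multiply-accumulate lane %d: %d\n", i, out[i]);
    return 0;
}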

The disclosed embodiments provide software-controllable data flow between the multiplier array and the adder array to execute in either mode. One way to instruct the hardware to select the data flow through the multipliers and adders is through a compiler that generates different instructions for different desired operations. For example, to execute D = OP1 * OP2 + OP3, the compiler may generate the following instructions:

r0=LOAD Mem[&OP1];

r1=LOAD Mem[&OP2];

r2=LOAD Mem[&OP3];

r3=MUL r0,r1;

r3=ADD r3,r2;

STORE Mem[&D],r3.

The compiled code may include information for controlling multiplexers and registers to steer the data flow for each mode. A multiplier array, an adder array, multiplexers, and registers may be incorporated into each tile (e.g., tile 1024 of architecture 100 of fig. 1). Each tile may receive instructions from the cluster manager specifying the functions to be performed on the SIMD architecture within tile 1024 (in some cases cycle by cycle). Depending on the instructions received from the cluster manager, the SIMD architectures of the various tiles operate independently of one another and may run in the same mode of operation or in different modes of operation.

Upon receiving an instruction from the cluster manager, the core of the tile may issue an operation mode instruction into the tile's instruction buffer to specify a function to be performed on the SIMD architecture. These specified functions may result in a data stream corresponding to a multiply-reduce mode (as shown in fig. 6A) or a data stream corresponding to a multiply-accumulate mode (as shown in fig. 6B).
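The disclosure does not specify the instruction encoding, so the following C sketch is purely hypothetical: it shows one way a tile could decode a received operation-mode instruction into a configuration that disables adder A4 and selects the reduction-tree connectivity for the multiply-reduce mode. The single mode bit, field names, and types are illustrative assumptions and are not taken from the patent.

/* Hypothetical sketch: decoding an operation-mode instruction into a tile
 * configuration that selects the FIG. 6A or FIG. 6B data flow. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { MODE_MULTIPLY_REDUCE, MODE_MULTIPLY_ACCUMULATE } op_mode_t;

typedef struct {
    op_mode_t mode;          /* selected data flow                         */
    bool adder_enabled[4];   /* per-adder enable; A4 is off in reduce mode */
    bool reduce_tree;        /* true: adders form a tree across lanes      */
} tile_config_t;

static tile_config_t decode_mode_instruction(uint32_t instr) {
    tile_config_t cfg;
    /* Assume bit 0 selects the mode (illustrative encoding only). */
    cfg.mode = (instr & 1u) ? MODE_MULTIPLY_ACCUMULATE : MODE_MULTIPLY_REDUCE;
    cfg.reduce_tree = (cfg.mode == MODE_MULTIPLY_REDUCE);
    for (int i = 0; i < 4; ++i)
        cfg.adder_enabled[i] = true;
    if (cfg.mode == MODE_MULTIPLY_REDUCE)
        cfg.adder_enabled[3] = false;  /* adder A4 is disconnected (FIG. 6A) */
    return cfg;
}

int main(void) {
    tile_config_t c0 = decode_mode_instruction(0u);  /* multiply-reduce     */
    tile_config_t c1 = decode_mode_instruction(1u);  /* multiply-accumulate */
    printf("instr 0 -> reduce_tree=%d, A4 enabled=%d\n",
           c0.reduce_tree, c0.adder_enabled[3]);
    printf("instr 1 -> reduce_tree=%d, A4 enabled=%d\n",
           c1.reduce_tree, c1.adder_enabled[3]);
    return 0;
}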

As shown in figs. 6A and 6B, for SIMD architectures, the present disclosure uses N multipliers and N adders in both the multiply-reduce and multiply-accumulate modes. It should be understood that the SIMD architecture may be an N-way SIMD architecture with N multipliers and N adders, or may include adders and multipliers in addition to the N multipliers and the N adders (e.g., these other adders and multipliers may be inactive). Those skilled in the art will appreciate that the embodiments provided in this disclosure are more area and power efficient, which is important for neural network processing units that may implement thousands of data channels. In particular, the embodiments of the present disclosure are more area and power efficient than an implementation based on FIG. 2, which would require N multipliers and 2N-1 adders, with at least two more layers of adders after adders A1-A4. That is, the outputs of adders A1 and A2 would be input as operands to a fifth adder, the outputs of adders A3 and A4 would be input as operands to a sixth adder, and the outputs of the fifth and sixth adders would be input as operands to a seventh adder. Moreover, the present disclosure is more area and power efficient than the implementation of FIG. 5, which would require N MACs (fused multipliers and adders) and N-1 adders.
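As a concrete illustration with N = 4: the disclosed array uses 4 multipliers and 4 adders in both modes, whereas the extended FIG. 2 approach described above would use 4 multipliers and 2*4-1 = 7 adders, and the FIG. 5 approach would use 4 fused MAC units plus 4-1 = 3 reduction adders.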

Fig. 7 illustrates an exemplary method 700 for specifying a function to be performed on a data architecture consistent with embodiments of the present disclosure. The method may be performed by, for example, an NPU architecture (e.g., NPU architecture 100 shown in fig. 1). For example, a component of the NPU architecture (e.g., global manager 105, cluster manager, tile 1024, or any combination thereof) may assist in performing method 700.

After an initial start step 705, at step 710, a SIMD architecture with N multipliers and N adders is provided. Each of the N multipliers is configured to receive two incoming operands, and each of the N adders is configured to operate on two incoming operands.

At step 715, the SIMD architecture receives an instruction corresponding to either the multiply-reduce mode or the multiply-accumulate mode. For example, as described above, the instruction may specify a function to be performed on the SIMD architecture.

At step 720, if the instruction corresponds to the multiply-reduce mode, the SIMD architecture selects a data flow that provides the multiply-reduce function (e.g., as shown in FIG. 6A). In particular, the multiply-reduce data flow uses a set of connections involving the N multipliers and N-1 adders, where one of the adders is not used. For example, as shown in FIG. 6A, adder A4 is disconnected and adders A1-A3 are connected to perform the multiply-reduce operation. In the illustration, multiplier M1 accepts two operands op1 and op2 to generate an output operand to adder A1. Multipliers M2, M3, and M4 operate in the same way as M1 and provide output operands to their respective adders. For example, multipliers M1 and M2 provide output operands to adder A1, while multipliers M3 and M4 provide output operands to adder A2. Adders A1 and A2 may add their incoming operands and provide output operands to adder A3.

At step 725, if the instruction corresponds to the multiply-accumulate mode, the SIMD architecture selects a data flow that provides the multiply-accumulate function (e.g., as shown in FIG. 6B). In particular, the multiply-accumulate data flow uses a set of connections involving the N multipliers and the N adders. For example, as shown in FIG. 6B, each adder A1-A4 is connected after a corresponding multiplier. Multiplier M1 accepts two operands op1 and op2 to generate a result operand R16. The result operand R16 and operand op3 are provided as operands to adder A1, and the result from A1 may proceed to another array (not shown). Similarly, multipliers M2-M4 each accept a set of operands, and the result operands from M2, M3, and M4 are input to adders A3, A2, and A4, respectively, as their first operands. Each of A2-A4 accepts a second operand, and the result operands may proceed to other arrays (not shown).

After steps 720 or 725, the method 700 may end at 730. It should be understood that the SIMD architecture may operate in its indicated mode until the SIMD architecture receives a configuration instruction to change modes.

It should be appreciated that the global manager of the NPU architecture may use software to control the configuration of the SIMD architecture described above. For example, the global manager may send instructions to the tiles or cores to change the configuration mode of the multipliers and adders. The software may be stored on a non-transitory computer-readable medium. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, an NVRAM, a cache, registers, any other memory chip or cartridge, and networked versions thereof.

In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments may be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. The order of steps shown in the figures is also intended for illustrative purposes only and is not intended to be limited to any particular order of steps. As such, those skilled in the art will appreciate that steps may be performed in a different order while performing the same method.
