Computationally efficient implementation of simulated neurons

Document No.: 1942819 Publication date: 2021-12-07

Reading note: This technology, "Computationally efficient implementation of simulated neurons", was created by B.普钦格, E.哈塞尔斯泰纳, F.梅尔, B.米尼克霍夫, G.普罗米策 and P.扬舍尔 on 2020-04-29. Abstract: An apparatus includes an analog neural network, a digital controller, and a memory device. The analog neural network may include a first layer having a plurality of neurons. The plurality of neurons may be reused to form a second layer of the analog neural network. Each neuron may have a plurality of inputs. The digital controller may be coupled to the analog neural network and may provide a weight for each of the plurality of inputs. The memory device may be coupled to the digital controller to store the weight for each of the plurality of inputs.

1. An apparatus, comprising:

an analog neural network comprising a first layer having a plurality of neurons configured to be reused to form a second layer of the analog neural network, each neuron having a plurality of inputs;

a digital controller coupled to the analog neural network to provide a weight for each of the plurality of inputs; and

a memory device coupled to the digital controller to store a weight for each of the plurality of inputs.

2. The apparatus of claim 1, further comprising an analog-to-digital converter coupled to the analog neural network to receive an analog output of the analog neural network, the analog-to-digital converter configured to convert the analog output of the analog neural network to a digital output compatible with the digital controller, the analog-to-digital converter coupled to the digital controller to send the digital output to the digital controller.

3. The apparatus of claim 1 or 2, wherein the plurality of inputs of each neuron of the plurality of neurons comprises:

an analog input from a plurality of analog inputs from a sensor coupled to the analog neural network; and

an output of each neuron of the plurality of neurons.

4. The apparatus of claim 3, wherein:

the plurality of analog inputs are input to the first layer of the analog neural network; and

a count of the plurality of analog inputs is equal to a count of the plurality of neurons.

5. The apparatus of claim 4, wherein the plurality of analog inputs are multiplexed to form sets of parallel signals, each of the sets of parallel signals being processed sequentially by the analog neural network.

6. The apparatus of any one of the preceding claims, wherein each neuron comprises a first charge pump, a first operational amplifier, a second charge pump, and a second operational amplifier.

7. The apparatus of claim 6, wherein:

the first charge pump is coupled to the first operational amplifier;

the second charge pump is coupled to the second operational amplifier; and

the first operational amplifier is operable as a buffer to hold an output of the neuron when the neuron is associated with the first layer, and the second operational amplifier is operable as an integrator to calculate an output of the neuron when the neuron is reused as part of the second layer.

8. The apparatus of claim 7, wherein the second operational amplifier is operable to switch to operate as another buffer to preserve an output of the neuron when the neuron is reused as part of the second layer, and the first operational amplifier is operable to switch to operate as another integrator to calculate an output of the neuron when the neuron is further reused as part of a third layer of the neural network.

9. The apparatus of any one of claims 6-8, wherein each of the first operational amplifier and the second operational amplifier is coupled with an electronic component configured to perform bias compensation.

10. The apparatus of any one of claims 6-9, wherein each of the first operational amplifier and the second operational amplifier is coupled with an electronic component configured to amplify an output of the analog neural network.

11. The apparatus of any one of claims 6 to 10, wherein each neuron further comprises a clipping circuit configured to clip an output of one of the first operational amplifier and the second operational amplifier to maintain the output within a predetermined range of voltage values.

12. The apparatus of any of the preceding claims, wherein each neuron of the plurality of neurons has an electronic circuit comprising a plurality of switches, wherein opening and closing of each switch of the plurality of switches is controlled by the digital controller.

13. An apparatus for use in an electronic circuit of a simulated neuron of a neural network, the apparatus comprising:

a first operational amplifier configured to act as a buffer to preserve an output of the neuron when the neuron is associated with a first layer of the neural network; and

a second operational amplifier configured to act as an integrator to compute an output of the neuron when the neuron is reused as part of a second layer of the neural network.

14. The apparatus of claim 13, further comprising:

a first charge pump coupled to the first operational amplifier, the first charge pump configured to change a voltage at an input of the first operational amplifier; and

a second charge pump coupled to the second operational amplifier, the second charge pump configured to change a voltage at an input of the second operational amplifier.

15. The apparatus of claim 13 or claim 14, further comprising a clipping circuit configured to clip an output of one of the first operational amplifier and the second operational amplifier to maintain the output within a predetermined range of voltage values.

16. The apparatus of any one of claims 13 to 15, wherein:

the second operational amplifier is configured to switch to act as another buffer to preserve an output of the neuron when the neuron is reused as part of the second layer; and

the first operational amplifier is configured to switch to act as another integrator to compute an output of the neuron when the neuron is further reused as part of a third layer of the neural network.

17. The apparatus of any one of claims 13-16, wherein each of the first and second operational amplifiers is coupled with an electronic component configured to perform bias compensation.

18. The apparatus of any one of claims 13-17, wherein each of the first and second operational amplifiers is coupled with an electronic component configured to amplify an output of the analog neural network.

Technical Field

The subject matter described herein relates to computationally efficient implementations of simulated neurons that can be reused in multiple layers of a neural network.

Background

Artificial neural networks (referred to herein simply as neural networks) are computing systems for machine learning. Neural networks may be based on layers of connected nodes, called neurons, that loosely model the neurons in a biological brain. There may be multiple neurons per layer. Neurons in different layers are connected via connections that correspond to synapses in the biological brain. A neuron in a first layer may send a signal to another neuron in another layer via a connection between the two neurons. The signal sent on the connection may be a real number. The neuron in the other layer may process the received signal (i.e., the real number) and then send the processed signal onward to yet another neuron. The output of each neuron may be calculated by some non-linear function of that neuron's inputs. Each connection may have a weight that adjusts the signal before the signal is processed and the output is calculated.

Conventionally, such neural networks are implemented digitally. In other words, although analog computation can be more efficient than digital computation, neural networks are typically not implemented in analog form, for the reasons described below. Neural networks have many layers, each requiring many neurons. Conventional analog neural networks require a separate electronic component for each neuron in the neural network. As the number of layers increases, the number of electronic components required to implement an analog neural network also increases, which in turn increases the computational requirements, including power, space, processing, and memory requirements. Analog neural networks are therefore rare and, where they exist, traditionally inefficient.

Disclosure of Invention

In one aspect, an apparatus is described that includes an analog neural network, a digital controller, and a memory device. The analog neural network may include a first layer having a plurality of neurons. The neurons may be reused to form a second layer of the analog neural network. Each neuron may have a plurality of inputs. The digital controller may be coupled to the analog neural network to provide a weight for each of the inputs. The memory device may be coupled to the digital controller and may store the weight for each input.

In some implementations, one or more of the following features may be present. For example, the apparatus may include an analog-to-digital converter coupled to the analog neural network to receive an analog output of the analog neural network. The analog-to-digital converter may convert an analog output of the analog neural network to a digital output compatible with the digital controller. The analog-to-digital converter may be coupled to the digital controller to send the digital output to the digital controller.

The inputs of each neuron may include: an analog input from among a plurality of analog inputs from a sensor coupled to the analog neural network; and the output of each neuron. The analog inputs may be input to the first layer of the neural network. The count of analog inputs may be equal to the count of neurons. The analog inputs may be multiplexed to form sets of parallel signals (e.g., six sets of parallel signals, where each set includes eight signals). Each of the sets of parallel signals may be processed sequentially by the analog neural network.

Each neuron may include a first charge pump, a first operational amplifier, a second charge pump, and a second operational amplifier. The first charge pump may be coupled to the first operational amplifier. The second charge pump may be coupled to the second operational amplifier. The first operational amplifier is operable as a buffer to hold an output of the neuron when the neuron is associated with the first layer, and the second operational amplifier is operable as an integrator to calculate an output of the neuron when the neuron is reused as part of the second layer. The second operational amplifier is operable to switch to operate as another buffer to preserve the output of the neuron when the neuron is reused as part of the second layer, and the first operational amplifier is operable to switch to operate as another integrator to compute the output of the neuron when the neuron is further reused as part of a third layer of the neural network.

Each of the first operational amplifier and the second operational amplifier may be coupled with an electronic component configured to perform bias compensation. Each of the first operational amplifier and the second operational amplifier may be coupled with an electronic component configured to amplify an output of the analog neural network. Each neuron may further include a clipping circuit configured to clip an output of one of the first operational amplifier and the second operational amplifier to maintain the output within a predetermined range of voltage values.

Each of the neurons may have an electronic circuit comprising a plurality of switches. The opening and closing of each switch may be controlled by the digital controller.

In another aspect, an apparatus for an electronic circuit of a simulated neuron of a neural network is described. Such a device may include: a first operational amplifier configured to act as a buffer to preserve an output of a neuron when the neuron is associated with a first layer of a neural network; and a second operational amplifier configured to act as an integrator to compute an output of the neuron when the neuron is reused as part of a second layer of the neural network.

Some implementations include one or more of the following features. For example, the apparatus may include a first charge pump and a second charge pump. The first charge pump may be coupled to the first operational amplifier. The first charge pump may vary a voltage at an input of the first operational amplifier. The second charge pump may be coupled to the second operational amplifier. The second charge pump may be configured to vary a voltage at an input of the second operational amplifier. The apparatus may also include a clipping circuit that may clip an output of one of the first operational amplifier and the second operational amplifier to maintain the output within a predetermined range of voltage values.

The second operational amplifier may be configured to switch to act as another buffer to preserve the output of the neuron when the neuron is reused as part of the second layer. The first operational amplifier may be configured to switch to act as another integrator to compute the output of the neuron when the neuron is further reused as part of a third layer of the neural network. Each of the first operational amplifier and the second operational amplifier may be coupled with an electronic component to perform bias compensation. Each of the first operational amplifier and the second operational amplifier may be coupled with an electronic component configured to amplify an output of the analog neural network.

Some embodiments provide one or more of the following advantages. The reuse of neurons within the neural network may minimize the number of electronic components used within the electronic circuitry of the neural network, thereby minimizing the computational requirements, including power, space, processing, and memory requirements. Furthermore, the neural network can achieve very high parallelism, since all neurons can work simultaneously. In addition, the neural network can process data quickly because it performs computations as simple analog operations. Finally, because the neural network processes data efficiently, the neurons may require lower power and fewer other computational resources, such as processing capacity and memory.

The details of one or more implementations are set forth below. Other features and advantages will be apparent from the detailed description, the drawings, and the claims.

Drawings

Fig. 1 shows an example of an electronic chip with an analog neural network.

Fig. 2 shows a portion of the layers of a neural network.

Figure 3 shows an example of an electronic circuit for forming neurons within a neural network.

Figure 4 shows a digital controller for each neuron.

Fig. 5 shows a timing diagram illustrating the functionality of the digital controller.

Fig. 6 shows another timing diagram for computing two layers for a neuron represented by the electronic circuit.

Fig. 7 shows a circuit portion including a charge pump and an integrator within an electronic circuit of a neuron.

Fig. 8 shows an electronic circuit with electronic components added to the circuit portion of fig. 7 to perform bias compensation.

Fig. 9 shows an electronic circuit with electronic components added to the circuit portion of fig. 8 to perform output amplification.

Fig. 10 shows a circuit portion that performs clipping of the output voltage within the electronic circuitry of the neuron.

Fig. 11 shows another circuit part performing clipping of the output voltage.

Detailed Description

Fig. 1 shows an electronic chip 102 having an analog neural network 104 including neurons, a digital controller 106 that can digitally control electronic circuitry of the analog neural network 104, an analog-to-digital converter (ADC) 108 that can convert the analog output (e.g., output voltage) of each neuron into a digital format compatible with the digital controller 106, and a memory device 110 that can store a corresponding weight for each input of each neuron.

The neural network 104 may include several layers, including an input layer, a hidden layer, and an output layer. The input layer of the neural network 104 may have analog sensor inputs 112. The simulated neural network 104 may classify the data in those inputs 112 into various categories in order to perform machine learning or artificial intelligence. Each neuron in the input layer of the neural network 104 may receive a corresponding single analog sensor input 112, as described below in fig. 2. Thus, the number of analog sensor inputs 112 may be the same as the number of neurons in the input layer. In the example shown, the input layer may have 49 neurons, so the number of analog sensor inputs 112 may also be 49. The circuitry of each neuron in the neural network 104 is described below by fig. 2 and 3.

The digital controller 106 may control the circuitry of each neuron of the neural network 104 (as shown in fig. 3). The digital controller 106 may control the circuit by controlling the opening and closing of switches in the circuit, as further described below with respect to fig. 4-6. Various aspects of the circuitry of the neuron, such as charge pumping, integration, multiplication, and clipping, are described below by fig. 7-11. In addition to controlling the circuitry, the digital controller 106 may also perform tasks such as: communicating with devices external to the electronic chip 102 via a communication interface such as the Inter-Integrated Circuit (I2C) protocol, a 2-wire bus protocol for communication between devices; activating (e.g., booting) the electronic chip 102; assigning a weight to each input of each neuron and providing it to the neuron; and writing the weight values into the memory device 110 using the communication protocol.

The ADC 108 may receive the analog output of each neuron (which may be a voltage, as shown in the electronic circuit in fig. 3) and may convert the analog output to a digital format compatible with the digital controller 106. The digital controller 106 may save the output. The digital output stored by the digital controller 106 may be converted back to an analog output, which may then be provided as an input to the same neuron that is now being used as the next layer of neurons. Because the same neurons can be used to perform multiple layers of computations (e.g., charge pump, integration, multiplication, and clipping), the neurons are efficiently reused, advantageously minimizing the use of analog components for creating the neural network 104.
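The round trip described above (analog output into the ADC, digital storage by the controller, conversion back to an analog input for the reused neuron) can be sketched in idealized form. The 8-bit resolution, 1.0 V reference, and function names below are illustrative assumptions, not values from the patent.

```python
# Idealized model of the output round trip: a neuron's analog output voltage
# is quantized by the ADC, held digitally by the controller, and converted
# back to an analog level when the same neuron is reused for the next layer.
# Resolution and reference voltage are assumed for illustration only.

def adc(v, vref=1.0, bits=8):
    """Quantize a voltage in [0, vref] to an integer code."""
    v = max(0.0, min(v, vref))              # clamp to the converter's input range
    return round(v / vref * (2**bits - 1))

def dac(code, vref=1.0, bits=8):
    """Convert a stored code back to an analog voltage."""
    return code / (2**bits - 1) * vref
```

Reusing the neuron then amounts to feeding `dac(adc(vout))` back to its input, at the cost of at most one quantization step of error per layer.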

A memory device 110 external to the neural network 104 may store a corresponding weight for each input of each neuron in the neural network 104. Storing the weights in the memory device rather than in the electronic components of the circuit may advantageously minimize the use of analog components for creating the neural network 104. In the neural network 104, the output of each neuron (e.g., a particular neuron) may be connected to an input of every neuron (i.e., each neuron, including the particular neuron), as shown in fig. 2. Since each neuron also has one analog sensor input 112 as an input, the example shown with 49 neurons has 50 inputs per neuron (the outputs of all 49 neurons plus 1 analog sensor input 112). If the analog neural network 104 has 50 layers of neurons (one of which has 49 neurons as shown), the memory device 110 requires space to store 50 layers × 50 weights per layer = 2500 weights. If each weight requires 4 bits of storage space, then in this example the memory device 110 must have a minimum size of 2500 weights × 4 bits per weight = 10 kilobits = 1.25 kilobytes. Although in this example the analog neural network 104 has 49 neurons, 50 layers, and 4 bits of storage per weight, in other examples the analog neural network 104 may have any other number of neurons, any other number of layers, and/or any other memory requirement per weight.
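The sizing arithmetic above can be captured in a short sketch; the function name and parameter choices are illustrative only.

```python
# Back-of-envelope weight-memory sizing for the example above:
# 50 layers x 50 weights per layer, at 4 bits per weight.

def weight_memory_bytes(layers, weights_per_layer, bits_per_weight):
    """Minimum weight storage in bytes for the given network shape."""
    total_bits = layers * weights_per_layer * bits_per_weight
    return total_bits / 8

size = weight_memory_bytes(50, 50, 4)   # 10,000 bits = 1250 bytes = 1.25 kilobytes
```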

The arrows shown in the figure between the analog neural network 104, the digital controller 106, the analog-to-digital converter 108, and the memory device 110 indicate the electrical connections between those electronic components of the electronic chip 102.

Fig. 2 illustrates a portion 202 of the layers of the neural network 104, showing that each neuron 204/206 receives as inputs (a) one analog sensor input and (b) the output of each neuron 204 and 206. The analog sensor input to neuron 204 is 208, which is part of input 112 shown in FIG. 1. The analog sensor input to neuron 206 is 210, which is also part of input 112 shown in FIG. 1.

Fig. 3 shows an electronic circuit 302 forming a neuron 303 within the neural network 104. The neuron 303 is also referred to using reference numerals 204 and 206 in fig. 2. The electronic circuit 302 of the neuron 303 may include two integrators 304, two charge pumps 306, and a clipping circuit 308. The function of the electronic circuit 302 may be controlled by activation and deactivation of switches, whose reference signs all start with S in the figure. The switches may be controlled using a controller described below by fig. 4. A timing diagram explaining the function of such a controller and the switches is described below by fig. 5. Another example of a timing diagram for the neuron 303 is described below by fig. 6. A portion of the electronic circuit 302 (including a portion of the charge pump 306 and the integrator 304) is described below by fig. 7 to explain the function of the neuron. Another portion of the electronic circuit 302, including the charge pump 306 and another portion of the integrator 304, is described below by fig. 8 to explain the bias compensation obtained by the electronic circuit 302. Yet another portion of the electronic circuit 302 (including the charge pump 306 and a further portion of the integrator 304) is described below by fig. 9 to describe the output amplification obtained by the circuit 302. The clipping circuit 308 is described below by fig. 10 to explain the clipping of the output voltage. An alternative embodiment of the clipping circuit 308 is described below by fig. 11.

To buffer the output (i.e., to save the result Vout of the neuron 303 until that output is provided as an input to the neuron 303 when it is reused in another layer of the neural network 104), the architecture of the neuron 303 is designed such that each neuron 303 includes two integrators 304 (specifically 304a and 304b) and two charge pumps 306 (specifically 306a and 306b), as shown. While the first integrator 304a acts as an output buffer (i.e., keeps the value of the output Vout generated by the neuron 303 constant when the neuron 303 is, for example, used for the first time as a neuron in a first layer of the neural network 104), the second integrator 304b calculates the next output (i.e., Vout of the neuron 303 when the neuron 303 is reused, for example a second time, as a neuron in a second (i.e., subsequent) layer of the neural network 104). Subsequently, the second integrator 304b is switched to act as a buffer holding its calculated output, and the first integrator 304a is switched to calculate the next output.

For the integrator 304 acting as a buffer, the output of the integrator 304 is connected to the output of the neuron 303 through control switches S_12_1 and S_12_2, which keeps the output Vout constant. More specifically, if the operational amplifier OP1 is in buffer mode, the switch S_12_1 is closed and the switch S_12_2 is open. Thus, the output Vout of the neuron is the output of the operational amplifier OP1, which is connected to one input of each neuron. The operational amplifier OP2 then calculates (i.e., integrates, multiplies, and clips) the next output value (e.g., Vout of the neuron when the neuron is reused in a subsequent layer of the neural network 104). Subsequently, OP2 is switched to act as a buffer, and its output value is connected to the output by closing the switch S_12_2 and opening the switch S_12_1. The operational amplifier OP1 performs the calculations (integration, multiplication, and clipping) during this time.
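The alternating buffer/integrator roles can be modeled abstractly as follows. The class and method names are hypothetical, and the electrical behavior is reduced to a weighted sum; this is a behavioral sketch, not the patented circuit.

```python
# Toy model of the two-op-amp ping-pong scheme: one op amp holds (buffers) the
# previous layer's output while the other accumulates (integrates) the next
# one; a trigger swaps their roles. All names are illustrative assumptions.

class PingPongNeuron:
    def __init__(self):
        self.buffered = 0.0      # value held by the op amp in buffer mode
        self.integrating = 0.0   # value accumulated by the op amp in integrator mode

    def integrate(self, weights, inputs):
        """Accumulate the weighted sum that becomes the next layer's output."""
        self.integrating = sum(w * x for w, x in zip(weights, inputs))

    def trigger(self):
        """Swap roles: the integrator becomes the buffer; the old buffer resets."""
        self.buffered, self.integrating = self.integrating, 0.0

    @property
    def output(self):
        return self.buffered     # Vout stays constant between triggers
```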

In an alternative architecture of the neuron 303, the electronic circuit 302 may comprise a single integrator circuit (instead of the two shown in the figure) and an additional buffer circuit with a sampling capacitor. However, in at least some cases, such alternative electronic circuits may not be suitable because additional offset errors may be introduced that are not easily eliminated.

Because the neural network 104 is an analog neural network, the computations required for the propagation of data through the network 104 are accomplished at least in part as analog computations, without the need for a digital processor. This may provide the following advantages over using a digital neural network: (a) because all neurons 303 can operate simultaneously with very high parallelism, (b) because the computation is a simple analog operation that performs quickly, and (c) because of efficient data processing that consumes low power.

Figure 4 shows a digital controller 402 for each neuron 303 to control the switches in the electronic circuit 302 forming the neuron 303 within the neural network 104. Each neuron 303 in a layer of the neural network 104 may have a separate digital controller 402 that generates the control signals for all switches of that neuron 303, and such a digital controller 402 may be part of the digital controller 106. In alternative embodiments, however, all of the neurons of a layer may be coupled to the same digital controller 402, which may be the same as the digital controller 106. For example, for the two neurons 204 and 206 in fig. 2, each neuron has 3 analog inputs (N = 3): the first input is the analog sensor input 112, the second input is the output of that neuron, and the third input is the output of the other neuron. Each neuron also has 8 digital inputs, including the trigger (toggle) signal 404, the clock signal 406, a pulse_en signal 408 for each analog input, and a sign signal 410 for each analog input. Based on the digital inputs, the digital controller 402 may activate and/or deactivate one or more switches (shown on the right side of the digital controller 402 in the figure). Activation and deactivation of the switches by the digital controller 402 in response to an input is illustrated by the timing diagram of fig. 5 below.

Fig. 5 shows a timing diagram 502 that illustrates the function of the digital controller 402 to control the switches in the electronic circuit 302 that forms the neuron 303.

The trigger input 404 may trigger the alternation of the functions of the operational amplifiers OP1 and OP2: first, the operational amplifier OP1 is set to the integration mode and the operational amplifier OP2 is used to buffer the previously calculated output; with the next trigger pulse, the operational amplifier OP2 enters the integration mode and the operational amplifier OP1 enters the buffer mode; and this alternation continues with each subsequent trigger phase. While the trigger signal 404 is high, the operational amplifier that will next be used as an integrator is in the reset mode, i.e., the output of that operational amplifier is set to 0 volts.

The number of clock pulses on the clock line 406 may define the maximum possible weight that can be applied. Thus, in the example shown, the maximum possible weight is 7. The pulse width of the pulse_en signal 408 is defined by the weight. For example, if the digital controller 402 indicates that the weight of input 1 of "neuron 1" is 3, the pulse_en_1 signal will be high for 3 clock cycles, which causes charge to be pumped 3 times into the integrating capacitor C_int.

The digital controller 402 may set the sign signal 410 on the falling edge of the trigger signal 404. If the sign pulse 410 is 1, the value is negative; if the sign pulse 410 is 0, the value is positive. The sign bit 410 may control (e.g., by determining) whether the charge pump capacitor C_cp is precharged to the input voltage (with switches S1 and S2 closed and switches S3 and S4 open) or to 0 volts (with switches S1 and S4 closed and switches S2 and S3 open).

In the example shown, the first analog input has a weight of -3 (represented by 3 pulses and sign = 1), the second analog input has a weight of +6 (represented by 6 pulses and sign = 0), and the third analog input has a weight of -5 (represented by 5 pulses and sign = 1).
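The weight representation implied by this example (a pulse count plus a sign bit) can be sketched as follows; `encode_weight` is a hypothetical name introduced here, not a function from the patent.

```python
# Hypothetical encoder for the pulse-count weight scheme described above:
# a signed integer weight becomes (sign bit, number of pulse_en clock cycles),
# bounded by the maximum weight set by the clock pulses (7 in the example).

def encode_weight(weight, max_pulses=7):
    """Encode a signed weight as (sign, pulses); sign = 1 encodes negative."""
    if abs(weight) > max_pulses:
        raise ValueError("weight exceeds the available clock pulses")
    sign = 1 if weight < 0 else 0
    return sign, abs(weight)
```

For the three example inputs above, `encode_weight(-3)`, `encode_weight(6)`, and `encode_weight(-5)` give (1, 3), (0, 6), and (1, 5) respectively.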

After the multiplication phase, the integration phase begins once the output of the previous layer is stable. This means that the charge pump 306 pulses charge into the integrating capacitor of the integrator 304 as long as the pulse_en signal 408 is high. For example, at input 1 the sign is 1, which means that the capacitor C_cp is precharged to 0 volts (with switches S1 and S4 closed and switches S2 and S3 open). While the clock signal 406 is high, switches S1 and S4 are open and switches S2 and S3 are closed, which transfers charge into the integrating capacitor C_int and causes the integrator output voltage Vout to decrease. In the next low phase, the charge pump capacitor C_cp is again precharged to 0 volts. This process repeats as long as the pulse_en_1 signal 408 is high. The pulse_en signal 408 is generated by the digital controller 106.
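An idealized numeric model of this pulsed charge transfer is sketched below. It ignores op-amp offsets, switch non-idealities, and leakage, and the function name and capacitor values are assumptions for illustration.

```python
# Each clock pulse moves a packet of charge Q = C_cp * Vin between the charge
# pump capacitor and the integrating capacitor C_int, stepping Vout by Q/C_int
# in the direction selected by the sign bit (sign = 1: output decreases).

def integrate_pulses(vin, pulses, sign, c_cp, c_int, vout=0.0):
    """Return the integrator output after `pulses` charge-pump cycles."""
    step = (c_cp * vin) / c_int          # output voltage step per pulse
    for _ in range(pulses):
        vout = vout - step if sign else vout + step
    return vout
```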

Fig. 6 shows another timing diagram 602 for computing two layers for a neuron represented by the electronic circuit 302. As described herein, the computation of a layer refers to the integration of all input signals, multiplying the integrated signals by their weights, and clipping the result if it exceeds a positive or negative reference voltage. In the first layer, the operational amplifier OP1 may be in an integration mode and the operational amplifier OP2 may be in a buffer mode.

In the reset phase, the integrating capacitor C_int1 and the multiplying capacitor C_mult1 of the operational amplifier OP1 are charged to a bias voltage. This may be accomplished by closing switches S5_1, S7_1, and S9_1.

After the reset phase, the operational amplifier OP2 can be switched to multiplication mode by disconnecting the integrating capacitor C_int2 from the output and connecting it to the reference voltage, while connecting C_mult2 to the output. This is done by opening switch S6_2 and closing switches S7_2, S8_2, and S12_2. At the same time, the clipping circuit 308 is activated, which is accomplished by closing switches S10_2 and S11_2 to clip the output if it exceeds the positive or negative clipping reference.

When the multiplication phase ends, the integration phase of the operational amplifier OP1 and the buffering phase of the operational amplifier OP2 begin. Depending on the sign of the weight of each of the N inputs, switches S1_1, S2_1, S3_1, and S4_1 can be manipulated to push input-related charge into or out of the integrating capacitor C_int1. The weight may indicate the number of pulses applied on each input.

After the integration phase is completed, the operational amplifier OP2 enters the reset mode, which means that the integrating capacitor C_int2 of the operational amplifier OP2 is discharged so that it can assume the integration function for the next layer. The multiplication phase of the operational amplifier OP1 then begins. By opening the switch S6_1 and closing the switches S7_1 and S8_1, the charge stored in the integrating capacitor C_int1 can be transferred into the multiplying capacitor C_mult1. This multiplies the output by the factor C_int1/C_mult1. Also during the multiplication phase, the clipping circuit 308 is activated by closing switches S10_1 and S11_1. The operational amplifier OP1 then enters buffer mode and the operational amplifier OP2 begins integrating.

When the integration is completed, the operational amplifier OP1 changes to the reset mode again, and so on.

Fig. 7 shows a circuit portion 702 within the electronic circuit 302 of the neuron, which includes the charge pump 306 and an integrator 704 (formed using an operational amplifier). The integrator 704 is part of the integrator 304a shown in Fig. 3. Other portions of the integrator 304a (which are not part of the integrator 704) are added to the integrator 704 in Figs. 8 and 9, discussed below, to describe other functions of the integrator 304a.

The integrator 704 integrates (e.g., sums) the charge transferred by the charge pump 306. The charge pump 306 is a direct current (DC) to DC converter that uses a capacitor C_cp for charge storage to step the voltage up or down. The integrator 704 may be a current integrator, an electronic device that performs a time integration of the current, thereby measuring the total charge. In some embodiments, any integrator described herein (e.g., integrator 704 or 304) may also be referred to as a multiplier or adder.

In the charge pump circuit 306, Vref refers to a reference voltage, Vin refers to an input voltage, C_cp refers to a charge pump capacitor, V_cp refers to the voltage difference across the charge pump capacitor C_cp, and S1, S2, S3, and S4 refer to switches. In the integrator circuit 704, OP1 refers to an operational amplifier used as an integrator in combination with the other electrical components of the integrator circuit 704, Vout refers to an output voltage, C_int refers to an integration capacitor, Vref refers to a reference voltage, and S5 refers to a switch.

The integration capacitor C_int is configured to store the charge accumulated using the charge pump 306. The integration capacitor C_int may be discharged (e.g., reset) by closing switch S5. Depending on the sign 410 of the weight, charge may be added to or subtracted from the integration capacitor C_int during the integration period, i.e., by charging or discharging the integration capacitor. To add charge to the integrating capacitor C_int, switches S1 and S4 are closed and switches S2 and S3 are opened, thereby initially discharging the charge pump capacitor C_cp.

Subsequently, switches S1 and S4 are opened and switches S2 and S3 are closed, such that one side of the capacitor C_cp is connected to the negative input of the operational amplifier OP1, and the other side of the charge pump capacitor C_cp is connected to the neuron input Vin. The voltage at the negative input of the operational amplifier OP1 momentarily increases, which results in a change in the integrator output voltage Vout and a corresponding current from the neuron input Vin through the charge pump capacitor C_cp and the integration capacitor C_int. The voltage across the charge pump capacitor C_cp then settles to V_cp = -Vin, corresponding to the charge Q_cp = -Vin × C_cp. This means that a charge of Q = Vin × C_cp has been transferred to, and added to, the integrating capacitor C_int.

The process works the other way around for subtracting charge from the integrating capacitor C_int. First, switches S1 and S2 are closed and switches S3 and S4 are opened, thereby precharging the charge pump capacitor C_cp with Q_cp = Vin × C_cp. Subsequently, switches S1 and S2 are opened and switches S3 and S4 are closed, connecting the charge pump capacitor C_cp between the negative input of the operational amplifier OP1 and the positive input Vref of the operational amplifier OP1. This causes the charge pump capacitor C_cp to discharge and transfer a charge of Q = -Vin × C_cp into the integration capacitor C_int.
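The two switch sequences can be summarized numerically. The following is a behavioral sketch under the idealized assumption of complete charge transfer (function and variable names are ours, not from the source): each pulse moves a charge packet of Q = Vin × C_cp into or out of C_int, changing the integrator output by Q/C_int.

```python
def pump_pulse(vin, c_cp, c_int, add=True):
    """One idealized charge-pump pulse: transfer Q = vin * c_cp into (add=True)
    or out of (add=False) the integration capacitor, and return the resulting
    change in integrator output voltage (magnitude vin * c_cp / c_int)."""
    q = vin * c_cp
    return (q if add else -q) / c_int

# Example with C_cp = 1 pF and C_int = 48 pF: each pulse moves the output
# by Vin/48.
delta_up = pump_pulse(1.0, 1e-12, 48e-12, add=True)
delta_down = pump_pulse(1.0, 1e-12, 48e-12, add=False)
```

The sign convention here is illustrative; in the actual circuit the polarity follows from the inverting integrator configuration.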

Because the neuron has many inputs, the capacitance of the integration capacitor C_int must be much larger than that of the charge pump capacitor C_cp to avoid charge overflow in the integration capacitor C_int. This means that the output voltage Vout changes by only Vin × C_cp/C_int per charge pump pulse on one input. The pulses of the charge pump 306 are used to apply weights to the various inputs, such that the weight equals the number of charge pump pulses. See the discussion below regarding Fig. 9 for an example of the value of C_int relative to the value of C_cp.
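Since the weight equals the number of charge pump pulses, the integration phase amounts to a weighted sum scaled by C_cp/C_int. A minimal behavioral sketch (names and the idealized signs are ours):

```python
def integrate_weighted(inputs, weights, c_cp, c_int):
    """Idealized integration phase: apply |weight| charge-pump pulses per
    input; the sign of the weight selects charge addition or subtraction."""
    vout = 0.0
    for vin, w in zip(inputs, weights):
        for _ in range(abs(w)):
            vout += (vin if w > 0 else -vin) * c_cp / c_int
    return vout

# Two inputs, weights +3 and -2, with C_int/C_cp = 48:
# Vout = (0.5 * 3 - 0.25 * 2) / 48 = 1/48
vout = integrate_weighted([0.5, 0.25], [3, -2], 1.0, 48.0)
```

This is why a large C_int/C_cp ratio directly limits the per-pulse resolution of the weighted sum.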

Fig. 8 shows an electronic circuit 802 having an electronic component attachment 804 attached to the circuit portion 702 of Fig. 7 to perform bias compensation. Switch S5 is closed to reset the integrating capacitor C_int, as described above for Fig. 7. During this reset, the output Vout of the operational amplifier OP1 settles to a bias voltage. When switch S6 is open and switch S7 is closed, one side of the integrating capacitor C_int is connected to the reference voltage Vref, and the other side is connected to the output Vout of the operational amplifier OP1 via the closed switch S5. Therefore, the bias voltage is stored in the integrating capacitor C_int. Subsequently, the reset switch S5 is opened, switch S6 is closed, and switch S7 is opened, which results in one side of the integrating capacitor C_int being connected to the output Vout of the operational amplifier OP1. At this point, the bias voltage stored across C_int forces the output Vout below the negative input of the operational amplifier OP1, thereby compensating for the bias voltage at the output.
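The effect of this bias compensation can be modeled as sampling the amplifier's bias onto C_int during reset and subtracting it from later outputs. This is a hypothetical first-order model of the behavior, not the switch-level circuit:

```python
def compensated_output(v_signal, v_offset, vref=0.0):
    """First-order model of bias compensation: during reset the output settles
    to vref + v_offset, which is stored across C_int; connecting C_int to the
    output afterwards subtracts the stored bias from the raw output."""
    stored_offset = v_offset                  # sampled on C_int during reset
    raw_output = vref + v_signal + v_offset   # output including the bias
    return raw_output - stored_offset         # bias cancelled

# Regardless of the bias value, the compensated output tracks vref + v_signal.
```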

Each neuron may have "N" inputs Vin, as shown, where each input Vin may add charge to or subtract charge from an integrating capacitor C_int corresponding to the input. To save area and reduce the number of necessary control signals, the N inputs Vin may be multiplexed. Thus, only a small number of inputs Vin and charge pumps 306 are applied in parallel, and the "N" inputs Vin may be stepped through by using one or more multiplexers.

Fig. 9 shows an electronic circuit 902 having an electronic component add-on 904 added to the circuit portion 802 of Fig. 8 to perform output amplification.

All inputs Vin may be applied partially in parallel, rather than fully in parallel, with the input signals multiplexed. For example, if the number of charge pumps 306 is 8 (i.e., N = 8) and the number of neurons is 47, then each neuron has 48 inputs: one analog sensor input 112 and 47 neuron outputs (i.e., one output from each of the 47 neurons). These 48 inputs can be multiplexed into 6 groups of 8 parallel signals. Thus, the inputs are delivered sequentially, group by group, to the set of 8 charge pumps 306. During integration, the integration computation therefore takes 6 times as long as processing all inputs in parallel (since inputs 1-8 are processed first, then inputs 9-16, and so on).
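The grouping arithmetic above can be sketched as follows (a hypothetical helper, not part of the source):

```python
def schedule_groups(n_inputs, n_pumps):
    """Split n_inputs into sequential groups of at most n_pumps parallel
    signals, matching the multiplexed delivery to the charge pumps."""
    return [list(range(i, min(i + n_pumps, n_inputs)))
            for i in range(0, n_inputs, n_pumps)]

groups = schedule_groups(48, 8)
# 6 groups of 8 inputs each, so integration takes 6 sequential steps
```

The number of groups, and hence the slowdown factor relative to fully parallel operation, is the ceiling of n_inputs/n_pumps.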

Due to this serial (i.e., sequential) implementation, the ratio between the integrating capacitor C_int and the charge pump capacitor C_cp must be very high to avoid the integrator running into the rail (which would result in incorrect clipping of the output). The problem of the integrator running into the rail (i.e., incorrect clipping of the output) is now described by way of example. If inputs 1-8 have high positive input values and high positive weights applied, such that the sum of the input × weight products is 2.5 V while the power supply is only 1.8 V, the output should clip to 1.8 V only at the end of the computation. If the integrator output (i.e., the sum of the input × weight products) needs to be reduced by 0.3 V after the next set of inputs 9-16, then the calculated output is not the accurate value of 2.2 V (0.3 V below the unclipped 2.5 V from the previous set) but 1.5 V (0.3 V below the prematurely clipped 1.8 V from the previous set), which is clearly inaccurate. To avoid such a problem, the ratio C_int/C_cp may be, for example, 48/1, giving enough margin to obtain accurate results. Note that the ratio may vary with the number of input groups and the number of inputs in each group. In another example, where all 48 inputs form a single group, such that all inputs are parallel and not multiplexed, the ratio C_int/C_cp may be 5/1 or 10/1.
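The clipping error in this example can be reproduced with a small numerical sketch (an illustration of the described failure mode, with our own function names):

```python
RAIL = 1.8  # supply voltage from the example, in volts

def running_clipped_sum(group_sums, rail=RAIL):
    """Integrator that saturates at the rail after every input group:
    this models the integrator 'running into the rail'."""
    v = 0.0
    for s in group_sums:
        v = max(-rail, min(rail, v + s))
    return v

def end_clipped_sum(group_sums, rail=RAIL):
    """Ideal behavior: integrate everything first, clip only once at the end."""
    return max(-rail, min(rail, sum(group_sums)))

# Groups contribute +2.5 V then -0.3 V: premature saturation yields 1.5 V,
# whereas clipping only the final 2.2 V result would yield 1.8 V.
```

A sufficiently large C_int/C_cp ratio keeps the intermediate sums far from the rail, so the running and end-clipped results agree.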

When the inputs are divided into 6 groups of 8 inputs, the C_int/C_cp ratio of 48/1 means that one pulse on one input is attenuated by a factor of 48. For example, with a maximum weight of 7, an input with a voltage of Vin results in a maximum output of 7/48 × Vin. Because this attenuation of the input is high, the output Vout needs to be amplified. The additional electronic component add-on 904 enables such amplification, as described below.

When all inputs Vin have been processed, switch S6 is opened and switch S7 is closed, which connects the integrating capacitor C_int to the reference voltage.

Switch S8 is then closed while switch S9 remains open. All the charge accumulated in the integrating capacitor C_int is transferred to the C_mult capacitor, which increases the output voltage Vout by the factor C_int/C_mult. The ratio C_int/C_mult may be 48/7, which means that the maximum output of 7/48 × Vin is multiplied by 48/7, resulting in 7/48 × Vin × 48/7 = Vin at the maximum weight. Note that because the C_mult capacitance is much smaller than C_int, the increase in voltage Vout, and thus the amplification, can be large.
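The multiplication-phase arithmetic can be checked numerically. Under the idealized assumption of complete charge transfer (capacitor values in relative units, names are ours):

```python
C_INT = 48.0  # integration capacitance (relative units)
C_MULT = 7.0  # multiplication capacitance (relative units)

def multiply_phase(v_int):
    """Transfer the charge Q = v_int * C_INT from C_int onto C_mult: the same
    charge on the smaller capacitor gives a voltage v_int * C_INT / C_MULT."""
    return v_int * C_INT / C_MULT

# A maximum-weight input leaves 7/48 * Vin on C_int after integration;
# the multiplication phase restores (7/48) * (48/7) * Vin = Vin.
vin = 1.0
restored = multiply_phase(7 / 48 * vin)
```

Choosing C_mult so that C_int/C_mult exactly inverts the integration attenuation at maximum weight keeps the full output range usable.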

Fig. 10 shows a circuit portion 1002 within the electronic circuit 302 of the neuron, which performs clipping of the output voltage Vout. Circuit portion 1002 includes a variation of the clipping circuit 308. The clipper (or clipping circuit) 1002 is an electronic circuit designed to prevent the output voltage from exceeding a range defined by a positive reference voltage Ref_p and a negative reference voltage Ref_n. Here, the comparators comp1 and comp2 compare the output voltage Vout of the integrator 304 with the positive reference voltage Ref_p and the negative reference voltage Ref_n, respectively. A comparator is a device that compares two voltages or currents and outputs a digital signal indicating which is greater. The positive reference voltage Ref_p and the negative reference voltage Ref_n represent the boundaries of the clipping function. After the integration or multiplication phase, when the output of the integrator 304 has settled, a digital pulse on the out_en pin selects the particular output switch to be closed, based on the results of the comparisons by comparators comp1 and comp2. If the output of the integrator 304 is higher than Ref_p, the output of comp1 is high and, accordingly, the switch to Ref_p is closed. If the output of the integrator 304 is less than Ref_n, the output of comparator comp2 is high, and thus the switch to Ref_n is closed. If the integrator output lies between the reference voltages Ref_p and Ref_n, the outputs of comparators comp1 and comp2 are both 0, so the switch to the output of the integrator 304 is closed.
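The switch-selection logic just described can be expressed as a small truth-table sketch (an illustration of the selection behavior, not the comparator hardware):

```python
def clip_output(v_int, ref_p, ref_n):
    """Model of the out_en switch selection: route Ref_p if comp1 is high,
    Ref_n if comp2 is high, otherwise pass the integrator output through."""
    comp1 = v_int > ref_p   # integrator output above the positive reference
    comp2 = v_int < ref_n   # integrator output below the negative reference
    if comp1:
        return ref_p
    if comp2:
        return ref_n
    return v_int

# Example thresholds: Ref_p = 1.0 V, Ref_n = -1.0 V
```

Exactly one of the three output switches closes for any integrator voltage, since comp1 and comp2 can never both be high at once.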

Fig. 11 shows a circuit portion 1102 that performs clipping of the output voltage, as an alternative to the circuit portion 1002 of Fig. 10. The clipping circuit 1102 clips the output voltage at particular positive and negative voltages. This clipping may implement the activation function of the neuron. It also serves to keep the output signal, which is used as the input signal for the next loop, low, in order to further increase the margin of the neuron. Clipping circuit 1102 includes two comparators comp1 and comp2, whose positive pins are connected to the output voltage Vout and whose negative pins are connected to a positive reference voltage Ref_p and a negative reference voltage Ref_n, respectively. The outputs of comparators comp1 and comp2 are connected to the negative input of the amplifier via a diode D. If the neuron output is above the positive reference voltage, the output of comp1 goes high and charges the capacitor C_mult through the diode until the output voltage Vout equals the positive reference voltage Ref_p. Negative clipping operates in the same or a similar manner via comparator comp2.

Various embodiments of the subject matter described herein can be implemented in digital electronic circuitry, integrated circuitry, specially designed Application Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs. These computer programs may be executed and/or interpreted on programmable systems. A programmable system may include at least one programmable processor, which may be special or general purpose. The at least one programmable processor may be coupled to a storage system, at least one input device, and at least one output device. The at least one programmable processor can receive data and instructions from, and can send data and instructions to, the storage system, the at least one input device, and the at least one output device.

These computer programs (also known as programs, software applications or code) may include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term "machine-readable medium" can refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that can receive machine instructions as a machine-readable signal. The term "machine-readable signal" may refer to any signal used to provide machine instructions and/or data to a programmable processor.

Although various embodiments have been described in detail above, other modifications are possible. For example, the logic flows depicted in the figures and described herein do not require the particular order shown, or sequential order, to achieve desirable results. Other implementations are within the scope of the following claims.
