Precision tuning of programming of analog neural memory in deep learning artificial neural networks

Document No. 517842, published 2021-05-28

Note: This technology, "Precision tuning of programming of analog neural memory in deep learning artificial neural networks," was created by H. V. Tran, S. Lemke, V. Tiwari, N. Do, and M. Reiten on 2019-07-25. Summary: Embodiments of a precision tuning algorithm and apparatus for accurately and quickly depositing correct amounts of charge on the floating gates of non-volatile memory cells within a vector-matrix multiplication (VMM) array in an artificial neural network are disclosed. Thus, the selected cell can be programmed very accurately to hold one of the N different values.

1. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate, the method comprising:

performing a coarse programming process comprising:

selecting one of M different current values as a first threshold current value, where M < N;

adding charge to the floating gate; and

repeating the adding step until a current through the selected non-volatile memory cell is less than or equal to the first threshold current value during a verify operation; and

performing a precision programming process until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.
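The two-stage tuning recited in claim 1 can be sketched as a behavioral model. This is an illustrative Python sketch, not the patented circuitry; the cell interface (`add_charge`, `read_current_ua`), the step sizes, and the linear charge-to-current model are all assumptions made for demonstration.

```python
def program_cell(cell, first_threshold_ua, second_threshold_ua,
                 coarse_step=0.10, fine_step=0.01):
    """Behavioral sketch of the coarse-then-precision process of claim 1.

    `cell` is assumed to expose add_charge() and read_current_ua() (the
    verify operation); cell current falls as floating-gate charge grows.
    """
    # Coarse programming: large charge increments until the verify
    # current drops to the first (looser) threshold.
    while cell.read_current_ua() > first_threshold_ua:
        cell.add_charge(coarse_step)
    # Precision programming: small increments until the verify current
    # drops to the second (tighter) threshold.
    while cell.read_current_ua() > second_threshold_ua:
        cell.add_charge(fine_step)
    return cell.read_current_ua()


class ModelCell:
    """Toy cell model: verify current falls linearly with added charge."""
    def __init__(self, start_ua=10.0, ua_per_unit_charge=1.0):
        self.current = start_ua
        self.slope = ua_per_unit_charge

    def add_charge(self, amount):
        self.current = max(0.0, self.current - self.slope * amount)

    def read_current_ua(self):
        return self.current
```

Because the coarse loop can overshoot the first threshold by at most one coarse step, the error entering the precision loop is bounded, which is what allows the second threshold to sit close to the final target.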

2. The method of claim 1, further comprising:

performing a second precision programming process until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

3. The method of claim 1, wherein the precision programming process comprises applying voltage pulses of increasing magnitude to a control gate of the selected non-volatile memory cell.

4. The method of claim 1, wherein the precision programming process comprises applying voltage pulses of increasing duration to a control gate of the selected non-volatile memory cell.

5. The method of claim 2, wherein the second precision programming process comprises applying voltage pulses of increasing magnitude to a control gate of the selected non-volatile memory cell.

6. The method of claim 2, wherein the second precision programming process comprises applying voltage pulses of increasing duration to a control gate of the selected non-volatile memory cell.

7. The method of claim 1, wherein the selected non-volatile memory cell comprises a floating gate.

8. The method of claim 7, wherein the selected non-volatile memory cell is a split gate flash memory cell.

9. The method of claim 1, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

10. The method of claim 1, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

11. The method of claim 1, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

12. The method of claim 1, further comprising:

performing a read operation on the selected non-volatile memory cell;

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

13. The method of claim 1, further comprising:

performing a read operation on the selected non-volatile memory cell;

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.
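Claims 12 and 13 (and later claims 54-56) convert the cell's read current to digital bits with an integrating or sigma-delta converter. A dual-slope integrating ADC can be modeled in a few lines; the fixed integration time, reference current, and clock rate below are illustrative assumptions, not values from the disclosure.

```python
def dual_slope_counts(i_cell_ua, t_integrate_us, i_ref_ua, f_clk_mhz):
    """Dual-slope integrating ADC model: the cell (neuron) current
    charges the integrating capacitor for a fixed time, then a reference
    current discharges it; the clock count during discharge is
    proportional to the cell current, independent of the capacitor value.
    """
    t_discharge_us = i_cell_ua * t_integrate_us / i_ref_ua
    return int(t_discharge_us * f_clk_mhz)  # counts = discharge time x clock
```

For example, a 2 uA cell integrated for 10 us against a 1 uA reference with a 10 MHz counter yields 200 counts, so the count scales linearly with the stored analog value.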

14. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate and a control gate, the method comprising:

performing a coarse programming process comprising:

applying a first programming voltage to the control gate of the selected non-volatile memory cell;

applying a current of a first value through the selected non-volatile memory cell and determining a first voltage value of the control gate;

applying a current of a second value through the selected non-volatile memory cell and determining a second voltage value of the control gate;

determining a slope value based on the first value and the second value;

determining a next program voltage value based on the slope value;

adding an amount of charge to the floating gate of the selected non-volatile memory cell until a current through the selected non-volatile memory cell during a verify operation is less than or equal to a first threshold current value; and

performing a precision programming process until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.
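The slope determination in claim 14 exploits the fact that, in subthreshold operation, the control-gate voltage is linear in the logarithm of cell current, so two forced-current measurements fix the line and let the next program voltage be extrapolated. The following sketch assumes that relationship; the names and the `v_base` offset are illustrative, not the patented implementation.

```python
import math

def next_program_voltage(vcg1, i1_ua, vcg2, i2_ua, i_target_ua, v_base):
    """Extrapolate the next control-gate program voltage from two
    measured (control-gate voltage, forced current) points.

    In subthreshold, I = Io * exp(Vcg / (k * Ut)), so Vcg is linear in
    ln(I); the slope below is in volts per e-fold of current.
    """
    slope = (vcg2 - vcg1) / (math.log(i2_ua) - math.log(i1_ua))
    # Control-gate voltage at which the cell would conduct the target
    # current, offset by a base program voltage (v_base is illustrative).
    v_target = vcg1 + slope * (math.log(i_target_ua) - math.log(i1_ua))
    return v_base + v_target
```

With two points (1.0 V at 1 uA, 1.2 V at 10 uA), the extrapolated control-gate voltage for 100 uA is 1.4 V: one more decade along the same line.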

15. The method of claim 14, further comprising:

performing a second precision programming process until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

16. The method of claim 14, wherein the precision programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

17. The method of claim 14, wherein the precision programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

18. The method of claim 15, wherein the second precision programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

19. The method of claim 15, wherein the second precision programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

20. The method of claim 14, wherein the selected non-volatile memory cell comprises a floating gate.

21. The method of claim 20, wherein the selected non-volatile memory cell is a split gate flash memory cell.

22. The method of claim 14, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

23. The method of claim 14, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

24. The method of claim 14, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

25. The method of claim 14, further comprising:

performing a read operation on the selected non-volatile memory cell;

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

26. The method of claim 14, further comprising:

performing a read operation on the selected non-volatile memory cell;

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.

27. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate and a control gate, the method comprising:

performing a coarse programming process comprising:

applying a programming voltage to the control gate of the selected non-volatile memory cell;

repeating the applying step and increasing the programming voltage by an incremental voltage each time the applying step is performed, until a current through the selected non-volatile memory cell is less than or equal to a first threshold current value during a verify operation; and

performing a precision programming process until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.
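The coarse process of claim 27 is an increment-step scheme: the same program pulse is reapplied with the control-gate voltage raised by a fixed increment each pass until the verify current reaches the threshold. A behavioral sketch, in which the cell interface, the toy charge model, and the safety limit are illustrative assumptions:

```python
def coarse_program_ispp(cell, v_start, v_increment, threshold_ua, v_max=12.0):
    """Apply program pulses of stepwise-increasing control-gate voltage
    until the verify current falls to the threshold (claim 27's loop)."""
    v = v_start
    while cell.read_current_ua() > threshold_ua:
        if v > v_max:
            raise RuntimeError("cell failed to program within voltage limit")
        cell.apply_program_pulse(v)
        v += v_increment
    return v  # next voltage that would have been applied


class PulseCell:
    """Toy cell: each pulse removes current in proportion to its voltage."""
    def __init__(self, start_ua=10.0):
        self.current = start_ua

    def apply_program_pulse(self, v):
        self.current = max(0.0, self.current - 0.2 * v)

    def read_current_ua(self):
        return self.current
```

Because later pulses are stronger, the loop converges quickly on slow cells while the verify check stops it as soon as the target is crossed.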

28. The method of claim 27, further comprising:

performing a second precision programming process until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

29. The method of claim 27, wherein the precision programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

30. The method of claim 27, wherein the precision programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

31. The method of claim 28, wherein the second precision programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

32. The method of claim 28, wherein the second precision programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

33. The method of claim 27, wherein the selected non-volatile memory cell comprises a floating gate.

34. The method of claim 33, wherein the selected non-volatile memory cell is a split gate flash memory cell.

35. The method of claim 27, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

36. The method of claim 27, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

37. The method of claim 27, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

38. The method of claim 27, further comprising:

performing a read operation on the selected non-volatile memory cell;

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

39. The method of claim 27, further comprising:

performing a read operation on the selected non-volatile memory cell;

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.

40. A method of reading a selected non-volatile memory cell that stores one of N possible values, where N is an integer greater than 2, the method comprising:

applying digital input pulses to the selected non-volatile memory cell;

in response to each of the digital input pulses, determining a value stored in the selected non-volatile memory cell based on an output of the selected non-volatile memory cell.
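In the pulsed read of claim 40, each digital input pulse gates the cell briefly, and its output (a current, charge, or digital bit, per claims 45-47) accumulates in proportion to the pulse count, which is how a train of pulses can encode a binary value or a bit-position weight (claims 41-42). A minimal charge-accumulation model, with illustrative units:

```python
def read_with_pulses(cell_current_ua, num_pulses, pulse_width_us):
    """Accumulate output charge over a train of digital input pulses;
    each pulse contributes (cell current x pulse width) of charge.
    uA x us = pC, so the result is in picocoulombs."""
    charge_pc = 0.0
    for _ in range(num_pulses):
        charge_pc += cell_current_ua * pulse_width_us
    return charge_pc
```

For a bit at position k, applying 2**k identical pulses scales the cell's contribution by the bit's binary weight before the charge is digitized downstream.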

41. The method of claim 40, wherein the number of digital input pulses corresponds to a binary value.

42. The method of claim 40, wherein the number of digital input pulses corresponds to a digital bit position value.

43. The method of claim 40, wherein the determining step comprises receiving an output neuron in an integrating analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

44. The method of claim 40, wherein the determining step comprises receiving an output neuron in a successive approximation register analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

45. The method of claim 40, wherein the output is a current.

46. The method of claim 40, wherein the output is a charge.

47. The method of claim 40, wherein the output is a digital bit.

48. The method of claim 40, wherein the selected non-volatile memory cell comprises a floating gate.

49. The method of claim 48, wherein the selected non-volatile memory cell is a split gate flash memory cell.

50. The method of claim 40, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

51. A method of reading a selected non-volatile memory cell that stores one of N possible values, where N is an integer greater than 2, the method comprising:

applying an input to the selected non-volatile memory cell;

in response to the input, determining, using an analog-to-digital converter circuit, a value stored in the selected non-volatile memory cell based on an output of the selected non-volatile memory cell.

52. The method of claim 51, wherein the input is a digital input.

53. The method of claim 51, wherein the input is an analog input.

54. The method of claim 51, wherein the determining step comprises receiving an output neuron in an integrating single-slope or dual-slope analog-to-digital converter and generating a digital bit indicative of the value stored in the non-volatile memory cell.

55. The method of claim 51, wherein the determining step includes receiving an output neuron in a SAR analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

56. The method of claim 51, wherein the determining step comprises receiving an output neuron in a sigma-delta type analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

57. The method of claim 51, wherein the selected non-volatile memory cell comprises a floating gate.

58. The method of claim 51, wherein the selected non-volatile memory cell is a split gate flash memory cell.

59. The method of claim 51, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

60. The method of claim 51, wherein the selected non-volatile memory cell operates in a sub-threshold region.

61. The method of claim 51, wherein the selected non-volatile memory cell operates in a linear region.

Technical Field

Embodiments of precision tuning algorithms and apparatus for accurately and quickly depositing correct amounts of charge on floating gates of non-volatile memory cells within a vector-matrix multiplication (VMM) array in an artificial neural network are disclosed.

Background

Artificial neural networks mimic biological neural networks (the central nervous system of animals, particularly the brain), and are used to estimate or approximate functions that may depend on a large number of inputs and are generally unknown. Artificial neural networks typically include layers of interconnected "neurons" that exchange messages with each other.

FIG. 1 illustrates an artificial neural network, where circles represent the inputs or layers of neurons. Connections (called synapses) are indicated by arrows and have a numerical weight that can be adjusted empirically. This enables the neural network to adapt to the input and to learn. Typically, a neural network includes a layer of multiple inputs. There are typically one or more intermediate layers of neurons, and an output layer of neurons that provides the output of the neural network. Neurons at each level make decisions based on data received from synapses, either individually or collectively.

One of the major challenges in developing artificial neural networks for high-performance information processing is the lack of adequate hardware technology. In practice, practical neural networks rely on a very large number of synapses to achieve high connectivity between neurons, i.e., very high computational parallelism. In principle, such complexity can be achieved with digital supercomputers or clusters of dedicated graphics processing units. However, in addition to their high cost, these approaches also suffer from poor energy efficiency compared to biological networks, which consume far less energy primarily because they perform low-precision analog computation. CMOS analog circuits have been used for artificial neural networks, but most CMOS-implemented synapses are too bulky given the large number of neurons and synapses required.

Applicant previously disclosed an artificial (analog) neural network that utilizes one or more non-volatile memory arrays as the synapses in U.S. patent application No. 15/594,439, which is incorporated herein by reference. The non-volatile memory arrays operate as an analog neuromorphic memory. The neural network device includes a first plurality of synapses configured to receive a first plurality of inputs and to generate therefrom a first plurality of outputs, and a first plurality of neurons configured to receive the first plurality of outputs. The first plurality of synapses includes a plurality of memory cells, wherein each of the memory cells includes: spaced-apart source and drain regions formed in a semiconductor substrate, with a channel region extending between the source and drain regions; a floating gate disposed over and insulated from a first portion of the channel region; and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells is configured to multiply the first plurality of inputs by the stored weight values to generate a first plurality of outputs.

Each non-volatile memory cell used in an analog neuromorphic memory system must be erased and programmed to maintain a very specific and precise amount of charge (i.e., number of electrons) in the floating gate. For example, each floating gate must hold one of N different values, where N is the number of different weights that can be indicated by each cell. Examples of N include 16, 32, 64, 128, and 256.

One challenge in VMM systems is being able to program selected cells with the precision and granularity required for different values of N. For example, extreme precision is required in the programming operation if the selected cell can include one of 64 different values.

What is needed is an improved programming system and method suitable for use with a VMM array in an analog neuromorphic memory system.

Disclosure of Invention

Embodiments of a precision tuning algorithm and apparatus for accurately and quickly depositing correct amounts of charge on the floating gates of non-volatile memory cells within a vector-matrix multiplication (VMM) array in an artificial neural network are disclosed. Thus, the selected cell can be programmed very accurately to hold one of the N different values.

Drawings

Fig. 1 is a schematic diagram illustrating an artificial neural network.

Figure 2 illustrates a prior art split gate flash memory cell.

Figure 3 illustrates another prior art split gate flash memory cell.

Figure 4 illustrates another prior art split gate flash memory cell.

Figure 5 illustrates another prior art split gate flash memory cell.

FIG. 6 is a schematic diagram illustrating different stages of an exemplary artificial neural network utilizing one or more non-volatile memory arrays.

Fig. 7 is a block diagram illustrating a vector-matrix multiplication system.

FIG. 8 is a block diagram illustrating an exemplary artificial neural network utilizing one or more vector-matrix multiplication systems.

FIG. 9 illustrates another embodiment of a vector-matrix multiplication system.

FIG. 10 illustrates another embodiment of a vector-matrix multiplication system.

FIG. 11 illustrates another embodiment of a vector-matrix multiplication system.

FIG. 12 illustrates another embodiment of a vector-matrix multiplication system.

FIG. 13 illustrates another embodiment of a vector-matrix multiplication system.

FIG. 14 illustrates a prior art long short-term memory (LSTM) system.

FIG. 15 shows an exemplary unit used in a long short-term memory system.

Fig. 16 shows one embodiment of the exemplary unit of fig. 15.

Fig. 17 shows another embodiment of the exemplary unit of fig. 15.

Figure 18 shows a prior art gated recurrent unit (GRU) system.

FIG. 19 shows an exemplary cell used in a gated recurrent unit system.

Fig. 20 shows one embodiment of the exemplary unit of fig. 19.

Fig. 21 shows another embodiment of the exemplary unit of fig. 19.

FIG. 22A illustrates an embodiment of a method of programming a non-volatile memory cell.

FIG. 22B illustrates another embodiment of a method of programming a non-volatile memory cell.

FIG. 23 illustrates one embodiment of a coarse programming method.

FIG. 24 shows exemplary pulses used in programming of a non-volatile memory cell.

FIG. 25 shows exemplary pulses used in programming of a non-volatile memory cell.

FIG. 26 shows a calibration algorithm for programming a non-volatile memory cell that adjusts the programming parameters based on the slope characteristics of the cell.

Fig. 27 shows a circuit used in the calibration algorithm of fig. 26.

FIG. 28 shows a calibration algorithm for programming non-volatile memory cells.

Fig. 29 shows a circuit used in the calibration algorithm of fig. 28.

FIG. 30 shows an exemplary progression of voltages applied to the control gates of non-volatile memory cells during a program operation.

FIG. 31 shows an exemplary progression of voltages applied to the control gates of non-volatile memory cells during a program operation.

FIG. 32 shows a system for applying programming voltages during programming of non-volatile memory cells within a vector-matrix multiplication system.

Fig. 33 shows a charge summer circuit.

Fig. 34 shows a current summer circuit.

Fig. 35 shows a digital summer circuit.

Fig. 36A illustrates one embodiment of an integrating analog-to-digital converter for neuron output.

Fig. 36B shows a graph showing a change over time in the voltage output of the integrating analog-to-digital converter of fig. 36A.

Fig. 36C shows another embodiment of an integrating analog-to-digital converter for neuron output.

Fig. 36D shows a graph showing a change in voltage output with time of the integrating analog-to-digital converter of fig. 36C.

Fig. 36E shows another embodiment of an integrating analog-to-digital converter for neuron output.

Fig. 36F shows another embodiment of an integrating analog-to-digital converter for neuron output.

Figs. 37A and 37B show a successive approximation analog-to-digital converter for neuron output.

Fig. 38 illustrates one embodiment of a sigma-delta type analog-to-digital converter.

Detailed Description

The artificial neural network of the present invention utilizes a combination of CMOS technology and a non-volatile memory array.

Non-volatile memory cell

Digital non-volatile memories are well known. For example, U.S. patent 5,029,130 ("the '130 patent"), which is incorporated herein by reference, discloses an array of split gate non-volatile memory cells, which are a type of flash memory cells. Such a memory cell 210 is shown in fig. 2. Each memory cell 210 includes a source region 14 and a drain region 16 formed in a semiconductor substrate 12 with a channel region 18 therebetween. A floating gate 20 is formed over and insulated from (and controls the conductivity of) a first portion of the channel region 18 and is formed over a portion of the source region 14. A word line terminal 22 (which is typically coupled to a word line) has a first portion disposed over and insulated from (and controls the conductivity of) a second portion of the channel region 18, and a second portion extending upward and over the floating gate 20. The floating gate 20 and the word line terminal 22 are insulated from the substrate 12 by gate oxide. Bit line 24 is coupled to drain region 16.

The memory cell 210 is erased (with electrons removed from the floating gate) by placing a high positive voltage on the word line terminal 22, which causes electrons on the floating gate 20 to tunnel through the intermediate insulator from the floating gate 20 to the word line terminal 22 via Fowler-Nordheim tunneling.

Memory cell 210 (in which electrons are placed on the floating gate) is programmed by placing a positive voltage on word line terminal 22 and a positive voltage on source region 14. Electron current will flow from the source region 14 to the drain region 16. When the electrons reach the gap between the word line terminal 22 and the floating gate 20, the electrons will accelerate and heat up. Some of the heated electrons will be injected onto the floating gate 20 through the gate oxide due to electrostatic attraction from the floating gate 20.

Memory cell 210 is read by placing a positive read voltage on drain region 16 and word line terminal 22 (which turns on the portion of channel region 18 under the word line terminal). If the floating gate 20 is positively charged (i.e., electrons are erased), the portion of the channel region 18 under the floating gate 20 is also turned on and current will flow through the channel region 18, which is sensed as an erased or "1" state. If the floating gate 20 is negatively charged (i.e., programmed by electrons), the portion of the channel region under the floating gate 20 is mostly or completely turned off and no (or little) current will flow through the channel region 18, which is sensed as a programmed or "0" state.

Table 1 shows typical voltage ranges that may be applied to the terminals of memory cell 210 for performing read, erase, and program operations:

table 1: operation of flash memory cell 210 of FIG. 3

Other split gate memory cell configurations are known as other types of flash memory cells. For example, fig. 3 shows a four-gate memory cell 310 that includes a source region 14, a drain region 16, a floating gate 20 over a first portion of the channel region 18, a select gate 22 (typically coupled to a word line WL) over a second portion of the channel region 18, a control gate 28 over the floating gate 20, and an erase gate 30 over the source region 14. Such a configuration is described in U.S. patent 6,747,310, which is incorporated by reference herein for all purposes. Here, all gates, except the floating gate 20, are non-floating gates, which means that they are or can be electrically connected to a voltage source. Programming is performed by heated electrons from the channel region 18 which inject themselves into the floating gate 20. The erase is performed by electrons tunneling from the floating gate 20 to the erase gate 30.

Table 2 shows typical voltage ranges that may be applied to the terminals of memory cell 310 for performing read, erase and program operations:

table 2: operation of flash memory cell 310 of FIG. 3

Operation   WL/SG      BL      CG       EG       SL
Read        1.0-2V     0.6-2V  0-2.6V   0-2.6V   0V
Erase       -0.5V/0V   0V      0V/-8V   8-12V    0V
Program     1V         1μA     8-11V    4.5-9V   4.5-5V

Fig. 4 shows a tri-gate memory cell 410, which is another type of flash memory cell. Memory cell 410 is identical to memory cell 310 of fig. 3, except that it does not have a separate control gate. The erase operation (erasing through the erase gate) and the read operation are similar to those of fig. 3, except that no control gate bias is applied. The programming operation is likewise performed without a control gate bias, and as a result a higher voltage must be applied to the source line during the program operation to compensate for the absence of the control gate bias.

Table 3 shows typical voltage ranges that may be applied to the terminals of memory cell 410 for performing read, erase and program operations:

table 3: operation of flash memory cell 410 of FIG. 4

Operation   WL/SG      BL      EG       SL
Read        0.7-2.2V   0.6-2V  0-2.6V   0V
Erase       -0.5V/0V   0V      11.5V    0V
Program     1V         2-3μA   4.5V     7-9V

Fig. 5 shows a stacked gate memory cell 510, which is another type of flash memory cell. Memory cell 510 is similar to memory cell 210 of fig. 2, except that floating gate 20 extends over the entire channel region 18, and control gate 22 (which here will be coupled to a word line) extends over floating gate 20, separated by an insulating layer (not shown). The erase, program, and read operations operate in a similar manner as previously described for memory cell 210.

Table 4 shows typical voltage ranges that may be applied to the terminals of the memory cell 510 and the substrate 12 for performing read, erase and program operations:

table 4: operation of flash memory cell 510 of FIG. 5

Operation   CG              BL      SL    Substrate
Read        2-5V            0.6-2V  0V    0V
Erase       -8 to -10V/0V   FLT     FLT   8-10V/15-20V
Program     8-12V           3-5V    0V    0V

In order to utilize a memory array comprising one of the above types of non-volatile memory cells in an artificial neural network, two modifications are made. First, the circuitry is configured so that each memory cell can be programmed, erased, and read individually without adversely affecting the memory state of other memory cells in the array, as explained further below. Second, continuous (analog) programming of the memory cells is provided.

In particular, the memory state (i.e., the charge on the floating gate) of each memory cell in the array can be continuously changed from a fully erased state to a fully programmed state independently and with minimal disturbance to other memory cells. In another embodiment, the memory state (i.e., the charge on the floating gate) of each memory cell in the array can be continuously changed from a fully programmed state to a fully erased state, or vice versa, independently and with minimal disturbance to other memory cells. This means that the cell storage device is analog, or at least can store one of many discrete values (such as 16 or 64 different values), which allows very precise and individual tuning of all cells in the memory array, and which makes the memory array ideal for storing and fine-tuning synaptic weights for neural networks.
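Because each cell stores one of many discrete values, such as 16 or 64, a tuning target can be derived by quantizing a normalized weight to one of N levels and mapping that level to a cell current. The linear level spacing below is an illustrative assumption; an array biased in the subthreshold region would space levels logarithmically instead.

```python
def weight_to_target_current(weight, n_levels, i_min_ua, i_max_ua):
    """Quantize a weight in [0, 1] to one of n_levels discrete levels and
    return the corresponding target cell current for tuning."""
    level = round(weight * (n_levels - 1))          # nearest level, 0..N-1
    step = (i_max_ua - i_min_ua) / (n_levels - 1)   # current per level
    return i_min_ua + level * step
```

The returned current is what the program-verify loops would then use as their final threshold for the selected cell.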

Neural network employing non-volatile memory cell array

Figure 6 conceptually illustrates a non-limiting example of a neural network that utilizes a non-volatile memory array of the present embodiment. This example uses a non-volatile memory array neural network for facial recognition applications, but any other suitable application may also be implemented using a non-volatile memory array based neural network.

For this example, layer S0 is the input layer, which in this case is a 32×32-pixel RGB image with 5-bit precision (i.e., three 32×32-pixel arrays, one for each color R, G, and B, each pixel at 5-bit precision). The synapses CB1 going from input layer S0 to layer C1 apply different sets of weights in some cases and shared weights in other cases, and scan the input image with 3×3-pixel overlapping filters (kernels), shifting the filter by one pixel (or more than one pixel, as dictated by the model). Specifically, the values of 9 pixels in a 3×3 portion of the image (referred to as a filter or kernel) are provided to the synapses CB1, where these 9 input values are multiplied by the appropriate weights, and after summing the outputs of that multiplication, a single output value is determined and provided by a first synapse of CB1 for generating a pixel of one of the feature maps of layer C1. The 3×3 filter is then shifted one pixel to the right within input layer S0 (i.e., adding the column of three pixels on the right and dropping the column of three pixels on the left), whereby the 9 pixel values in this newly positioned filter are provided to the synapses CB1, where they are multiplied by the same weights and a second single output value is determined by the associated synapse. This process continues until the 3×3 filter has scanned the entire 32×32-pixel image of input layer S0, for all three colors and all bits (precision values). The process is then repeated using different sets of weights to generate a different feature map of C1, until all the feature maps of layer C1 have been computed.
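The filter scan described above is an ordinary "valid" 2-D convolution: each placement of the 3×3 kernel multiplies 9 pixels by 9 weights and sums them into one output pixel, so a 32×32 input yields a 30×30 feature map. A direct sketch (a software reference for what the array computes in analog, not the hardware itself):

```python
import numpy as np

def scan_filter(image, kernel):
    """Slide the kernel over the image one pixel at a time; at each
    position, multiply element-wise and sum to get one output pixel."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out
```

In a VMM array, the 9 multiplies and the sum at each position are performed in analog, with the kernel weights held as stored floating-gate charge.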

At layer C1, in this example, there are 16 feature maps, with 30 × 30 pixels each. Each pixel is a new feature pixel extracted from the product of the inputs and the kernel, so each feature map is a two-dimensional array, and thus in this example layer C1 consists of 16 layers of two-dimensional arrays (keeping in mind that the layers and arrays referenced herein are logical relationships, not necessarily physical relationships, i.e., the arrays are not necessarily oriented as physical two-dimensional arrays). Each of the 16 feature maps in layer C1 is generated by one of sixteen different sets of synapse weights applied to the filter scans. The C1 feature maps could all be directed to different aspects of the same image feature, such as boundary identification. For example, a first map (generated using a first set of weights, shared for all scans used to generate that first map) could identify rounded edges, a second map (generated using a second set of weights different from the first set) could identify rectangular edges, or the aspect ratio of certain features, and so on.

An activation function P1 (pooling) is applied before going from layer C1 to layer S1, which pools values from consecutive, non-overlapping 2 × 2 regions in each feature map. The purpose of the pooling function is to average out the nearby locations (or a max function can also be used), to reduce, for example, the dependence on edge locations and to reduce the data size before going to the next stage. At layer S1, there are 16 15 × 15 feature maps (i.e., sixteen different arrays of 15 × 15 pixels each). Synapses CB2 going from layer S1 to layer C2 scan the maps in S1 with 4 × 4 filters, with a filter shift of 1 pixel. At layer C2, there are 22 12 × 12 feature maps. An activation function P2 (pooling) is applied before going from layer C2 to layer S2, which pools values from consecutive, non-overlapping 2 × 2 regions in each feature map. At layer S2, there are 22 6 × 6 feature maps. An activation function (pooling) is applied at synapses CB3 going from layer S2 to layer C3, where every neuron in layer C3 connects to every map in layer S2 via a respective synapse of CB3. At layer C3, there are 64 neurons. Synapses CB4 going from layer C3 to output layer S3 fully connect C3 to S3, i.e., every neuron in layer C3 is connected to every neuron in layer S3. The output at S3 includes 10 neurons, where the highest output neuron determines the class. This output could, for example, be indicative of an identification or classification of the contents of the original image.
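The 2 × 2 pooling step above can be sketched as follows (names are illustrative):

```python
import numpy as np

def pool2x2(fmap, mode="avg"):
    """Pool non-overlapping 2 x 2 regions of a feature map.

    Averaging corresponds to the P1/P2 pooling described above;
    a max function may be used instead."""
    h, w = fmap.shape
    blocks = fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    if mode == "avg":
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))

# A 30 x 30 C1 feature map pools down to the 15 x 15 maps of layer S1.
c1 = np.random.rand(30, 30)
s1 = pool2x2(c1)
print(s1.shape)  # (15, 15)
```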

The synapses of each layer are implemented using an array or a portion of an array of non-volatile memory cells.

FIG. 7 is a block diagram of an array that can be used for this purpose. The vector-matrix multiplication (VMM) array 32 includes non-volatile memory cells and serves as the synapses between one layer and the next layer (such as CB1, CB2, CB3, and CB4 in FIG. 6). In particular, the VMM array 32 includes a non-volatile memory cell array 33, erase gate and word line gate decoders 34, control gate decoders 35, bit line decoders 36, and source line decoders 37, which decode the respective inputs for the non-volatile memory cell array 33. Inputs to the VMM array 32 may come from the erase gate and word line gate decoders 34 or from the control gate decoders 35. In this example, the source line decoder 37 also decodes the output of the non-volatile memory cell array 33. Alternatively, the bit line decoder 36 may decode the output of the non-volatile memory cell array 33.

The non-volatile memory cell array 33 serves two purposes. First, it stores the weights to be used by the VMM array 32. Second, the non-volatile memory cell array 33 effectively multiplies the inputs by the weights stored in the non-volatile memory cell array 33 and adds them per output line (source line or bit line) to produce an output that will be the input of the next layer or the input of the final layer. By performing multiply and add functions, the non-volatile memory cell array 33 eliminates the need for separate multiply and add logic circuits, and is also power efficient due to its in-situ memory computation.
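This in-array multiply-and-add can be modeled as a matrix-vector product, as in the following toy illustration (array sizes and values are arbitrary):

```python
import numpy as np

# Hypothetical weight matrix: each row is one input line, each column one
# output line (bit line or source line); entries model stored cell weights.
W = np.array([[0.2, 0.5],
              [0.1, 0.3],
              [0.4, 0.9]])
x = np.array([1.0, 0.5, 2.0])   # input levels applied to the rows

# In-array multiply-and-add: each output line sums input * weight over
# all cells connected to it, with no separate multiplier/adder logic.
out = x @ W
print(out)  # [1.05 2.45]
```

Each element of `out` becomes an input to the next layer, exactly as the per-output-line summation described above.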

The output of the array of non-volatile memory cells 33 is provided to a differential summer (such as a summing operational amplifier or a summing current mirror) 38, which sums the output of the array of non-volatile memory cells 33 to create a single value for the convolution. The differential summer 38 is arranged for performing a summation of the positive and negative weights.

The output values of the differential summers 38 are then summed and provided to an activation function circuit 39, which modifies the output. The activation function circuit 39 may provide a sigmoid, tanh, or ReLU function. The modified output values of the activation function circuit 39 become elements of the feature map as a next layer (e.g., layer C1 in fig. 6) and are then applied to the next synapse to produce a next feature map layer or final layer. Thus, in this example, the non-volatile memory cell array 33 constitutes a plurality of synapses (which receive their inputs from an existing neuron layer or from an input layer such as an image database), and the summing op-amp 38 and the activation function circuit 39 constitute a plurality of neurons.

The inputs to the VMM array 32 in fig. 7 (WLx, EGx, CGx, and optionally BLx and SLx) may be analog levels, binary levels, or digital bits (in which case, a DAC is provided to convert digital bits to the appropriate input analog levels), and the outputs may be analog levels, binary levels, or digital bits (in which case, an output ADC is provided to convert output analog levels to digital bits).

Fig. 8 is a block diagram illustrating the use of multiple layers of VMM arrays 32 (labeled here as VMM arrays 32a, 32b, 32c, 32d, and 32e). As shown in fig. 8, the input (denoted as Inputx) is converted from digital to analog by a digital-to-analog converter 31 and provided to an input VMM array 32a. The converted analog input may be a voltage or a current. The input D/A conversion of the first layer may be done by using a LUT (look-up table) or a function that maps the input Inputx to the appropriate analog levels for the matrix multipliers of the input VMM array 32a. Input conversion may also be accomplished by an analog-to-analog (A/A) converter that converts an external analog input into a mapped analog input for the input VMM array 32a.

The output produced by the input VMM array 32a is provided as input to the next VMM array (hidden level 1) 32b, which in turn generates output provided as input to the next VMM array (hidden level 2) 32c, and so on. Each layer of the VMM array 32 serves as a distinct layer of synapses and neurons for a Convolutional Neural Network (CNN). Each VMM array 32a, 32b, 32c, 32d, and 32e may be a separate physical non-volatile memory array, or multiple VMM arrays may utilize different portions of the same non-volatile memory array, or multiple VMM arrays may utilize overlapping portions of the same physical non-volatile memory array. The example shown in fig. 8 comprises five layers (32a, 32b, 32c, 32d, 32e): one input layer (32a), two hidden layers (32b, 32c), and two fully connected layers (32d, 32e). Those of ordinary skill in the art will appreciate that this is merely exemplary and that, instead, a system may include more than two hidden layers and more than two fully connected layers.

Vector-matrix multiplication (VMM) array

FIG. 9 illustrates a neuron VMM array 900 that is particularly suited for use with the memory cell 310 shown in FIG. 3 and serves as a synapse and component for neurons between an input layer and a next layer. The VMM array 900 includes a memory array 901 of non-volatile memory cells and a reference array 902 of non-volatile reference memory cells (at the top of the array). Alternatively, another reference array may be placed at the bottom.

In the VMM array 900, control gate lines (such as control gate line 903) extend in a vertical direction (so the reference array 902 is orthogonal to the control gate line 903 in the row direction), and erase gate lines (such as erase gate line 904) extend in a horizontal direction. Here, the inputs of the VMM array 900 are placed on control gate lines (CG0, CG1, CG2, CG3), and the outputs of the VMM array 900 appear on source lines (SL0, SL 1). In one embodiment, only even rows are used, and in another embodiment, only odd rows are used. The current placed on each source line (SL0, SL1, respectively) performs a summation function of all currents from the memory cells connected to that particular source line.

As described herein for neural networks, the non-volatile memory cells of VMM array 900 (i.e., the flash memory of VMM array 900) are preferably configured to operate in a sub-threshold region.

Biasing the non-volatile reference memory cell and the non-volatile memory cell described herein in weak inversion:

Ids = Io * e^((Vg-Vth)/(k*Vt)) = w * Io * e^(Vg/(k*Vt))

where w = e^(-Vth/(k*Vt))

For an I-to-V logarithmic converter that uses a memory cell (such as a reference memory cell or a peripheral memory cell) or a transistor to convert an input current into an input voltage:

Vg = k * Vt * log[Ids/(wp * Io)]

Here, wp is the w of the reference memory cell or peripheral memory cell.

For a memory array used as a vector-matrix multiplier VMM array, the output current is:

Iout = wa * Io * e^(Vg/(k*Vt)), that is

Iout = (wa/wp) * Iin = W * Iin

W = e^((Vthp-Vtha)/(k*Vt))

Here, wa is the w of each memory cell in the memory array.

The word line or control gate may be used as an input to the memory cell for an input voltage.
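A numeric sketch of the weak-inversion relations above (Io, the slope factor k, and the thermal voltage Vt are illustrative values, not figures from the disclosure):

```python
import math

def subthreshold_current(Vg, Vth, Io=1e-12, k=1.5, Vt=0.026):
    """Weak-inversion cell current: Ids = Io * exp((Vg - Vth) / (k * Vt))."""
    return Io * math.exp((Vg - Vth) / (k * Vt))

# The stored weight appears as w = exp(-Vth / (k*Vt)): programming a higher
# Vth (more floating-gate charge) gives a smaller effective weight. The
# array-to-reference current ratio then realizes
# W = wa / wp = exp((Vthp - Vtha) / (k*Vt)).
Ia = subthreshold_current(Vg=1.0, Vth=0.7)   # array cell, Vtha = 0.7 V
Ip = subthreshold_current(Vg=1.0, Vth=0.8)   # reference cell, Vthp = 0.8 V
W = Ia / Ip
print(W)  # equals exp((0.8 - 0.7) / (1.5 * 0.026))
```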

Alternatively, the flash memory cells of the VMM array described herein may be configured to operate in the linear region:

Ids = β * (Vgs-Vth) * Vds;  β = u * Cox * W/L

W = α * (Vgs-Vth)

a word line or control gate or bit line or source line may be used as an input to a memory cell operating in the linear region.

For an I-V linear converter, a memory cell (e.g., a reference memory cell or a peripheral memory cell) or transistor operating in the linear region may be used to linearly convert an input/output current to an input/output voltage.

Other embodiments of VMM array 32 of fig. 7 are described in U.S. patent application No. 15/826,345, which is incorporated herein by reference. As described herein, a source line or bit line may be used as the neuron output (current summation output).

FIG. 10 illustrates a neuron VMM array 1000 that is particularly suited for use in the memory cell 210 shown in FIG. 2 and serves as a synapse between an input layer and a next layer. The VMM array 1000 includes a memory array of non-volatile memory cells 1003, a reference array of first non-volatile reference memory cells 1001, and a reference array of second non-volatile reference memory cells 1002. The reference arrays 1001 and 1002 arranged in the column direction of the array are used to convert the current inputs flowing into the terminals BLR0, BLR1, BLR2 and BLR3 into voltage inputs WL0, WL1, WL2 and WL 3. In practice, the first and second non-volatile reference memory cells are diode-connected through a multiplexer 1014 (only partially shown) into which a current input flows. The reference cell is tuned (e.g., programmed) to a target reference level. The target reference level is provided by a reference microarray matrix (not shown).

The memory array 1003 serves two purposes. First, it stores the weights that VMM array 1000 will use on its corresponding memory cells. Second, memory array 1003 effectively multiplies the inputs (i.e., the current inputs provided in terminals BLR0, BLR1, BLR2, and BLR3, which are converted to input voltages by reference arrays 1001 and 1002 to be provided to word lines WL0, WL1, WL2, and WL3) by the weights stored in memory array 1003, and then adds all the results (memory cell currents) to produce an output on the corresponding bit line (BL0-BLN) that will be the input of the next layer or the input of the final layer. By performing the multiply and add functions, memory array 1003 eliminates the need for separate multiply and add logic circuits and is also power efficient. Here, voltage inputs are provided on the word lines (WL0, WL1, WL2, and WL3), and outputs appear on the respective bit lines (BL0-BLN) during a read (infer) operation. The current placed on each of the bit lines BL0-BLN performs a summation function of the currents from all the non-volatile memory cells connected to that particular bit line.

Table 5 shows the operating voltages for the VMM array 1000. The columns in the table indicate the voltages placed on the word line for the selected cell, the word lines for the unselected cells, the bit line for the selected cell, the bit lines for the unselected cells, the source line for the selected cell, and the source line for the unselected cells. The rows indicate read, erase, and program operations.

Table 5: operation of the VMM array 1000 of FIG. 10

Operation    WL            WL-unselected    BL                  BL-unselected       SL       SL-unselected
Read         1-3.5V        -0.5V/0V         0.6-2V (Ineuron)    0.6V-2V/0V          0V       0V
Erase        about 5-13V   0V               0V                  0V                  0V       0V
Program      1-2V          -0.5V/0V         0.1-3uA             Vinh, about 2.5V    4-10V    0-1V/FLT

FIG. 11 illustrates a neuron VMM array 1100, which is particularly suited for the memory cell 210 shown in FIG. 2, and serves as the synapses and components for neurons between an input layer and a next layer. The VMM array 1100 comprises a memory array 1103 of non-volatile memory cells, a reference array 1101 of first non-volatile reference memory cells, and a reference array 1102 of second non-volatile reference memory cells. The reference arrays 1101 and 1102 extend in the row direction of the VMM array 1100. The VMM array is similar to VMM 1000, except that in VMM array 1100 the word lines extend in the vertical direction. Here, the inputs are provided on the word lines (WLA0, WLB0, WLA1, WLB1, WLA2, WLB2, WLA3, WLB3), and the output appears on the source lines (SL0, SL1) during a read operation. The current placed on each source line performs a summing function of all the currents from the memory cells connected to that particular source line.

Table 6 shows the operating voltages for the VMM array 1100. The columns in the table indicate the voltages placed on the word line for the selected cell, the word lines for the unselected cells, the bit line for the selected cell, the bit lines for the unselected cells, the source line for the selected cell, and the source line for the unselected cells. The rows indicate read, erase, and program operations.

Table 6: operation of the VMM array 1100 of FIG. 11

Operation    WL            WL-unselected    BL         BL-unselected       SL                        SL-unselected
Read         1-3.5V        -0.5V/0V         0.6-2V     0.6V-2V/0V          about 0.3-1V (Ineuron)    0V
Erase        about 5-13V   0V               0V         0V                  0V                        SL-inhibit (about 4-8V)
Program      1-2V          -0.5V/0V         0.1-3uA    Vinh, about 2.5V    4-10V                     0-1V/FLT

FIG. 12 illustrates a neuron VMM array 1200, which is particularly suited for use with the memory cell 310 shown in FIG. 3, and serves as a synapse and component for neurons between an input layer and a next layer. The VMM array 1200 includes a memory array 1203 of non-volatile memory cells, a reference array 1201 of first non-volatile reference memory cells, and a reference array 1202 of second non-volatile reference memory cells. The reference arrays 1201 and 1202 are used to convert the current inputs flowing into the terminals BLR0, BLR1, BLR2, and BLR3 into voltage inputs CG0, CG1, CG2, and CG 3. In practice, the first and second non-volatile reference memory cells are diode connected through a multiplexer 1212 (only partially shown) with current inputs flowing therein through BLR0, BLR1, BLR2, and BLR 3. The multiplexers 1212 each include a respective multiplexer 1205 and cascode transistor 1204 to ensure a constant voltage on the bit line (such as BLR0) of each of the first and second non-volatile reference memory cells during a read operation. The reference cell is tuned to a target reference level.

The memory array 1203 serves two purposes. First, it stores the weights to be used by the VMM array 1200. Second, memory array 1203 effectively multiplies the inputs (current inputs provided to terminals BLR0, BLR1, BLR2, and BLR3, which reference arrays 1201 and 1202 convert to input voltages provided to control gates CG0, CG1, CG2, and CG3) by the weights stored in the memory array, and then adds all the results (cell currents) to produce an output that appears at BL0-BLN and will be the input of the next layer or the input of the final layer. By performing the multiply and add functions, the memory array eliminates the need for separate multiply and add logic circuits and is also power efficient. Here, inputs are provided on control gate lines (CG0, CG1, CG2, and CG3), and outputs appear on bit lines (BL0-BLN) during read operations. The currents placed on each bit line perform a summation function of all currents from the memory cells connected to that particular bit line.

The VMM array 1200 implements unidirectional tuning for the non-volatile memory cells in the memory array 1203. That is, each non-volatile memory cell is erased and then partially programmed until the desired charge on the floating gate is reached. This may be performed, for example, using a novel precision programming technique described below. If too much charge is placed on the floating gate (so that the wrong value is stored in the cell), the cell must be erased and the sequence of partial program operations must be restarted. As shown, two rows sharing the same erase gate (such as EG0 or EG1) need to be erased together (which is referred to as page erase) and, thereafter, each cell is partially programmed until the desired charge on the floating gate is reached.

Table 7 shows the operating voltages for the VMM array 1200. The columns in the table indicate the voltages placed on the word line for the selected cell, the word lines for the unselected cells, the bit lines for the selected cell, the bit lines for the unselected cells, the control gate for the selected cell, the control gate for the unselected cells in the same sector as the selected cell, the control gate for the unselected cells in a different sector than the selected cell, the erase gate for the unselected cells, the source line for the selected cell, the source line for the unselected cells. The rows indicate read, erase, and program operations.

Table 7: operation of the VMM array 1200 of FIG. 12

FIG. 13 illustrates a neuron VMM array 1300, which is particularly suited for the memory cell 310 shown in FIG. 3, and serves as the synapses and components for neurons between an input layer and a next layer. The VMM array 1300 comprises a memory array 1303 of non-volatile memory cells, a reference array 1301 of first non-volatile reference memory cells, and a reference array 1302 of second non-volatile reference memory cells. EG lines EGR0, EG0, EG1, and EGR1 extend vertically, while CG lines CG0, CG1, CG2, and CG3 and WL lines WL0, WL1, WL2, and WL3 extend horizontally. VMM array 1300 is similar to VMM array 1200, except that VMM array 1300 implements bidirectional tuning, where each individual cell can be completely erased, partially programmed, and partially erased as needed to reach the desired amount of charge on the floating gate, owing to the use of separate EG lines. As shown, reference arrays 1301 and 1302 convert the input currents in terminals BLR0, BLR1, BLR2, and BLR3 into control gate voltages CG0, CG1, CG2, and CG3 (through the action of diode-connected reference cells via multiplexers 1314), to be applied to the memory cells in the row direction. The current outputs (neurons) are in bit lines BL0-BLN, where each bit line sums all currents from the non-volatile memory cells connected to that particular bit line.

Table 8 shows the operating voltages for the VMM array 1300. The columns in the table indicate the voltages placed on the word line for the selected cell, the word lines for the unselected cells, the bit lines for the selected cell, the bit lines for the unselected cells, the control gate for the selected cell, the control gate for the unselected cells in the same sector as the selected cell, the control gate for the unselected cells in a different sector than the selected cell, the erase gate for the unselected cells, the source line for the selected cell, the source line for the unselected cells. The rows indicate read, erase, and program operations.

Table 8: operation of the VMM array 1300 of FIG. 13

Long short-term memory

The prior art includes a concept known as long short-term memory (LSTM). LSTM units are often used in neural networks. An LSTM allows a neural network to remember information over predetermined arbitrary time intervals and to use that information in subsequent operations. A conventional LSTM unit comprises a cell, an input gate, an output gate, and a forget gate. The three gates regulate the flow of information into and out of the cell, and the time interval over which information is remembered in the LSTM. VMMs are particularly useful in LSTM units.

Fig. 14 shows an exemplary LSTM 1400. In this example, LSTM 1400 comprises cells 1401, 1402, 1403, and 1404. Cell 1401 receives input vector x0 and generates output vector h0 and cell state vector c0. Cell 1402 receives input vector x1, the output vector (hidden state) h0 from cell 1401, and the cell state c0 from cell 1401, and generates output vector h1 and cell state vector c1. Cell 1403 receives input vector x2, the output vector (hidden state) h1 from cell 1402, and the cell state c1 from cell 1402, and generates output vector h2 and cell state vector c2. Cell 1404 receives input vector x3, the output vector (hidden state) h2 from cell 1403, and the cell state c2 from cell 1403, and generates output vector h3. Additional cells may be used; an LSTM with four cells is merely an example.

Fig. 15 shows an exemplary implementation of LSTM unit 1500 that may be used for units 1401, 1402, 1403, and 1404 in fig. 14. LSTM unit 1500 receives input vector x (t), unit state vector c (t-1) from the previous unit, and output vector h (t-1) from the previous unit, and generates unit state vector c (t) and output vector h (t).

LSTM unit 1500 includes sigmoid function devices 1501, 1502, and 1503, each applying a number between 0 and 1 to control how much of each component in the input vector is allowed to pass through to the output vector. The LSTM unit 1500 further comprises tanh devices 1504 and 1505 for applying a hyperbolic tangent function to the input vectors, multiplier devices 1506, 1507 and 1508 for multiplying the two vectors together, and an adding device 1509 for adding the two vectors together. The output vector h (t) may be provided to the next LSTM unit in the system, or it may be accessed for other purposes.
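The data flow of LSTM unit 1500 can be sketched in NumPy as follows (the weight matrices, sizes, and the mapping of gates to the numbered devices are illustrative assumptions; biases are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, Wf, Wi, Wu, Wo):
    """One step of an LSTM cell; each matrix acts on [h(t-1), x(t)]."""
    v = np.concatenate([h_prev, x_t])
    f = sigmoid(Wf @ v)       # forget gate (a sigmoid device)
    i = sigmoid(Wi @ v)       # input gate (a sigmoid device)
    u = np.tanh(Wu @ v)       # candidate values (a tanh device)
    o = sigmoid(Wo @ v)       # output gate (a sigmoid device)
    c = f * c_prev + i * u    # multipliers and adder combine the vectors
    h = o * np.tanh(c)        # second tanh device and a multiplier
    return h, c

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
Ws = [rng.standard_normal((n_h, n_h + n_x)) for _ in range(4)]
h, c = lstm_cell(rng.standard_normal(n_x), np.zeros(n_h), np.zeros(n_h), *Ws)
print(h.shape, c.shape)  # (4,) (4,)
```

In a VMM-based implementation, each `W @ v` product above is the operation carried out inside a VMM array.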

Fig. 16 shows an LSTM unit 1600, which is an example of an implementation of LSTM unit 1500. For the reader's convenience, the same numbering as in LSTM unit 1500 is used in LSTM unit 1600. Sigmoid function devices 1501, 1502, and 1503 and tanh device 1504 each comprise multiple VMM arrays 1601 and activation function blocks 1602. Thus, it can be seen that VMM arrays are particularly useful in LSTM units used in certain neural network systems.

An alternative form of LSTM unit 1600 (and another example of an implementation of LSTM unit 1500) is shown in fig. 17. In fig. 17, sigmoid function devices 1501, 1502, and 1503 and tanh device 1504 share the same physical hardware (VMM array 1701 and activation function block 1702) in a time-multiplexed fashion. LSTM unit 1700 further comprises multiplier device 1703 to multiply two vectors together, adding device 1708 to add two vectors together, tanh device 1505 (which comprises activation circuit block 1702), register 1707 to store the value i(t) when i(t) is output from sigmoid function block 1702, register 1704 to store the value f(t)*c(t-1) when that value is output from multiplier device 1703 through multiplexer 1710, register 1705 to store the value i(t)*u(t) when that value is output from multiplier device 1703 through multiplexer 1710, register 1706 to store the value o(t)*c(t) when that value is output from multiplier device 1703 through multiplexer 1710, and multiplexer 1709.

LSTM unit 1600 contains multiple sets of VMM arrays 1601 and respective activation function blocks 1602, whereas LSTM unit 1700 contains only one set of VMM arrays 1701 and activation function blocks 1702, which are used to represent multiple layers in the embodiment of LSTM unit 1700. LSTM unit 1700 will require less space than LSTM unit 1600, since LSTM unit 1700 requires only 1/4 as much space for VMMs and activation function blocks compared to LSTM unit 1600.

It will also be appreciated that an LSTM unit will typically include multiple VMM arrays, each requiring functionality provided by certain circuit blocks outside the VMM array, such as the summer and activation circuit blocks and the high voltage generation block. Providing separate circuit blocks for each VMM array would require a large amount of space within the semiconductor device and would be somewhat inefficient. Thus, the embodiments described below attempt to minimize the circuitry required beyond the VMM array itself.

Gated recurrent unit

An analog VMM implementation can be utilized for a gated recurrent unit (GRU) system. GRUs are a gating mechanism in recurrent neural networks. GRUs are similar to LSTMs, except that GRU cells generally contain fewer components than an LSTM cell.

Fig. 18 shows an exemplary GRU 1800. The GRU 1800 in this example comprises cells 1801, 1802, 1803, and 1804. Cell 1801 receives input vector x0 and generates output vector h0. Cell 1802 receives input vector x1 and the output vector h0 from cell 1801, and generates output vector h1. Cell 1803 receives input vector x2 and the output vector (hidden state) h1 from cell 1802, and generates output vector h2. Cell 1804 receives input vector x3 and the output vector (hidden state) h2 from cell 1803, and generates output vector h3. Additional cells may be used; a GRU with four cells is merely an example.

Fig. 19 shows an exemplary implementation of a GRU unit 1900 that may be used for the units 1801, 1802, 1803 and 1804 of fig. 18. GRU unit 1900 receives an input vector x (t) and an output vector h (t-1) from a previous GRU unit and generates an output vector h (t). GRU unit 1900 includes sigmoid function devices 1901 and 1902, each applying a number between 0 and 1 to the components from output vector h (t-1) and input vector x (t). The GRU unit 1900 further includes a tanh device 1903 for applying a hyperbolic tangent function to an input vector, a plurality of multiplier devices 1904, 1905 and 1906 for multiplying two vectors together, an adding device 1907 for adding the two vectors together, and a complementary device 1908 for subtracting the input from 1 to generate an output.
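The data flow of GRU unit 1900 can be sketched in NumPy as follows (the weight matrices and sizes are illustrative assumptions; biases are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x_t, h_prev, Wz, Wr, Wh):
    """One step of a GRU cell acting on [h(t-1), x(t)]."""
    v = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ v)                       # update gate (sigmoid device)
    r = sigmoid(Wr @ v)                       # reset gate (sigmoid device)
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))  # tanh device
    # complementary device computes (1 - z); multipliers and the adder
    # blend the candidate state with the previous state
    return (1.0 - z) * h_cand + z * h_prev

rng = np.random.default_rng(1)
n_h, n_x = 4, 3
Wz, Wr, Wh = (rng.standard_normal((n_h, n_h + n_x)) for _ in range(3))
h = gru_cell(rng.standard_normal(n_x), np.zeros(n_h), Wz, Wr, Wh)
print(h.shape)  # (4,)
```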

Fig. 20 shows a GRU unit 2000, which is an example of a specific implementation of a GRU unit 1900. For the reader's convenience, the same numbering is used in the GRU unit 2000 as in the GRU unit 1900. As can be seen in fig. 20, sigmoid function devices 1901 and 1902 and tanh device 1903 each include multiple VMM arrays 2001 and an activation function block 2002. Thus, it can be seen that VMM arrays are particularly useful in GRU units used in certain neural network systems.

An alternative form of the GRU unit 2000 (and another example of an implementation of the GRU unit 1900) is shown in fig. 21. In fig. 21, the GRU unit 2100 utilizes a VMM array 2101 and an activation function block 2102 which, when configured as a sigmoid function, applies a number between 0 and 1 to control how much of each component in the input vector is allowed through to the output vector. In fig. 21, sigmoid function devices 1901 and 1902 and tanh device 1903 share the same physical hardware (VMM array 2101 and activation function block 2102) in a time-multiplexed fashion. The GRU unit 2100 further comprises multiplier device 2103 to multiply two vectors together, adding device 2105 to add two vectors together, complementary device 2109 to subtract an input from 1 to generate an output, multiplexer 2104, register 2106 to hold the value h(t-1)*r(t) when that value is output from multiplier device 2103 through multiplexer 2104, register 2107 to hold the value h(t-1)*z(t) when that value is output from multiplier device 2103 through multiplexer 2104, and register 2108 to hold the value h^(t)*(1-z(t)) when that value is output from multiplier device 2103 through multiplexer 2104.

The GRU unit 2000 contains multiple sets of VMM arrays 2001 and activation function blocks 2002, whereas the GRU unit 2100 contains only one set of VMM arrays 2101 and activation function blocks 2102, which are used to represent multiple layers in the embodiment of the GRU unit 2100. GRU unit 2100 will require less space than GRU unit 2000, since GRU unit 2100 requires only 1/3 as much space for VMMs and activation function blocks compared to GRU unit 2000.

It will also be appreciated that a GRU system will typically include multiple VMM arrays, each requiring functionality provided by certain circuit blocks outside the VMM array, such as the summer and activation circuit blocks and the high voltage generation block. Providing separate circuit blocks for each VMM array would require a large amount of space within the semiconductor device and would be somewhat inefficient. Thus, the embodiments described below attempt to minimize the circuitry required beyond the VMM array itself.

The input to the VMM array may be an analog level, a binary level, or a digital bit (in which case a DAC is required to convert the digital bit to the appropriate input analog level), and the output may be an analog level, a binary level, or a digital bit (in which case an output ADC is required to convert the output analog level to a digital bit).

For each memory cell in a VMM array, each weight w can be implemented by a single memory cell, by a differential cell, or by two hybrid memory cells (average of 2 cells). In the differential cell case, two memory cells are needed to implement a weight w as a differential weight (w = w+ - w-). In the two hybrid memory cells case, two memory cells are needed to implement a weight w as an average of the two cells.
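A minimal sketch of the two two-cell weight realizations above (function names are illustrative; cell currents are non-negative, which is why a differential pair is needed to encode a signed weight):

```python
def differential_weight(w_plus, w_minus):
    """Differential cell: w = (w+) - (w-), allowing signed weights."""
    return w_plus - w_minus

def hybrid_weight(w_a, w_b):
    """Two hybrid memory cells: w is the average of the 2 cells."""
    return (w_a + w_b) / 2.0

print(differential_weight(0.8, 0.3))  # approximately 0.5
print(hybrid_weight(0.8, 0.3))        # approximately 0.55
```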

Implementation for precise programming of cells in a VMM

Fig. 22A shows a programming method 2200. First, the method starts (step 2201), which typically occurs in response to a program command being received. Next, a mass program operation programs all cells to the "0" state (step 2202). A soft erase operation then erases all cells to an intermediate weakly erased level, such that each cell would draw current of approximately 3-5 μA during a read operation (step 2203). This is in contrast to a deeply erased level, where each cell would draw current of approximately 20-30 μA during a read operation. Then, hard programming is performed on all unselected cells to a very deep programmed state, to add electrons to the floating gates of the cells (step 2204) and to ensure that those cells are truly "off," meaning that those cells will draw a negligible amount of current during a read operation.

The coarse programming method is then performed on the selected cells (step 2205), followed by the fine programming method performed on the selected cells (step 2206) to program the precise values required for each selected cell.

FIG. 22B shows another programming method 2210, which is similar to programming method 2200. However, instead of a program operation to program all cells to the "0" state as in step 2202 of FIG. 22A, after the method starts (step 2201), an erase operation is used to erase all cells to the "1" state (step 2212). A soft program operation (step 2213) is then used to program all cells to an intermediate state (level), such that each cell would draw current of approximately 3-5 μA during a read operation. Afterward, the coarse and fine programming methods follow as shown in FIG. 22A. A variation of the embodiment of FIG. 22B would remove the soft programming operation (step 2213) altogether.

FIG. 23 illustrates a first embodiment of coarse programming method 2205, which is a search and execute method 2300. First, a lookup table search is performed to determine a coarse target current value (I_CT) for the selected cell based on the value intended to be stored in that cell (step 2301). Assume that the selected cell can be programmed to store one of N possible values (e.g., 128, 64, 32, etc.). Each of the N values corresponds to a different desired current value (I_D) drawn by the selected cell during a read operation. In one embodiment, the lookup table contains M possible current values to use as the coarse target current value I_CT during search and execute method 2300, where M is an integer less than N. For example, if N is 8, then M might be 4, meaning there are 8 possible values that the selected cell can store, and one of 4 coarse target current values will be selected as the coarse target for search and execute method 2300. That is, search and execute method 2300 (which is one embodiment of coarse programming method 2205) is intended to quickly program the selected cell to a value (I_CT) that is reasonably close to the desired value (I_D), after which precision programming method 2206 programs the selected cell much more accurately, very close to the desired value (I_D).

For the simple example of N = 8 and M = 4, examples of cell values, desired current values, and coarse target current values are shown in Tables 9 and 10:

table 9: example of N desired current values when N is 8

The value stored in the selected cell Desired current value (I)D)
000 100pA
001 200pA
010 300pA
011 400pA
100 500pA
101 600pA
110 700pA
111 800pA

Table 10: example of M target current values when M is 4

Coarse target Current value (I)CT) Associated cell value
200pA+ICTOFFSET1 000,001
400pA+ICTOFFSET2 010,011
600pA+ICTOFFSET3 100,101
800pA+ICTOFFSET4 110,111

The offset values ICTOFFSETx are used to prevent the desired current value from being overshot during coarse tuning.
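As an illustrative sketch only, the lookup of step 2301 for the N = 8, M = 4 example above can be expressed as follows. The 20 pA offset and the value-pairing rule are assumptions chosen to mirror Tables 9 and 10, not values mandated by the text:

```python
# Hypothetical sketch of the coarse-target lookup for N = 8, M = 4.
ICT_OFFSET = 20e-12  # 20 pA guard band; an assumed, illustrative value

# Desired read current I_D per 3-bit cell value, in amperes (Table 9).
DESIRED_CURRENT = {v: (v + 1) * 100e-12 for v in range(8)}

def coarse_target(value):
    """Map a 3-bit cell value to its shared coarse target current I_CT.

    Pairs of values (000,001), (010,011), ... share one coarse target,
    taken at the higher desired current of the pair plus an offset,
    mirroring the grouping of Table 10.
    """
    pair_top = (value // 2) * 2 + 1
    return DESIRED_CURRENT[pair_top] + ICT_OFFSET
```

For example, values 000 and 001 both map to 200 pA + ICT_OFFSET, matching the first row of Table 10.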

Once the coarse target current value I_CT is selected, the selected cell is programmed (step 2302) by applying a voltage v0 to the appropriate terminal of the selected cell, based on the cell architecture type of the selected cell (e.g., memory cell 210, 310, 410, or 510). If the selected cell is a memory cell of the type of memory cell 310 in FIG. 3, then the voltage v0 will be applied to control gate terminal 28, and v0 might be 5 V–7 V depending on the coarse target current value I_CT. The value of v0 optionally can be determined from a lookup table of v0 voltages versus coarse target current values I_CT.

Next, the selected cell is programmed by applying a voltage v_i = v_(i-1) + v_increment, where i starts at 1 and is incremented each time this step is repeated, and v_increment is a small voltage that yields a degree of programming appropriate for the desired granularity of change (step 2303). Thus, the first time step 2303 is performed, i = 1, and v_1 will be v_0 + v_increment. A verify operation is then performed (step 2304), in which a read operation is performed on the selected cell and the current drawn by the selected cell (Icell) is measured. If Icell is less than or equal to I_CT (which here is a first threshold), then search and execute method 2300 is complete and precision programming method 2206 can begin. If Icell is not less than or equal to I_CT, then step 2303 is repeated and i is incremented.
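The program-and-verify loop of steps 2303–2304 can be sketched as follows. `program_pulse` and `read_current` are hypothetical hardware hooks standing in for the actual programming and verify-read circuitry, not part of any described circuit:

```python
# Hedged sketch of the search-and-execute coarse loop (steps 2303-2304).
def coarse_program(cell, v0, v_increment, i_ct, max_steps=100):
    """Raise the program voltage by v_increment until the verify read
    shows the cell current at or below the coarse target I_CT.
    Returns the final programming voltage v_i."""
    v = v0
    for _ in range(max_steps):
        v += v_increment                 # v_i = v_(i-1) + v_increment
        cell.program_pulse(v)            # apply one program pulse at v_i
        if cell.read_current() <= i_ct:  # verify (step 2304)
            return v                     # final voltage handed to fine tuning
    raise RuntimeError("coarse target not reached")
```

The returned voltage is the starting point for the precision programming phase described below in the text.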

Thus, at the point when coarse programming method 2205 ends and precision programming method 2206 begins, the voltage v_i will be the last voltage used to program the selected cell, and the selected cell will store a value associated with the coarse target current value I_CT. The goal of precision programming method 2206 is to program the selected cell to the point where, during a read operation, it draws a current I_D (plus or minus an acceptable amount of deviation, such as 50 pA or less), where I_D is the desired current value associated with the value intended to be stored in the selected cell.

FIG. 24 shows an example of different voltage progressions that may be applied to the control gates of selected memory cells during a precision programming method 2206.

Under a first approach, increasing voltages are applied in progression to further program the selected memory cell. The starting point is v_i, the last voltage applied during coarse programming method 2205. An increment v_p1 is added to that voltage, and the resulting voltage is then used to program the selected cell (indicated by the second pulse from the left in progression 2401). v_p1 is an increment that is smaller than v_increment (the voltage increment used during coarse programming method 2205). After each programming voltage is applied, a verify step is performed (similar to step 2304), in which it is determined whether Icell is less than or equal to I_PT1 (which is the first precision target current value and here a second threshold), where I_PT1 = I_D + IPT1OFFSET, and IPT1OFFSET is an offset value added to prevent program overshoot. If Icell is not less than or equal to I_PT1, then another increment v_p1 is added to the previously applied programming voltage, and the process is repeated. Once Icell is less than or equal to I_PT1, that portion of the programming sequence stops. Optionally, if I_PT1 equals I_D, or nearly equals I_D with sufficient precision, then the selected memory cell has been successfully programmed.

If I_PT1 is not close enough to I_D, then further programming at a smaller granularity can occur. Here, progression 2402 is now used. The starting point for progression 2402 is the last voltage used for programming under progression 2401. An increment of v_p2 (which is smaller than v_p1) is added to that voltage, and the combined voltage is applied to program the selected memory cell. After each programming voltage is applied, a verify step is performed (similar to step 2304), in which it is determined whether Icell is less than or equal to I_PT2 (which is the second precision target current value and here a third threshold), where I_PT2 = I_D + IPT2OFFSET, and IPT2OFFSET is an offset value added to prevent program overshoot. If Icell is not less than or equal to I_PT2, then another increment v_p2 is added to the previously applied programming voltage, and the process is repeated. Once Icell is less than or equal to I_PT2, that portion of the programming sequence stops. Here, it is assumed that I_PT2 equals I_D, or is close enough to I_D, that programming can stop because the target value has been achieved with sufficient precision. One of ordinary skill in the art will appreciate that additional progressions can be applied, using smaller and smaller programming increments. For example, in FIG. 25, three progressions (2501, 2502, and 2503) are applied instead of just two.
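The shrinking-increment progressions (2401, 2402, and optionally more) can be sketched as one loop run repeatedly with a smaller voltage step and a tighter target each time. As before, the hardware hooks `program_pulse` and `read_current` are hypothetical:

```python
# Illustrative sketch of the multi-stage precision progression.
def precision_program(cell, v_start, stages, max_steps=100):
    """stages: list of (v_increment, target_current) pairs, coarsest
    first, e.g. [(v_p1, I_PT1), (v_p2, I_PT2)]. Each stage runs a
    program-verify loop until its target current is reached.
    Returns the final programming voltage."""
    v = v_start
    for v_inc, i_target in stages:
        for _ in range(max_steps):
            if cell.read_current() <= i_target:
                break                    # this stage's target reached
            v += v_inc                   # smaller step than the prior stage
            cell.program_pulse(v)
    return v
```

Adding a third `(v_p3, I_PT3)` entry to `stages` reproduces the three-progression example of FIG. 25.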

A second approach is shown in progression 2403. Here, instead of increasing the voltages applied during programming of the selected memory cell, the same voltage is applied for pulses of increasing duration. Instead of adding an incremental voltage such as v_p1 in progression 2401 and v_p2 in progression 2402, an additional increment of time t_p1 is added to each programming pulse, such that each applied pulse is longer than the previously applied pulse by t_p1. After each programming pulse is applied, the same verify steps are performed as described previously for progression 2401. Optionally, additional progressions can be applied, where the additional increment of time added to a programming pulse is of shorter duration than in the previously used progression. Although only one temporal progression is shown, one of ordinary skill in the art will appreciate that any number of different temporal progressions can be applied.
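The pulse-duration variant can be sketched in the same style; the starting width and step values below are illustrative assumptions, not figures from the text:

```python
# Sketch of progression 2403: voltage held fixed, each pulse longer
# than the last by a constant time increment t_step.
def pulse_durations(t0, t_step, n_pulses):
    """Return the duration of each successive program pulse."""
    return [t0 + k * t_step for k in range(n_pulses)]
```

A verify step (as in step 2304) would follow each pulse, stopping the sequence once the target current is reached.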

Additional details will now be provided for two additional implementations of coarse programming method 2205.

Fig. 26 illustrates a second embodiment of coarse programming method 2205, which is an adaptive calibration method 2600. The method starts (step 2601). The cell is programmed with a default start value v0 (step 2602). Unlike in search and execute method 2300, here v0 is not derived from a lookup table; instead, it can be a relatively small initial value. The control gate voltage of the cell is measured at a first current value IR1 (e.g., 100 nA) and a second current value IR2 (e.g., 10 nA), and a sub-threshold slope is determined based on those measurements (e.g., 360 mV/dec) and stored (step 2603).

A new desired voltage v_i is determined (step 2604). The first time this step is performed, i = 1, and v_1 is determined based on the stored sub-threshold slope value and a current target and offset value, using a sub-threshold equation such as the following:

v_i = v_(i-1) + v_increment, where v_increment is proportional to the slope of Vg, and

Vg = k * Vt * log[Ids / (wa * Io)]

where wa is w of the memory cell, and Ids is the current target value plus an offset value.

If the stored slope value is relatively steep, then a relatively small current offset value can be used. If the stored slope value is relatively flat, then a relatively high current offset value can be used. Thus, determining the slope information allows a current offset value to be selected that is tailored to the particular cell in question. This ultimately makes the programming process shorter. As this step is repeated, i is incremented, and v_i = v_(i-1) + v_increment. The cell is then programmed using v_i. v_increment can be determined from a lookup table of v_increment values versus target current values.
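The slope-based arithmetic can be sketched as follows. The slope formula follows the sub-threshold relation given above; the voltage-update rule is an illustrative interpretation (stepping by the slope times the remaining decades of current), not the patented formula verbatim:

```python
import math

def subthreshold_slope(cgr1, ir1, cgr2, ir2):
    """Sub-threshold slope in V/decade from two control-gate voltage
    measurements (step 2603): CGR1 measured at IR1, CGR2 at IR2."""
    return (cgr2 - cgr1) / math.log10(ir2 / ir1)

def next_program_voltage(v_prev, slope_v_per_dec, i_now, i_target):
    """Illustrative step-2604 style update: scale the voltage step by
    the decades of current still to cover toward the target."""
    return v_prev + slope_v_per_dec * math.log10(i_now / i_target)
```

A steep slope (small V/dec) yields small voltage steps, matching the text's observation that slope information tailors the update to the particular cell.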

Then, a verify operation occurs, in which a read operation is performed on the selected cell and the current drawn by the selected cell (Icell) is measured (step 2605). If Icell is less than or equal to I_CT (which here is the coarse target threshold), where I_CT is set to I_D + ICTOFFSET, and ICTOFFSET is an offset value added to prevent program overshoot, then adaptive calibration method 2600 is complete and precision programming method 2206 can begin. If Icell is not less than or equal to I_CT, then steps 2604–2605 are repeated and i is incremented.

Fig. 27 illustrates aspects of adaptive calibration method 2600. During step 2603, a current source 2701 is used to apply the exemplary current values IR1 and IR2 to the selected cell (here, memory cell 2702), and the voltage at the control gate of memory cell 2702 is then measured (CGR1 for IR1, and CGR2 for IR2). The slope will be (CGR2 − CGR1)/dec.

FIG. 28 illustrates a third embodiment of coarse programming method 2205, which is an absolute calibration method 2800. The method begins (step 2801). The cell is programmed with a default start value v0 (step 2802). The control gate voltage of the cell (VCGRx) is measured at a current value Itarget and stored (step 2803). A new desired voltage v1 is determined based on the stored control gate voltage and a current target and offset value, Ioffset + Itarget (step 2804). For example, the new desired voltage v1 can be calculated as follows: v1 = v0 + (VCGBIAS − stored VCGR), where VCGBIAS is approximately 1.5 V, which is the default read control gate voltage at the maximum target current, and stored VCGR is the read control gate voltage measured in step 2803.
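The step-2804 arithmetic can be checked with a short sketch, using the example constant from the text (VCGBIAS ≈ 1.5 V); the function name is an assumption:

```python
# Default read control-gate voltage at the maximum target current,
# per the example value (~1.5 V) given in the text.
VCGBIAS = 1.5

def first_program_voltage(v0, vcgr_stored):
    """v1 = v0 + (VCGBIAS - stored VCGR), per absolute calibration
    method 2800, step 2804."""
    return v0 + (VCGBIAS - vcgr_stored)
```

A cell that reads at a lower stored VCGR (further from the bias point) thus receives a proportionally larger first program voltage.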

The cell is then programmed using v_i (step 2805). When i = 1, the voltage v_1 from step 2804 is used. When i ≥ 2, the voltage v_i = v_(i-1) + v_increment is used, where v_increment can be determined from a lookup table of v_increment values versus target current values. Then, a verify operation occurs, in which a read operation is performed on the selected cell and the current drawn by the selected cell (Icell) is measured (step 2806). If Icell is less than or equal to I_CT (which here is a threshold), then absolute calibration method 2800 is complete and precision programming method 2206 can begin. If Icell is not less than or equal to I_CT, then steps 2805–2806 are repeated and i is incremented.

Fig. 29 shows a circuit 2900 for implementing step 2803 of absolute calibration method 2800. A voltage source (not shown) generates VCGR, which starts at an initial voltage and ramps upward. Here, n+1 different current sources 2901 (2901-0, 2901-1, 2901-2, ..., 2901-n) generate different currents IO0, IO1, IO2, ..., IOn of increasing magnitude. Each current source 2901 is connected to an inverter 2902 (2902-0, 2902-1, 2902-2, ..., 2902-n) and a memory cell 2903 (2903-0, 2903-1, 2903-2, ..., 2903-n). As VCGR ramps upward, each memory cell 2903 draws an increasing amount of current, and the input voltage to each inverter 2902 decreases. Because IO0 < IO1 < IO2 < ... < IOn, the output of inverter 2902-0 will switch from low to high first as VCGR increases. The output of inverter 2902-1 will switch from low to high next, then the output of inverter 2902-2, and so on, until the output of inverter 2902-n switches from low to high. Each inverter 2902 controls a switch 2904 (2904-0, 2904-1, 2904-2, ..., 2904-n), such that when the output of an inverter 2902 is high, its switch 2904 closes, which causes VCGR to be sampled by a capacitor 2905 (2905-0, 2905-1, 2905-2, ..., 2905-n). Thus, the switches 2904 and capacitors 2905 form sample-and-hold circuits. In absolute calibration method 2800 of FIG. 28, the values IO0, IO1, IO2, ..., IOn are used as possible values of Itarget, with the corresponding sampled voltages used as the associated values VCGRx. Graph 2906 shows VCGR ramping upward over time, and the outputs of inverters 2902-0, 2902-1, and 2902-n switching from low to high at various times.

FIG. 30 illustrates an exemplary progression 3000 for programming a selected cell during adaptive calibration method 2600 or absolute calibration method 2800. In one embodiment, a voltage Vcgp is applied to the control gates of the memory cells in a selected row. The number of selected memory cells in the selected row is, for example, 32 cells; thus, up to 32 memory cells in the selected row can be programmed in parallel. Each memory cell is enabled to couple to a programming current Iprog via a bit line enable signal. If the bit line enable signal is inactive (meaning a positive voltage is applied to the selected bit line), the memory cell is inhibited (not programmed). As shown in FIG. 30, a bit line enable signal En_blx (where x varies between 1 and n, and n is the number of bit lines) is enabled at different times with the Vcgp voltage level required for that bit line (and hence for the selected memory cell on that bit line). In another embodiment, the voltages applied to the control gates of selected cells can be controlled using the enable signals on the bit lines. Each bit line enable signal causes a desired voltage (such as v_i described with respect to FIG. 28) corresponding to that bit line to be applied as Vcgp. The bit line enable signal can also control the programming current flowing into the bit line. In this example, each subsequent control gate voltage Vcgp is higher than the previous voltage. Alternatively, each subsequent control gate voltage can be lower or higher than the previous voltage. Each subsequent increment of Vcgp can be equal or unequal to the previous increment.

FIG. 31 illustrates an exemplary progression 3100 for programming a selected cell during adaptive calibration method 2600 or absolute calibration method 2800. In one embodiment, a bit line enable signal enables a selected bit line (meaning a selected memory cell on that bit line) to be programmed with the corresponding Vcgp voltage level. In another embodiment, the voltage applied to the control gate of a selected cell, which is ramped in increments, can be controlled using the bit line enable signals. Each bit line enable signal causes a desired voltage (such as v_i described with respect to FIG. 28) corresponding to that bit line to be applied to the control gate. In this example, each subsequent increment is equal to the previous increment.

FIG. 32 illustrates a system for implementing the input and output method for reading or verifying with a VMM array. An input function circuit 3201 receives digital bit values and converts those digital values into analog signals, which are then used to apply voltages, determined by control gate decoder 3202, to the control gates of selected cells in array 3204. Meanwhile, word line decoder 3203 is also used to select the row in which the selected cell is located. An output neuron circuit block 3205 performs the output function for each column (neuron) of cells in array 3204. The output circuit block 3205 can be implemented using an integrating analog-to-digital converter (ADC), a successive approximation register (SAR) ADC, or a sigma-delta ADC.

In one embodiment, the digital values provided to input function circuit 3201 comprise, for example, four bits (DIN3, DIN2, DIN1, and DIN0), and the different bit values correspond to different numbers of input pulses applied to the control gate. A larger number of pulses will cause a larger output value (current) of the cell. Examples of bit values and pulse values are shown in Table 11:

Table 11: Digital bit inputs and generated pulses

  DIN3  DIN2  DIN1  DIN0  Pulses generated
  0     0     0     0     0
  0     0     0     1     1
  0     0     1     0     2
  0     0     1     1     3
  0     1     0     0     4
  0     1     0     1     5
  0     1     1     0     6
  0     1     1     1     7
  1     0     0     0     8
  1     0     0     1     9
  1     0     1     0     10
  1     0     1     1     11
  1     1     0     0     12
  1     1     0     1     13
  1     1     1     0     14
  1     1     1     1     15

In the above example, a maximum of 16 pulses are used to read out the cell value for a 4-bit digital value. Each pulse equals one unit of cell value (current). For example, if Icell unit = 1 nA, then for DIN[3-0] = 0001, Icell = 1 * 1 nA = 1 nA; and for DIN[3-0] = 1111, Icell = 15 * 1 nA = 15 nA.

In another embodiment, the digital bit inputs use digital bit position summation to read out the cell value, as shown in Table 12. Here, only 4 pulses are needed to evaluate the 4-bit digital value. For example, a first pulse is used to evaluate DIN0, a second pulse is used to evaluate DIN1, a third pulse is used to evaluate DIN2, and a fourth pulse is used to evaluate DIN3. The results from the four pulses are then summed according to bit position. The digital bit summation implemented is as follows: Output = (2^0 * DIN0 + 2^1 * DIN1 + 2^2 * DIN2 + 2^3 * DIN3) * Icell unit.

For example, if Icell unit = 1 nA, then for DIN[3-0] = 0001, Icell total = 0 + 0 + 0 + 1 nA = 1 nA; and for DIN[3-0] = 1111, Icell total = 8 * 1 nA + 4 * 1 nA + 2 * 1 nA + 1 * 1 nA = 15 nA.
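The two input schemes can be contrasted in a short sketch; the 1 nA unit current is the example value from the text, and the function names are assumptions:

```python
ICELL_UNIT = 1e-9  # 1 nA per unit of cell current, per the example

def pulse_count_output(din):
    """Table 11 scheme: up to 15 pulses, one unit of current each."""
    return din * ICELL_UNIT

def bit_position_output(din):
    """Table 12 scheme: only 4 pulses, one per bit DIN0..DIN3, then
    summed with binary 2^n weights per bit position."""
    bits = [(din >> n) & 1 for n in range(4)]            # DIN0..DIN3
    return sum((2 ** n) * b for n, b in enumerate(bits)) * ICELL_UNIT
```

Both schemes produce the same total current for a given input; the bit-position scheme simply needs far fewer pulses (4 instead of up to 15).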

Table 12: digital bit input summation

Fig. 33 shows an example of a charge summer 3300 that can be used to sum the output of a VMM during a verify operation, to get a single analog value representing the output, which can then optionally be converted into digital bit values. The charge summer 3300 includes a current source 3301 and a sample-and-hold circuit comprising a switch 3302 and a sample-and-hold (S/H) capacitor 3303. As shown for the example of a 4-bit digital value, there are 4 S/H circuits to hold the values from the 4 evaluation pulses, where the values are summed at the end of the process. The S/H capacitors 3303 are selected with ratios associated with the 2^n * DINn bit positions; for example, C_DIN3 = x8 Cu, C_DIN2 = x4 Cu, C_DIN1 = x2 Cu, and C_DIN0 = x1 Cu. The current source 3301 is also scaled accordingly.
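The binary capacitor ratios can be sketched as follows (Cu is one unit capacitance; the helper is illustrative):

```python
def scaled_caps(cu, n_bits=4):
    """S/H capacitor sizes C_DIN0..C_DIN3 with binary ratios
    x1, x2, x4, x8 of the unit capacitance Cu, bit 0 first."""
    return [cu * (1 << n) for n in range(n_bits)]
```

The current summer of FIG. 34 and the digital summer of FIG. 35 apply the same x1/x2/x4/x8 scaling to currents and digital outputs, respectively.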

FIG. 34 shows a current summer 3400 that can be used to sum the output of a VMM during a verify operation. The current summer 3400 includes a current source 3401, a switch 3402, switches 3403 and 3404, and a switch 3405. As shown for the example of a 4-bit digital value, there are current source circuits to hold the values from the 4 evaluation pulses, where the values are summed at the end of the process. The current sources are scaled based on the 2^n * DINn bit positions; for example, I_DIN3 = x8 Icell unit, I_DIN2 = x4 Icell unit, I_DIN1 = x2 Icell unit, and I_DIN0 = x1 Icell unit.

Fig. 35 shows a digital summer 3500, which receives a plurality of digital values, sums them together, and generates an output DOUT representing the sum of the inputs. The digital summer 3500 can be used during a verify operation. As shown for the example of a 4-bit digital value, there are digital output bits to hold the values from the 4 evaluation pulses, where the values are summed at the end of the process. The digital outputs are digitally scaled based on the 2^n * DINn bit positions; for example, DOUT3 = x8 DOUT0, DOUT2 = x4 DOUT0, DOUT1 = x2 DOUT0, and DOUT0 = x1 DOUT0.

Fig. 36A shows an integrating dual-slope ADC 3600 applied to an output neuron to convert the cell current into digital output bits. An integrator consisting of integrating op-amp 3601 and integrating capacitor 3602 integrates the cell current ICELL versus a reference current IREF. As shown in Fig. 36B, during a fixed time t1, the cell current is integrated up (Vout rises), and a reference current is then applied to integrate down for a time t2 (Vout falls). The cell current Icell = (t2/t1) * IREF. For example, for t1, for 10-bit digital bit resolution, 1024 cycles are used, and for t2, the number of cycles varies from 0 to 1024 depending on the value of Icell.
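The dual-slope relation can be checked numerically; the cycle counts below follow the 10-bit example from the text, and the function name is an assumption:

```python
def dual_slope_icell(t1_cycles, t2_cycles, iref):
    """Recover the cell current from the dual-slope relation
    Icell = (t2/t1) * IREF, with times expressed in clock cycles."""
    return (t2_cycles / t1_cycles) * iref
```

With t1 fixed at 1024 cycles, a down-integration of 512 cycles corresponds to a cell current of half the reference current.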

Fig. 36C shows an integrating single-slope ADC 3660 applied to an output neuron to convert the cell current into digital output bits. An integrator consisting of integrating op-amp 3661 and integrating capacitor 3662 integrates the cell current ICELL. As shown in Fig. 36D, during a time t1, one cell current is integrated up (Vout rises until it reaches Vref2), and during a time t2, another cell current is integrated up. The cell current Icell = Cint * Vref2 / t. A pulse counter is used to count the number of pulses (digital output bits) during the integration time t. For example, as shown, the digital output bits for t1 are fewer than the digital output bits for t2, which means the cell current during t1 is larger than the cell current during the t2 integration. An initial calibration is performed with a reference current and a fixed time to calibrate the integrating capacitor value, Cint = Tref * Iref / Vref2.

Fig. 36E shows an integrating dual-slope ADC 3680 applied to an output neuron to convert the cell current into digital output bits. The integrating dual-slope ADC 3680 does not utilize an integrating op-amp. The cell current or the reference current is integrated directly on capacitor 3682. A pulse counter is used to count the pulses (digital output bits) during the integration time. The cell current Icell = (t2/t1) * IREF.

Fig. 36F shows an integrating single-slope ADC 3690 applied to an output neuron to convert the cell current into digital output bits. The integrating single-slope ADC 3690 does not utilize an integrating op-amp. The cell current is integrated directly on capacitor 3692. A pulse counter is used to count the pulses (digital output bits) during the integration time. The cell current Icell = Cint * Vref2 / t.

Fig. 37A shows a SAR (successive approximation register) ADC applied to an output neuron to convert a cell current into digital output bits. The cell current can be dropped across a resistor to convert it into a voltage VCELL. Alternatively, the cell current can charge an S/H capacitor to convert it into VCELL. A binary search is used to compute the bits, starting from the MSB (most significant bit). Based on the digital bits from SAR 3701, DAC 3702 is used to set the appropriate analog reference voltage to comparator 3703. The output of comparator 3703 is in turn fed back to SAR 3701 to select the next analog level. As shown in Fig. 37B, for the example of 4-bit digital output bits, there are 4 evaluation periods: a first pulse evaluates DOUT3 by setting the analog level halfway, then a second pulse evaluates DOUT2 by setting the analog level halfway through the top half or halfway through the bottom half, and so on.
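The MSB-first binary search of Figs. 37A–B can be sketched behaviorally as follows; this is an idealized software model of the comparator/DAC loop, not the circuit itself:

```python
# Idealized SAR conversion: each cycle tests one bit, MSB first, by
# comparing VCELL against the DAC level for the trial code.
def sar_convert(vcell, vref, n_bits=4):
    """Return the n-bit code approximating vcell within [0, vref)."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        # The DAC maps the trial code to a fraction of vref; the
        # comparator keeps the bit if vcell is at or above that level.
        if vcell >= (trial / (1 << n_bits)) * vref:
            code = trial
    return code
```

For 4 bits, the first comparison is against the midpoint (code 1000), matching the DOUT3 evaluation described above.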

Fig. 38 shows a sigma-delta ADC 3800 applied to an output neuron to convert a cell current into digital output bits. An integrator consisting of op-amp 3801 and capacitor 3805 integrates the sum of the current from the selected cell and the reference current from 1-bit current DAC 3804. A comparator 3802 compares the integrated output voltage against a reference voltage. Clocked DFF 3803 provides a digital output stream depending on the output of comparator 3802. The digital output stream typically goes through a digital filter before being output as digital output bits.

It should be noted that, as used herein, the terms "over" and "on" both inclusively include "directly on" (no intermediate materials, elements, or spaces disposed therebetween) and "indirectly on" (intermediate materials, elements, or spaces disposed therebetween). Likewise, the term "adjacent" includes "directly adjacent" (no intermediate materials, elements, or spaces disposed therebetween) and "indirectly adjacent" (intermediate materials, elements, or spaces disposed therebetween), "mounted to" includes "directly mounted to" (no intermediate materials, elements, or spaces disposed therebetween) and "indirectly mounted to" (intermediate materials, elements, or spaces disposed therebetween), and "electrically coupled" includes "directly electrically coupled to" (no intermediate materials or elements therebetween that electrically connect the elements together) and "indirectly electrically coupled to" (intermediate materials or elements therebetween that electrically connect the elements together). For example, forming an element "over a substrate" can include forming the element directly on the substrate with no intermediate materials/elements therebetween, as well as forming the element indirectly on the substrate with one or more intermediate materials/elements therebetween.

Claims (amended under PCT Article 19)

1. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate, the method comprising:

performing a coarse programming process comprising:

selecting one of M different current values as a first threshold current value, where M < N;

adding charge to the floating gate; and

Repeating the adding step until a current through the selected non-volatile memory cell is less than or equal to the first threshold current value during a verify operation; and

a precise programming process is performed until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.

2. The method of claim 1, further comprising:

a second precise programming process is performed until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

3. The method of claim 1, wherein the precise programming process comprises applying voltage pulses of increasing magnitude to control gates of the selected non-volatile memory cells.

4. The method of claim 1, wherein the precise programming process comprises applying voltage pulses of increasing duration to the control gates of the selected non-volatile memory cells.

5. The method of claim 2, wherein the second precise programming process comprises applying voltage pulses of increasing magnitude to the control gates of the selected non-volatile memory cells.

6. The method of claim 2, wherein the second precise programming process comprises applying voltage pulses of increasing duration to the control gates of the selected non-volatile memory cells.

7. The method of claim 1, wherein the selected non-volatile memory cell comprises a floating gate.

8. The method of claim 7, wherein the selected non-volatile memory cell is a split gate flash memory cell.

9. The method of claim 1, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

10. The method of claim 1, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

11. The method of claim 1, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

12. The method of claim 1, further comprising:

performing a read operation on the selected non-volatile memory cell;

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

13. The method of claim 1, further comprising:

performing a read operation on the selected non-volatile memory cell;

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.

14. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate and a control gate, the method comprising:

performing a coarse programming process comprising:

determining a slope value based on a change in voltage of the control gate of the selected non-volatile memory cell and a change in current consumed by the selected non-volatile memory cell;

determining a next program voltage value based on the slope value;

adding an amount of charge to the floating gate of the selected non-volatile memory cell until a current through the selected non-volatile memory cell during a verify operation is less than or equal to a first threshold current value; and

a precise programming process is performed until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.

15. The method of claim 14, further comprising:

a second precise programming process is performed until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

16. The method of claim 14, wherein the precise programming process comprises applying voltage pulses of increasing magnitude to the control gates of the selected non-volatile memory cells.

17. The method of claim 14, wherein the precise programming process includes applying voltage pulses of increasing duration to the control gates of the selected non-volatile memory cells.

18. The method of claim 15, wherein the second precise programming process comprises applying voltage pulses of increasing magnitude to the control gates of the selected non-volatile memory cells.

19. The method of claim 15, wherein the second precise programming process comprises applying voltage pulses of increasing duration to the control gates of the selected non-volatile memory cells.

20. The method of claim 14, wherein the step of determining a slope value comprises:

applying a program voltage to the control gate of the selected non-volatile memory cell;

applying a first current through the selected non-volatile memory cell and determining a first voltage of the control gate;

applying a second current through the selected non-volatile memory cell and determining a second voltage of the control gate; and

Calculating the slope value by dividing a difference between the second voltage and the first voltage by a difference between the second current and the first current.

21. The method of claim 14, wherein the selected non-volatile memory cell is a split gate flash memory cell.

22. The method of claim 14, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

23. The method of claim 14, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

24. The method of claim 14, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

25. The method of claim 14, further comprising:

performing a read operation on the selected non-volatile memory cell; and

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

26. The method of claim 14, further comprising:

performing a read operation on the selected non-volatile memory cell; and

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.
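Claims 25 and 26 digitize the cell's read current with an ADC. A single-slope integrating conversion, one common form of the integrating converter named in claim 25, can be sketched as follows (function names and constants are illustrative only):

```python
def single_slope_adc(i_cell, i_ref, t_integrate, clock_hz, n_bits):
    """Single-slope integrating conversion: the cell's read current
    charges a capacitor for a fixed window, then a known reference
    current discharges it while a counter runs; the final count is
    the digital output code."""
    charge = i_cell * t_integrate            # integration phase
    t_discharge = charge / i_ref             # de-integration phase
    count = round(t_discharge * clock_hz)    # counter value
    return min(count, (1 << n_bits) - 1)     # clamp to full scale


# A 2 uA cell current against a 4 uA reference with a 1 ms window
# and a 64 kHz counter clock yields a code of 32.
code = single_slope_adc(2e-6, 4e-6, 1e-3, 64_000, 8)
```

Larger cell currents integrate more charge, take longer to de-integrate, and so produce proportionally larger codes.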

27. A method of programming a selected non-volatile memory cell to store one of N possible values, where N is an integer greater than 2, the selected non-volatile memory cell including a floating gate and a control gate, the method comprising:

performing a coarse programming process comprising:

applying a program voltage to the control gate of the selected non-volatile memory cell;

repeating the applying step and increasing the program voltage by an incremental voltage each time the applying step is performed until a current through the selected non-volatile memory cell is less than or equal to a first threshold current value during a verify operation; and

performing a precise programming process until a current through the selected non-volatile memory cell is less than or equal to a second threshold current value during a verify operation.
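The coarse step in claim 27 is a standard program-and-verify loop with an incrementing control-gate voltage. A sketch against a toy cell model (the class, method names, and numeric values are hypothetical, not from the patent):

```python
class SimulatedCell:
    """Toy floating-gate model: each program pulse adds charge in
    proportion to the applied voltage, and the verify-read current
    falls as stored charge grows."""
    def __init__(self):
        self.charge = 0.0

    def apply_program_pulse(self, v):
        self.charge += 0.1 * v

    def verify_read_current(self):
        return max(0.0, 10.0 - self.charge)  # arbitrary units


def coarse_program(cell, v_start, v_increment, i_threshold, max_pulses=100):
    """Apply a pulse, verify-read, and step the program voltage up
    until the read current drops to or below the threshold."""
    v = v_start
    for _ in range(max_pulses):
        cell.apply_program_pulse(v)
        if cell.verify_read_current() <= i_threshold:
            return True
        v += v_increment  # increase the programming voltage each pass
    return False


cell = SimulatedCell()
reached = coarse_program(cell, v_start=1.0, v_increment=0.5, i_threshold=5.0)
```

The precise process would then repeat the same loop with smaller increments (or shorter pulses) against the tighter second threshold.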

28. The method of claim 27, further comprising:

performing a second precise programming process until a current through the selected non-volatile memory cell is less than or equal to a third threshold current value during a verify operation.

29. The method of claim 27, wherein the precise programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

30. The method of claim 27, wherein the precise programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

31. The method of claim 28, wherein the precise programming process comprises applying voltage pulses of increasing magnitude to the control gate of the selected non-volatile memory cell.

32. The method of claim 28, wherein the precise programming process comprises applying voltage pulses of increasing duration to the control gate of the selected non-volatile memory cell.

33. The method of claim 27, wherein the selected non-volatile memory cell comprises a floating gate.

34. The method of claim 33, wherein the selected non-volatile memory cell is a split gate flash memory cell.

35. The method of claim 27, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

36. The method of claim 27, further comprising:

prior to performing the coarse programming process:

programming the selected non-volatile memory cell to a "0" state; and

erasing the selected non-volatile memory cell to a weak erase level.

37. The method of claim 27, further comprising:

prior to performing the coarse programming process:

erasing the selected non-volatile memory cell to a "1" state; and

programming the selected non-volatile memory cell to a weak programming level.

38. The method of claim 27, further comprising:

performing a read operation on the selected non-volatile memory cell; and

integrating the current consumed by the selected non-volatile memory cell during the read operation using an integrating analog-to-digital converter to generate a digital bit.

39. The method of claim 27, further comprising:

performing a read operation on the selected non-volatile memory cell; and

converting the current consumed by the selected non-volatile memory cell during the read operation to a digital bit using a sigma-delta type analog-to-digital converter.

40. A method of reading a selected non-volatile memory cell that stores one of N possible values, where N is an integer greater than 2, the method comprising:

applying digital input pulses to the selected non-volatile memory cell;

in response to each of the digital input pulses, determining a value stored in the selected non-volatile memory cell based on an output of the selected non-volatile memory cell.

41. The method of claim 40, wherein the number of digital input pulses corresponds to a binary value.

42. The method of claim 40, wherein the number of digital input pulses corresponds to a digital bit position value.
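Claims 41 and 42 describe two ways the pulse count can encode a digital input. A sketch of both encodings, with the time-domain multiplication they enable (all names are illustrative, not from the patent):

```python
def pulses_for_binary_value(value):
    """Claim 41: the pulse count equals the input's binary value,
    e.g. an input of 5 becomes 5 identical pulses."""
    return value


def pulses_for_bit_position(bit_position):
    """Claim 42: the pulse count corresponds to a digital bit
    position, e.g. bit k becomes 2**k pulses so each bit carries
    its binary weight."""
    return 1 << bit_position


def accumulated_output(cell_output_per_pulse, num_pulses):
    """Summing the cell's per-pulse output over the pulse train
    multiplies the stored analog value by the digital input in
    the time domain."""
    return cell_output_per_pulse * num_pulses


total = accumulated_output(2.0, pulses_for_bit_position(3))  # 2.0 * 8 pulses
```

The accumulated output (current, charge, or a digital count) is what the downstream ADC of claims 43 and 44 then converts.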

43. The method of claim 40, wherein the determining step comprises receiving an output neuron in an integrating analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

44. The method of claim 40, wherein the determining step comprises receiving an output neuron in a successive approximation register analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

45. The method of claim 40, wherein the output is a current.

46. The method of claim 40, wherein the output is a charge.

47. The method of claim 40, wherein the output is a digital bit.

48. The method of claim 40, wherein the selected non-volatile memory cell comprises a floating gate.

49. The method of claim 48, wherein the selected non-volatile memory cell is a split gate flash memory cell.

50. The method of claim 40, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

51. A method of reading a selected non-volatile memory cell that stores one of N possible values, where N is an integer greater than 2, the method comprising:

applying an input to the selected non-volatile memory cell;

in response to the input, determining, using an analog-to-digital converter circuit, a value stored in the selected non-volatile memory cell based on an output of the selected non-volatile memory cell.

52. The method of claim 51, wherein the input is a digital input.

53. The method of claim 51, wherein the input is an analog input.

54. The method of claim 51, wherein the determining step comprises receiving an output neuron in an integrating single-slope or dual-slope analog-to-digital converter and generating a digital bit indicative of the value stored in the non-volatile memory cell.

55. The method of claim 51, wherein the determining step comprises receiving an output neuron in a successive approximation register (SAR) analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

56. The method of claim 51, wherein the determining step comprises receiving an output neuron in a sigma-delta type analog-to-digital converter to generate a digital bit indicative of the value stored in the non-volatile memory cell.

57. The method of claim 51, wherein the selected non-volatile memory cell comprises a floating gate.

58. The method of claim 51, wherein the selected non-volatile memory cell is a split gate flash memory cell.

59. The method of claim 51, wherein the selected non-volatile memory cell is in a vector-matrix multiplication array in an analog memory deep neural network.

60. The method of claim 51, wherein the selected non-volatile memory cell operates in a sub-threshold region.

61. The method of claim 51, wherein the selected non-volatile memory cell operates in a linear region.
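Claims 60 and 61 distinguish two operating regions for the read. As a rough first-order model (standard MOS equations with illustrative constants, not taken from the patent), the cell current is exponential in gate voltage in sub-threshold and roughly linear in the linear (triode) region:

```python
import math


def subthreshold_current(vgs, i0=1e-12, n=1.5, ut=0.0257):
    """Sub-threshold region: current is exponential in gate voltage,
    giving a wide dynamic range at very low currents (attractive for
    low-power analog weight storage)."""
    return i0 * math.exp(vgs / (n * ut))


def linear_region_current(vgs, vds, vt=0.7, k=1e-4):
    """Linear (triode) region: current is roughly proportional to
    the gate overdrive times the drain voltage."""
    return k * ((vgs - vt) * vds - 0.5 * vds ** 2)
```

The exponential sub-threshold characteristic packs many distinguishable levels into a small current range, while the linear region trades dynamic range for a more linear input-output relationship.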
