Analog-to-digital converter using memory in neural network

Document No.: 1220307  Publication date: 2020-09-04

Description: This invention, "Analog-to-digital converter using memory in neural network", was created by Loai Danial and Shahar Kvatinsky on 2018-11-14. Summary: An analog-to-digital conversion apparatus includes an input terminal for receiving an analog input signal; a plurality of output terminals for outputting a plurality of parallel bits of a digital signal representing the analog input signal; and a trainable neural network layer comprising a plurality of connections among the plurality of outputs, each connection having an adjustable weight. The synapses of the neural network are memristors, and training uses online gradient descent.

1. An analog-to-digital conversion apparatus comprising:

an input terminal for receiving an analog input signal;

a plurality of output terminals for outputting a plurality of parallel bits of a digital signal representing the analog input signal; and

a trainable neural network layer including a plurality of connections among the plurality of outputs, wherein each connection has a weight that is adjustable by training.

2. The apparatus of claim 1, wherein: the connections include a plurality of adaptive synapses that respectively provide the adjustable weights.

3. The apparatus of claim 2, wherein: each adaptive synapse is provided with a respective weight, and each output bit is the result of comparing a weighted sum taken over the plurality of output bits.

4. The apparatus of claim 3, wherein: each of the plurality of adaptive synapses includes a memristor provided with a respective weight.

5. The apparatus of claim 4, wherein: the apparatus further includes a training unit having a training data set input, connected to the plurality of output bits, and configured to adjust each of the plurality of adaptive synapses until the plurality of outputs for a given input corresponds to the training data set.

6. The apparatus of claim 5, wherein: the training data set is used in combination with a predetermined maximum voltage and a predetermined number of the plurality of output bits.

7. The apparatus of claim 5 or 6, wherein: the apparatus is configured to adjust by using an online gradient descent.

8. The apparatus of claim 7, wherein: the online gradient descent includes, for a plurality of iterations k, the equation:

W(k+1) = W(k) − η · ∂E(k)/∂W

where η is the learning rate and V_i(k) is a single empirical sample provided to the input at the k-th iteration, and the equation:

E(k) = (1/2) · Σ_i (D_i(k) − N_i(k))²

where D_i(k) and N_i(k) are the desired and actual i-th output bits for the k-th sample.

9. The apparatus of any of the preceding claims 5 to 8, wherein: the adjusting includes minimizing a training error function and an energy cost function.

10. The apparatus of any of the preceding claims 5 to 9, wherein: the adjusting includes minimizing a figure of merit, the figure of merit being:

FOM = P / (2^ENOB · f_s)

where P is the power consumption during the conversion and f_s is the sampling frequency, and:

ENOB = (SNDR − 1.76) / 6.02

where SNDR is the signal-to-noise-and-distortion ratio in decibels.

11. An analog-to-digital conversion method, comprising the steps of:

receiving an analog input signal;

outputting a plurality of parallel bits of a digital signal representing the analog input signal at a plurality of output terminals;

providing a plurality of connections among said plurality of outputs; and

providing adjustable weights to each connection to provide a trainable neural network connecting the plurality of outputs.

12. The method of claim 11, wherein: the plurality of connections form a neural network, and the providing adjustable weights includes adapting a synapse of the neural network.

13. The method of claim 12, wherein: setting each adaptive synapse weight separately such that each output bit is the result of comparing a weighted sum taken over the plurality of output bits.

14. The method of claim 13, wherein: each of the plurality of adaptive synapses includes a memristor provided with a respective weight.

15. The method of claim 13 or 14, wherein: the method includes setting respective weights using training, the training including adjusting each of the plurality of adaptive synapses until a plurality of outputs for a given input correspond to a training data set.

16. The method of claim 15, wherein: the training data set is used in combination with a predetermined maximum voltage and a predetermined number of the plurality of output bits.

17. The method of claim 16, wherein: the method includes adjusting by using an online gradient descent.

18. The method of claim 17, wherein: the online gradient descent includes, for a plurality of iterations k, the equation:

W(k+1) = W(k) − η · ∂E(k)/∂W

where η is the learning rate and V_i(k) is a single empirical sample provided to the input at the k-th iteration, and the equation:

E(k) = (1/2) · Σ_i (D_i(k) − N_i(k))²

where D_i(k) and N_i(k) are the desired and actual i-th output bits for the k-th sample.

19. The method of any of the preceding claims 15 to 18, wherein: the adjusting includes minimizing a training error function and an energy cost function.

20. The method of any of the above claims 15 to 19, wherein: the adjusting includes minimizing a figure of merit, the figure of merit being:

FOM = P / (2^ENOB · f_s)

where P is the power consumption during the conversion and f_s is the sampling frequency, and:

ENOB = (SNDR − 1.76) / 6.02

where SNDR is the signal-to-noise-and-distortion ratio in decibels.
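The figure of merit in claims 10 and 20 combines power, speed, and resolution into a single quantity. As a hedged illustration, assuming the conventional Walden figure of merit, which matches the variable definitions given in the claims (the function names below are illustrative, not from the patent), it can be computed as:

```python
def effective_number_of_bits(sndr_db):
    """ENOB implied by a signal-to-noise-and-distortion ratio in dB,
    using the conventional relation SNDR = 6.02 * ENOB + 1.76."""
    return (sndr_db - 1.76) / 6.02

def figure_of_merit(power_w, f_s_hz, sndr_db):
    """Walden figure of merit: conversion energy per effective level,
    FOM = P / (2**ENOB * f_s), in joules per conversion step."""
    return power_w / (2 ** effective_number_of_bits(sndr_db) * f_s_hz)
```

Lower is better: minimizing this figure of merit during training simultaneously rewards low power, high sampling rate, and high effective resolution, which is how the claimed adjustment addresses the speed-power-accuracy trade-off.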

Technical Field

In certain embodiments, the present disclosure relates to analog-to-digital converters (ADCs) using memristors in neural networks.

Background

The rapid evolution of data-driven systems toward the Internet-of-Things era has paved the way for ubiquitous, emerging, and ever-changing applications of data converters. With the advent of high-speed, high-precision, and low-power mixed-signal systems, there is an increasing demand for accurate, fast, and power-efficient data converters. These systems operate on a variety of real-world continuous-time signals; examples include medical imaging, biosensors, wearable devices, consumer electronics, automobiles, instrumentation, and radio communications. Unfortunately, the speed-power-accuracy trade-off inherent in analog-to-digital converters (ADCs) has kept them from meeting the requirements of these applications. Furthermore, as Moore's law continues to push technology scaling downward, worrisome deep-submicron effects have made this trade-off a chronic bottleneck in modern system design. Special design techniques intended to handle these effects overload the data converter and create significant overhead, aggravating the trade-off and severely degrading performance. Today, data converters lack generic design criteria and are instead customized with complex, application-specific design flows and architectures.

Disclosure of Invention

The present embodiments include an analog-to-digital converter that samples the input and is trained for a particular signal using a neural network based on memristive components. The output bits, together with the current input, are fed back as inputs to the neural network to generate new output bits.
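The feedback scheme just described can be sketched in code. The following is a minimal illustrative model, not the patented circuit: each output bit is a threshold neuron whose inputs are the normalized analog sample and the already-resolved more significant bits, and the adjustable weights are updated by an online delta rule, a simple form of online gradient descent. All names (`NeuralADC`, `train_step`, and so on) are assumptions for illustration.

```python
def step(x):
    """Hard comparator: 1 if the weighted sum is non-negative, else 0."""
    return 1 if x >= 0.0 else 0

class NeuralADC:
    """Illustrative n-bit trainable ADC layer. Bit i (MSB first) is a
    threshold neuron fed by the normalized analog sample, the already
    resolved more significant bits, and a constant bias input."""

    def __init__(self, n_bits, v_max):
        self.n = n_bits
        self.v_max = v_max
        # weights[i] = [w_analog, w_bit0 .. w_bit(i-1), w_bias]
        self.w = [[0.0] * (i + 2) for i in range(n_bits)]

    def ideal_code(self, v):
        """Desired output bits D_i (MSB first) for an analog sample v."""
        level = min(int(v / self.v_max * 2 ** self.n), 2 ** self.n - 1)
        return [(level >> (self.n - 1 - i)) & 1 for i in range(self.n)]

    def _inputs(self, v, prev_bits):
        return [v / self.v_max] + [float(b) for b in prev_bits] + [1.0]

    def convert(self, v):
        """Resolve the bits successively using the trained weights."""
        bits = []
        for i in range(self.n):
            x = self._inputs(v, bits)
            s = sum(wi * xi for wi, xi in zip(self.w[i], x))
            bits.append(step(s))
        return bits

    def train_step(self, v, eta=0.1):
        """One online gradient-descent (delta-rule) update on sample v,
        teacher-forcing the more significant bits from the ideal code."""
        d = self.ideal_code(v)
        for i in range(self.n):
            x = self._inputs(v, d[:i])
            s = sum(wi * xi for wi, xi in zip(self.w[i], x))
            err = d[i] - step(s)          # desired minus actual bit
            for j in range(len(self.w[i])):
                self.w[i][j] += eta * err * x[j]
```

Training on samples placed at the mid-points of the quantization cells drives each neuron's comparison threshold toward the ideal code boundary for its bit.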

According to an aspect of some embodiments of the present invention, there is provided an analog-to-digital conversion apparatus including:

an input terminal for receiving an analog input signal;

a plurality of output terminals for outputting a plurality of parallel bits of a digital signal representing the analog input signal; and

a trainable neural network layer including a plurality of connections between each of the plurality of outputs, respectively, wherein each connection has a weight that is adjustable for training.

In one embodiment, the connections include a plurality of adaptive synapses to respectively provide the adjustable weights.

In one embodiment, each adaptive synapse is provided with a respective weight, and each output bit is the result of comparing a weighted sum taken over the plurality of output bits.

In an embodiment, each of the plurality of adaptive synapses comprises a memristor provided with a respective weight.
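A memristive synapse stores its weight as a device conductance, which is physically confined to a window [G_min, G_max]. As an illustrative sketch (the function name and device values below are assumptions, not taken from the patent), a signed weight can be mapped linearly onto that window:

```python
def weight_to_conductance(w, w_max, g_min=1e-6, g_max=1e-3):
    """Linearly map a weight in [-w_max, w_max] onto an achievable
    memristor conductance window [g_min, g_max] in siemens (values
    are illustrative, not device data from the patent)."""
    frac = (w + w_max) / (2.0 * w_max)   # position within the weight range
    frac = min(max(frac, 0.0), 1.0)      # clip to the physical device limits
    return g_min + frac * (g_max - g_min)
```

In practice signed weights are often realized as a differential pair of devices rather than a single conductance; the single-device mapping above is only for illustration of the bounded-weight constraint.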

Drawings

Some embodiments of the invention are described herein, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the embodiments of the present invention. In this regard, it will be apparent to those skilled in the art from this description, taken in conjunction with the accompanying drawings, how embodiments of the present invention may be practiced.

In the drawings:

FIG. 1 shows a simplified block diagram of a trainable ADC in accordance with an embodiment of the present invention.

Fig. 2 shows a simplified flow diagram of how input and training signals are used for neural network training according to the present embodiment.

Fig. 3a) to 3d) show four graphs of the trade-offs involved in the optimization in the prior art.

Fig. 4 shows a graph comparing training efficiency in the prior art and the present embodiment.

Fig. 5a) and 5b) are a flow chart and generalized circuit for successive approximation according to an embodiment of the present invention.

Fig. 6a) to 6c) are three components of the circuit of fig. 5 b).

Fig. 7a) to 7d) are four diagrams showing aspects of the training process according to the present embodiment.

Fig. 8a) to 8d) are four graphs showing the efficiency of the training process of fig. 7a) to 7 d).

Fig. 9a) to 9d) are four graphs showing the speed-power-accuracy trade-off according to the present embodiment.

Embodiments may further include a training unit having a training data set input, connected to the plurality of output bits, and configured to adjust each of the plurality of adaptive synapses until the plurality of outputs for a given input corresponds to the training data set.

In one embodiment, the training data set is used in combination with a predetermined maximum voltage and a predetermined number of the plurality of output bits.
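A training data set defined by a predetermined maximum (full-scale) voltage and a predetermined number of output bits can be generated exhaustively, one ideal sample per quantization level. A minimal sketch, with an illustrative function name:

```python
def training_pairs(v_max, n_bits):
    """Ideal (analog voltage, MSB-first bit list) training pairs, one
    per quantization level, sampled at each cell's mid-point."""
    lsb = v_max / 2 ** n_bits            # width of one quantization cell
    pairs = []
    for level in range(2 ** n_bits):
        v = (level + 0.5) * lsb          # mid-point of the cell
        bits = [(level >> b) & 1 for b in reversed(range(n_bits))]
        pairs.append((v, bits))
    return pairs
```

Each pair supplies one empirical input sample together with the desired digital code against which the synaptic weights are adjusted.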
