Cell location independent linear weight updateable CMOS synapse array

Document No.: 555562    Publication date: 2021-05-14

Note: This technology, "Cell location independent linear weight updateable CMOS synapse array," was created by 石井正俊, 细川浩二, 冈﨑笃也, and 岩科晶代 on 2019-10-02. Abstract: A neuromorphic circuit (500) includes a crossbar synapse array cell. The crossbar synapse array cell includes a complementary metal-oxide-semiconductor (CMOS) transistor (T6) whose on-resistance is controlled by its gate voltage to update the weight of the crossbar synapse array cell. The neuromorphic circuit (500) further includes a set of row lines that respectively connect the synapse array cells in series with a plurality of pre-synaptic neurons at a first end of the synapse array cells, and a set of column lines that respectively connect the synapse array cells in series with a plurality of post-synaptic neurons at a second end of the synapse array cells. The gate voltage of the CMOS transistor (T6) is controlled by performing a charge-sharing technique that updates the weight of the crossbar synapse array cell using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

1. A neuromorphic circuit comprising:

a crossbar synapse array cell comprising a complementary metal-oxide-semiconductor (CMOS) transistor having an on-resistance controlled by a gate voltage of the CMOS transistor to update a weight of the crossbar synapse array cell;

a set of row lines respectively connecting the synapse array cell in series with a plurality of pre-synaptic neurons at a first end of the synapse array cell; and

a set of column lines respectively connecting the synapse array cell in series with a plurality of post-synaptic neurons at a second end of the synapse array cell,

wherein the gate voltage of the CMOS transistor is controlled by performing a charge-sharing technique that updates the weight of the crossbar synapse array cell using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

2. The neuromorphic circuit of claim 1, wherein the crossbar synapse array cell comprises three capacitors C1, C2, and C3 to update the gate voltage, and the charge-sharing technique is performed row by row such that the gate voltage is updated incrementally by using the capacitors C1 and C3 and decrementally by using the capacitors C2 and C3.

3. The neuromorphic circuit of claim 2, wherein the neuromorphic circuit further comprises:

a pair of series-connected p-type field-effect transistors (pFETs); and

a pair of series-connected n-type field-effect transistors (nFETs),

wherein one terminal of the capacitor C1 is connected to a common point between the pair of pFETs connected in series, and one terminal of the capacitor C2 is connected to a common point between the pair of nFETs connected in series.

4. The neuromorphic circuit of claim 3, wherein one terminal of the capacitor C3 is connected to a common point between the pair of pFETs and the pair of nFETs.

5. The neuromorphic circuit of claim 2, wherein the synapse array cell comprises:

a pair of series-connected p-type field-effect transistors (pFETs) for controlling the capacitors C1 and C3; and

a pair of nFETs connected in series with each other and to the pair of pFETs, for controlling the capacitors C2 and C3.

6. The neuromorphic circuit of claim 5, wherein the synapse array cell further comprises a connection point that connects the pair of pFETs in series to the pair of nFETs and is further connected to one terminal of the capacitor C3 and to the gate of the CMOS transistor.

7. The neuromorphic circuit of claim 5, wherein the neuromorphic circuit performs the charge-sharing technique such that an update increment line and a clock increment line are switched using the non-overlapping pulses to update the weight incrementally.

8. The neuromorphic circuit of claim 5, wherein the neuromorphic circuit performs the charge-sharing technique such that an update decrement line and a clock decrement line are switched using the non-overlapping pulses to update the weight decrementally.

9. The neuromorphic circuit of claim 5, wherein the capacitors C1 and C2 are replaced by parasitic capacitances of the pair of pFETs and the pair of nFETs, respectively.

10. A neuromorphic chip comprising an array of synapses formed by a plurality of neuromorphic circuits according to any preceding claim.

11. A method, comprising:

forming a crossbar synapse array cell comprising a complementary metal-oxide-semiconductor (CMOS) transistor having an on-resistance controlled by a gate voltage of the CMOS transistor to update a weight of the crossbar synapse array cell;

forming a set of row lines respectively connecting the synapse array cell in series with a plurality of pre-synaptic neurons at a first end of the synapse array cell; and

forming a set of column lines respectively connecting the synapse array cell in series with a plurality of post-synaptic neurons at a second end of the synapse array cell,

wherein the gate voltage of the CMOS transistor is controlled by performing a charge-sharing technique that updates the weight of the crossbar synapse array cell using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

12. The method of claim 11, wherein the crossbar synapse array cell is formed to include three capacitors C1, C2, and C3 to update the gate voltage, and the charge-sharing technique is performed row by row such that the gate voltage is updated incrementally by using the capacitors C1 and C3 and decrementally by using the capacitors C2 and C3.

Technical Field

The present invention relates generally to machine learning, and more particularly to a linearly weight-updateable CMOS synapse array that is independent of cell location.

Description of the Related Art

Analog multiply-add accelerators have gained significant interest due to their power efficiency. Various synaptic elements are being developed, such as PCM, RRAM, MRAM, and the like. Analog transistors are one option for synaptic elements because of their linearity in the triode region. However, conventional techniques unintentionally generate different pulse shapes at the proximal and distal ends of the synapse array, which results in different weight-update characteristics across the cell array. Therefore, a solution is needed for the variation in weight-update characteristics caused by cell location in the synapse array.
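The triode-region linearity mentioned above can be illustrated with a first-order MOSFET model. The following is only a hypothetical sketch; the device parameters K and VTH are illustrative assumptions, not values from this disclosure:

```python
# Why a triode-region transistor works as a linear synapse: for small
# V_DS, the channel conductance is roughly proportional to the gate
# overdrive (V_GS - V_th). K and VTH below are assumed, illustrative values.

K = 200e-6   # mu_n * C_ox * (W/L), in A/V^2 (assumed)
VTH = 0.4    # threshold voltage, in V (assumed)

def drain_current(v_gs, v_ds):
    """First-order triode-region MOSFET current equation."""
    if v_gs <= VTH:
        return 0.0
    return K * ((v_gs - VTH) * v_ds - 0.5 * v_ds ** 2)

# Effective weight (conductance G = I_D / V_DS) scales almost linearly
# with the stored gate voltage:
v_ds = 0.05
for v_gs in (0.6, 0.8, 1.0):
    g = drain_current(v_gs, v_ds) / v_ds
    print(f"V_GS={v_gs:.1f} V -> G={g * 1e6:.1f} uS")
# prints G values of 35.0, 75.0, 115.0 uS (equal 40 uS steps)
```

Equal gate-voltage steps thus map to equal conductance steps, which is why controlling the T6 gate voltage in linear increments yields linear weight updates.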


Disclosure of Invention

According to one aspect of the present invention, a neuromorphic circuit is provided. The neuromorphic circuit includes a crossbar synapse array cell. The crossbar synapse array cell includes a complementary metal-oxide-semiconductor (CMOS) transistor whose on-resistance is controlled by the gate voltage of the CMOS transistor to update the weight of the crossbar synapse array cell. The neuromorphic circuit further includes a set of row lines respectively connecting the synapse array cells in series with a plurality of pre-synaptic neurons at a first end of the synapse array cells. The neuromorphic circuit further includes a set of column lines respectively connecting the synapse array cells in series with a plurality of post-synaptic neurons at a second end of the synapse array cells. The gate voltage of the CMOS transistor is controlled by performing a charge-sharing technique that updates the weight of the crossbar synapse array cell using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

According to another aspect of the present invention, a neuromorphic chip is provided. The neuromorphic chip includes a synapse array formed by crossbar synapse array cells. Each crossbar synapse array cell includes a complementary metal-oxide-semiconductor (CMOS) transistor having an on-resistance controlled by the gate voltage of the CMOS transistor to update the weight of that crossbar synapse array cell. The synapse array further includes a set of row lines respectively connecting the synapse array cells in series with a plurality of pre-synaptic neurons at a first end of the synapse array cells, and a set of column lines respectively connecting the synapse array cells in series with a plurality of post-synaptic neurons at a second end of the synapse array cells. The gate voltage of each CMOS transistor is controlled by performing a charge-sharing technique that updates the weights of the crossbar synapse array cells using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

According to yet another aspect of the present invention, a method is provided. The method includes forming a crossbar synapse array cell comprising a complementary metal-oxide-semiconductor (CMOS) transistor having an on-resistance controlled by the gate voltage of the CMOS transistor to update the weight of the crossbar synapse array cell. The method also includes forming a set of row lines respectively connecting the synapse array cell in series with a plurality of pre-synaptic neurons at a first end of the synapse array cell, and forming a set of column lines respectively connecting the synapse array cell in series with a plurality of post-synaptic neurons at a second end of the synapse array cell. The gate voltage of the CMOS transistor is controlled by performing a charge-sharing technique that updates the weight of the crossbar synapse array cell using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.

Drawings

The following description will provide details of preferred embodiments with reference to the following drawings, in which:

FIG. 1 is a block diagram illustrating a processing system to which the present invention may be applied;

FIG. 2 is a block diagram illustrating an environment in which the present invention may be applied;

FIG. 3 is a block diagram illustrating another environment in which the present invention may be applied;

FIG. 4 is a diagram illustrating waveforms at the proximal and distal ends of an array of synapses in accordance with an embodiment of the present disclosure;

FIG. 5 is a block diagram illustrating a neuromorphic unit circuit according to an embodiment of the present invention;

FIG. 6 is a timing diagram illustrating exemplary timing associated with the circuit of FIG. 5;

FIG. 7 is a timing diagram illustrating another timing sequence associated with the circuit of FIG. 5;

FIG. 8 is a block diagram illustrating a neuromorphic array formed by the plurality of circuits of FIG. 5;

FIG. 9 is a timing diagram showing pulses applied to the array of FIG. 8;

FIG. 10 is a flow diagram illustrating a method for a linearly weight-updateable CMOS synapse array without cell-location dependency, in accordance with an embodiment of the invention;

FIGS. 11-12 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 0.8 ns at 500 nA;

FIGS. 13-14 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 1.6 ns at 250 nA;

FIGS. 15-16 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 3.2 ns at 125 nA;

FIGS. 17-18 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 6.4 ns at 62.5 nA;

FIGS. 19-20 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 12.8 ns at 31.25 nA;

FIGS. 21-22 are graphs for case 1-PA, increasing the gate voltage of transistor T6 and using a pulse having a width of 25.6 ns at 15.625 nA;

FIGS. 23-25 are graphs for case 1-PI, increasing the gate voltage of transistor T6 and using a pulse having a width of 1.3 ns;

FIGS. 26-27 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 0.8 ns at 500 nA;

FIGS. 28-29 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 1.6 ns at 250 nA;

FIGS. 30-31 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 3.2 ns at 125 nA;

FIGS. 32-33 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 6.4 ns at 62.5 nA;

FIGS. 34-35 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 12.8 ns at 31.25 nA;

FIGS. 36-37 are graphs for case 1-PA, decreasing the gate voltage of transistor T6 and using a pulse having a width of 25.6 ns at 15.625 nA;

FIGS. 38-40 are graphs for case 1-PI, decreasing the gate voltage of transistor T6 and using a pulse having a width of 3.1 ns;

FIG. 41 is a block diagram illustrating a cloud computing environment having one or more cloud computing nodes with which local computing devices used by cloud consumers communicate, according to an embodiment of the present invention; and

FIG. 42 is a block diagram illustrating a set of functional abstraction layers provided by a cloud computing environment, according to an embodiment of the present invention.

Detailed Description

The present invention relates to a CMOS synapse array with linearly updateable weights and no cell-location dependency.

In one embodiment, a charge-sharing technique is used for the weight update, using non-overlapping pulses from the vertical and horizontal control lines. In one embodiment, the present invention adds one pFET and one nFET to the synapse unit cell, which enables the charge-sharing technique and provides fast overall operation.

Traditionally, pulse-shape differences between the proximal and distal ends can be suppressed by using wider pulse widths, which undesirably reduces operating speed. In contrast, the present invention advantageously improves operating speed while maintaining almost the same weight-update characteristic over the entire cell array.

Furthermore, the present invention does not require global bias circuits and their distribution.

Moreover, the present invention is easy to implement because, from a circuit-design point of view, it is relatively easy to generate non-overlapping pulses whose shapes are maintained across the cell array.

Given the teachings of the present invention provided herein, one of ordinary skill in the related art can readily ascertain these and various other attendant advantages of the present invention, while maintaining the spirit of the present invention.

FIG. 1 is a block diagram illustrating an exemplary processing system 100 to which the present invention may be applied. Processing system 100 includes a set of processing units (e.g., CPUs) 101, a set of GPUs 102, a set of memory devices 103, a set of communication devices 104, and a set of peripheral devices 105. The CPUs 101 may be single-core or multi-core CPUs. The GPUs 102 may be single-core or multi-core GPUs. The one or more memory devices 103 may include caches, RAM, ROM, and other memories (flash, optical, magnetic, etc.). The communication devices 104 may include wireless and/or wired communication devices (e.g., network adapters (e.g., WiFi, etc.), etc.). The peripheral devices 105 may include display devices, user input devices, printers, imaging devices, and so forth. The elements of processing system 100 are connected by one or more buses or networks, collectively represented by reference numeral 110.

Of course, the processing system 100 may also include other elements (not shown), as well as omit certain elements, as readily contemplated by those skilled in the art. For example, various other input and/or output devices may be included in the processing system 100, depending on its particular implementation, as one of ordinary skill in the art will readily appreciate. For example, different types of wireless and/or wired input and/or output devices may be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations, may also be utilized. Further, in another embodiment, a cloud configuration may be used (e.g., see FIGS. 41-42). These and other variations of the processing system 100 are readily contemplated by one of ordinary skill in the art, given the teachings of the present invention provided herein.

Further, it should be understood that the different figures, as described below with respect to the different elements and steps associated with the present invention, may be implemented in whole or in part by one or more of the elements of system 100.

A description will now be given of two exemplary environments 200 and 300 to which the present invention may be applied, in accordance with various embodiments of the present invention. Environments 200 and 300 are described below with respect to FIGS. 2 and 3, respectively. In more detail, environment 200 includes a learning-based prediction system operatively coupled to a controlled system, while environment 300 includes a learning-based prediction system as part of a controlled system. Further, either of environments 200 and 300 may be part of a cloud-based environment (see, e.g., FIGS. 41 and 42). Given the teachings of the present invention provided herein, one of ordinary skill in the related art can readily ascertain these and other environments to which the present invention may be applied, while maintaining the spirit of the present invention.

FIG. 2 is a block diagram illustrating an environment 200 in which the present invention may be applied.

The environment 200 includes a learning-based prediction system 210 and a controlled system 220. The learning-based prediction system 210 and the controlled system 220 are configured to enable communication therebetween. For example, a transceiver and/or other type of communication device, including wireless, wired, and combinations thereof, may be used. In an embodiment, communication between learning-based prediction system 210 and controlled system 220 may be performed over one or more networks (collectively represented by diagram reference numeral 230). The communications may include, but are not limited to, multivariate time series data from the controlled system 220, and predictions and action-initiated control signals from the learning-based prediction system 210. The controlled system 220 may be any type of processor-based system such as, but not limited to, a banking system, an access system, a monitoring system, a manufacturing system (e.g., an assembly line), an Advanced Driver Assistance System (ADAS), and the like.

The controlled system 220 provides data (e.g., multivariate time series data) to the learning-based prediction system 210, which the learning-based prediction system 210 uses to make predictions.

In an embodiment, to make the prediction, the learning-based prediction system 210 may use neuromorphic circuits as described herein.

The controlled system 220 may be controlled based on the predictions generated by the learning-based prediction system 210. For example, based on a prediction that a machine will fail in x time steps, a corresponding action may be performed at t < x (e.g., powering down the machine, enabling machine safety measures to prevent injury, etc.) in order to avoid the failure from actually occurring. As another example, based on a trajectory prediction for an intruder, a monitored system being controlled may lock or unlock one or more doors to secure the person in a given area (a holding area), direct them to a safe area (a security room), and/or confine them to a restricted area, and so forth. Verbal (from a speaker) or displayed (on a display device) instructions may be provided along with the locking and/or unlocking of doors (or other actions) in order to guide the person. As another example, a vehicle function (braking, steering, accelerating, etc.) may be controlled in response to a prediction in order to avoid an obstacle predicted to be in the roadway. As yet another example, the present invention may be incorporated into a computer system in order to predict an impending failure and take action before the failure occurs, such as switching a soon-to-fail component over to another component, routing through a different component, processing by a different component, and so forth. It should be understood that the foregoing actions are merely illustrative; other actions may be performed depending upon the embodiment, as will be readily apparent to one of ordinary skill in the art given the teachings of the present invention provided herein, while maintaining the spirit of the present invention.

In an embodiment, the learning-based prediction system 210 may be implemented as a node in a cloud computing arrangement. In embodiments, a single learning-based prediction system 210 may be assigned to a single controlled system or multiple controlled systems, e.g., different robots in an assembly line, etc. These and other configurations of the elements of environment 200 are readily determined by one of ordinary skill in the art, given the teachings of the present invention provided herein, while maintaining the spirit of the present invention.

FIG. 3 is a block diagram illustrating another exemplary environment 300 in which the present invention may be applied, according to an embodiment of the present invention.

The environment 300 includes a controlled system 320, which in turn includes a learning-based prediction system 310. One or more communication buses and/or other devices may be used to facilitate inter-system as well as intra-system communication. The controlled system 320 may be any type of processor-based system such as, but not limited to, a banking system, an access system, a monitoring system, a manufacturing system (e.g., an assembly line), an Advanced Driver Assistance System (ADAS), and the like.

The operation of these elements in environments 200 and 300 is similar, except that system 310 is included in system 320. Thus, for the sake of brevity, elements 310 and 320 are not described in further detail with respect to FIG. 3; the reader is instead referred to the description of elements 210 and 220 with respect to environment 200 of FIG. 2, given the common functionality of these elements in the two environments 200 and 300.

FIG. 4 is a diagram illustrating exemplary waveforms 400 at the proximal end 411 and the distal end 412 of a synapse array 410 in accordance with an embodiment of the invention.

The waveform 400 varies in shape at the distal end 412 as compared to the proximal end 411 of the array of synapses 410. The present invention addresses and overcomes this undesirable shape change.

Fig. 5 is a block diagram illustrating a neuromorphic-unit circuit 500 according to an embodiment of the present invention. Fig. 6 is a timing diagram illustrating a timing 600 associated with the circuit 500 of fig. 5. Fig. 7 is a timing diagram illustrating another timing 700 associated with circuit 500 of fig. 5.

Referring to fig. 5, circuit 500 includes two p-type Metal Oxide Semiconductor Field Effect Transistors (MOSFETs), i.e., T1 and T2.

Circuit 500 also includes two n-type MOSFETs, i.e., T3 and T4.

The circuit 500 also includes a CMOS transistor T6.

The circuit 500 additionally includes three capacitors, namely C1, C2, and C3. In one embodiment, capacitors C1 and C2 may be MOSFET parasitic capacitances. In another embodiment, the capacitors C1 and C2 may be "intentional" CMOS capacitors.

The on-resistance of T6, which is controlled by its gate voltage, is used to update the weight of the crossbar synapse array cell.

A set of row lines respectively connect the synaptic array cell in series to a plurality of presynaptic neurons at a first end thereof.

A set of column lines respectively connect the synaptic array cells in series to a plurality of post-synaptic neurons at their second ends.

The gate voltage of the CMOS transistor T6 is updated by performing a charge-sharing technique that updates the weight using non-overlapping pulses. In particular, the charge-sharing technique is performed row by row, whereby the gate voltage is linearly updated in an incremental or decremental manner using non-overlapping pulses that switch the vertical and horizontal control lines with different clock and capacitor combinations.
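The two update paths can be sketched as a simple charge-sharing model. This is a hypothetical illustration; the capacitor values and supply voltage below are assumptions, not values from the specification. In one non-overlapping phase, a small capacitor (C1 or C2) is precharged or discharged via the clock line; in the next phase it is connected to the gate-storage capacitor C3 via the update line, moving the stored gate voltage by a small step:

```python
# Hypothetical charge-sharing model of the T6 gate-voltage update.
# VDD, C1, C2, and C3 are illustrative assumed values.

VDD = 1.0        # supply rail used to precharge C1 (assumed)
C1 = C2 = 0.02   # small sharing capacitors (relative units, assumed)
C3 = 1.0         # gate-storage capacitor (relative units, assumed)

def increment(v_gate):
    """Clock phase: precharge C1 to VDD; update phase: share with C3.

    The merged node settles at the charge-weighted average, so the
    gate voltage rises by a small, nearly constant step.
    """
    return (C3 * v_gate + C1 * VDD) / (C3 + C1)

def decrement(v_gate):
    """Clock phase: discharge C2 to 0 V; update phase: share with C3."""
    return (C3 * v_gate) / (C3 + C2)

v = 0.5
for _ in range(3):
    v = increment(v)   # three non-overlapping pulse pairs
# v has risen by three nearly equal steps of about (C1 / C3) * (VDD - v)
```

Because each step depends only on the capacitor ratio and the stored voltage, not on the pulse shape reaching the cell, the step size is the same at the proximal and distal ends of the array.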

The gate voltage of T6 is updated by using two types of charge sharing. Fig. 6 shows one charge sharing (incrementing) of the gate voltage for the update T6, while fig. 7 shows another charge sharing (decrementing) of the gate voltage for the update T6.

Referring to FIG. 6, the gate voltage of T6 is updated by using C1 and C3 such that Wclk_i and Wud_i do not overlap, namely, the clock increment line (Wclk_i) of transistor T1 and the update increment line (Wud_i) of transistor T2.

Referring to FIG. 7, the gate voltage of T6 is updated by using C2 and C3 such that Wclk_d and Wud_d do not overlap, namely, the clock decrement line (Wclk_d) of transistor T4 and the update decrement line (Wud_d) of transistor T3.

Thus, a neuromorphic chip having one or more neuromorphic cells 500 performs the charge-sharing technique such that (i) the clock increment line (Wclk_i) and the update increment line (Wud_i) are switched using non-overlapping pulses so that the update is performed incrementally, and (ii) the clock decrement line (Wclk_d) and the update decrement line (Wud_d) are switched using non-overlapping pulses so that the update is performed decrementally.
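The non-overlap requirement between each clock line and its update line can be checked with a small waveform sketch. The period, pulse width, and dead time below are illustrative assumptions, not values from the specification:

```python
# Sketch of a non-overlapping pulse pair such as Wclk_i / Wud_i.
# Period, width, and dead time (gap) are assumed, illustrative values.

def pulse_train(period, width, offset, n_periods, t_step=0.1):
    """Sample a 0/1 waveform of `n_periods` pulses of length `width`,
    each starting at `offset` within its period."""
    n_samples = int(n_periods * period / t_step)
    samples = []
    for i in range(n_samples):
        phase = (i * t_step) % period
        samples.append(1 if offset <= phase < offset + width else 0)
    return samples

period, width, gap = 4.0, 1.5, 0.5
wclk = pulse_train(period, width, 0.0, 3)          # clock phase first
wud = pulse_train(period, width, width + gap, 3)   # update phase after a gap

# The two lines must never be high at the same time.
assert all(a + b <= 1 for a, b in zip(wclk, wud))
```

The dead time (`gap`) between the falling edge of the clock pulse and the rising edge of the update pulse is what guarantees the two charge-sharing phases never conduct simultaneously.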

FIG. 8 is a block diagram illustrating a neuromorphic array 800 formed by the plurality of neuromorphic unit circuits 500 of FIG. 5, according to an embodiment of the present invention. FIG. 9 is a timing diagram illustrating pulses 900 applied to the array 800 of FIG. 8. The array 800 is formed of a plurality of the neuromorphic cells of FIG. 5, with the charge-sharing technique applied such that weight updates use a row-by-row access scheme.

FIG. 10 is a flow chart illustrating a method 1000 for a linearly weight-updateable CMOS synapse array without cell-location dependency, in accordance with an embodiment of the invention.

At block 1010, a crossbar array cell is formed that includes CMOS transistors having on-resistances controlled by gate voltages of the CMOS transistors to update weights of the crossbar array cell.

At block 1020, a set of row lines are formed that respectively connect the synaptic array cells in series to a plurality of pre-synaptic neurons at a first end thereof.

At block 1030, a set of column lines is formed, each connecting a synapse array cell in series to a plurality of post-synaptic neurons at a second end thereof.

At block 1040, the gate voltage of the CMOS transistor is controlled by performing a charge sharing technique that updates the weights of the cross-switch synapse array cells using non-overlapping pulses on cell control lines aligned with the set of row lines and the set of column lines.

In one embodiment, block 1040 includes one or more of blocks 1040A and 1040B.

At block 1040A, a charge sharing technique is performed such that the update increment line and the clock increment line are switched using non-overlapping pulses to update in increments.

At block 1040B, the charge-sharing technique is performed such that the update decrement line and the clock decrement line are switched using non-overlapping pulses to update decrementally.
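Blocks 1040A and 1040B applied over a whole array can be sketched as a row-by-row loop. This is a hypothetical illustration; the array size, stored-voltage representation, capacitor ratio, and update pattern are all assumptions:

```python
# Row-by-row weight update over a small crossbar, per blocks 1040A/1040B.
# VDD and the capacitor ratio are assumed, illustrative values.

VDD, C_SMALL, C_STORE = 1.0, 0.02, 1.0

def share(v_gate, v_source):
    """One charge-sharing pulse pair: merge the storage cap with a
    small cap held at `v_source` (VDD to increment, 0 V to decrement)."""
    return (C_STORE * v_gate + C_SMALL * v_source) / (C_STORE + C_SMALL)

ROWS, COLS = 4, 4
gate_v = [[0.5] * COLS for _ in range(ROWS)]   # stored T6 gate voltages

deltas = [+1, -1, 0, +1]                       # per-column update signs (assumed)
for r in range(ROWS):                          # one row is accessed at a time
    for c, d in enumerate(deltas):
        if d > 0:
            gate_v[r][c] = share(gate_v[r][c], VDD)   # increment pulse pair
        elif d < 0:
            gate_v[r][c] = share(gate_v[r][c], 0.0)   # decrement pulse pair
```

Only one row's control lines are pulsed per step, which matches the row-by-row access scheme described for the array of FIG. 8.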

It should be appreciated that any known fabrication technique may be used to form the neuromorphic circuit and/or chip in accordance with the teachings of the present invention. Therefore, for the sake of brevity, no further description is provided herein.

It should be understood that the present invention may be included as part of a predictive system. The prediction system may in turn be part of another system (e.g., ADAS). Moreover, at least a portion of the prediction system may be implemented using a cloud configuration, as described in more detail below.

11-40 are graphs showing exemplary experimental results obtained for the case involving 128 unit loads (hereinafter interchangeably referred to as "case 1"). Some graphs show experimental results of the prior art (case 1-PA), while other graphs show experimental results according to the present invention (case 1-PI). The experimental results of the prior art relate to the use of memory cells formed from 3T1C cells.

In particular, fig. 11-25 relate to increasing the gate voltage of transistor T6, and fig. 26-40 relate to decreasing the gate voltage of transistor T6, all with respect to 128 cell loads. The various pulse widths and currents are as follows.

The key point to note here is that, with shorter pulses, the difference in weight-update characteristics between the proximal and distal ends is significant. In the prior art, this can be mitigated by using wider pulses; however, the PA case requires a pulse width of at least about 25.6 ns to minimize the difference, as shown in FIGS. 21-22. In contrast, with the present invention this can be achieved using a pulse width of 3.2 ns, as shown in FIGS. 23-25. The invention thus enables faster operation than the prior-art (PA) case, and the improvement is more pronounced in larger arrays. FIGS. 26-40 show similar results for the decrementing case.

FIGS. 11-12 (graphs 1100 and 1200, respectively) show case 1-PA using a pulse with a width of 0.8 ns at 500 nA.

FIGS. 13-14 (graphs 1300 and 1400, respectively) show case 1-PA using a pulse with a width of 1.6 ns at 250 nA.

FIGS. 15-16 (graphs 1500 and 1600, respectively) show case 1-PA using a pulse with a width of 3.2 ns at 125 nA.

FIGS. 17-18 (graphs 1700 and 1800, respectively) show case 1-PA using a pulse with a width of 6.4 ns at 62.5 nA.

FIGS. 19-20 (graphs 1900 and 2000, respectively) show case 1-PA using a pulse with a width of 12.8 ns at 31.25 nA.

FIGS. 21-22 (graphs 2100 and 2200, respectively) show case 1-PA using a pulse with a width of 25.6 ns at 15.625 nA.

FIGS. 23-25 (graphs 2300, 2400, and 2500, respectively) show case 1-PI using a pulse with a width of 3.7 ns.

FIGS. 26-27 (graphs 2600 and 2700, respectively) show case 1-PA using a pulse with a width of 0.8 ns at 500 nA.

FIGS. 28-29 (graphs 2800 and 2900, respectively) show case 1-PA using a pulse with a width of 1.6 ns at 250 nA.

FIGS. 30-31 (graphs 3000 and 3100, respectively) show case 1-PA using a pulse with a width of 3.2 ns at 125 nA.

FIGS. 32-33 (graphs 3200 and 3300, respectively) show case 1-PA using a pulse with a width of 6.4 ns at 62.5 nA.

FIGS. 34-35 (graphs 3400 and 3500, respectively) show case 1-PA using a pulse with a width of 12.8 ns at 31.25 nA.

FIGS. 36-37 (graphs 3600 and 3700, respectively) show case 1-PA using a pulse with a width of 25.6 ns at 15.625 nA.

FIGS. 38-40 (graphs 3800, 3900, and 4000, respectively) show case 1-PI using a pulse with a width of 5.8 ns.

It should be understood that although this disclosure includes a detailed description of cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.

Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.

Characteristics are as follows:

On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.

Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and Personal Digital Assistants (PDAs)).

Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).

Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service models are as follows:

Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment models are as follows:

Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.

Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.

Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Cloud computing environments are service-oriented with features focused on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that contains a network of interconnected nodes.

Referring now to FIG. 41, an illustrative cloud computing environment 4150 is depicted. As shown, cloud computing environment 4150 includes one or more cloud computing nodes 4110 with which local computing devices used by cloud consumers, such as, for example, a Personal Digital Assistant (PDA) or cellular telephone 4154A, a desktop computer 4154B, a laptop computer 4154C, and/or an automobile computer system 4154N, may communicate. Nodes 4110 may communicate with one another. They may be grouped physically or virtually (not shown) in one or more networks, such as private, community, public, or hybrid clouds as described above, or a combination thereof. This allows cloud computing environment 4150 to offer infrastructure as a service (IaaS), platform as a service (PaaS), and/or software as a service (SaaS) for which a cloud consumer does not need to maintain resources on a local computing device. It should be appreciated that the types of computing devices 4154A-N shown in FIG. 41 are intended to be illustrative only and that cloud computing nodes 4110 and cloud computing environment 4150 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser).

Referring now to FIG. 42, a set of functional abstraction layers provided by cloud computing environment 4150 (FIG. 41) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 42 are intended to be illustrative only, and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided:

The hardware and software layer 4260 includes hardware and software components. Examples of hardware components include: mainframes 4261; RISC (Reduced Instruction Set Computer) architecture based servers 4262; servers 4263; blade servers 4264; and storage devices 4265. In some embodiments, software components include network application server software 4267 and database software 4268.

The virtualization layer 4270 provides an abstraction layer from which the following examples of virtual entities may be provided: a virtual server 4271; virtual storage 4272; virtual network 4273 (including virtual private network); virtual applications and operating system 4274; and virtual client 4275.

In one example, management layer 4280 may provide the functions described below. Resource provisioning 4281 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing 4282 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 4283 provides access to the cloud computing environment for consumers and system administrators. Service level management 4284 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 4285 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.

Workloads layer 4290 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 4291; software development and lifecycle management 4292; virtual classroom education delivery 4293; data analytics processing 4294; transaction processing 4295; and cell location independent linear weight updateable CMOS synapse array 4296.

The present invention may be a system, method and/or computer program product in any combination of possible technical details. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.

The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber optic cable), or electrical signals transmitted through a wire.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.

Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, Field Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Reference in the specification to "one embodiment" or "an embodiment" of the present invention, and other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.

It will be appreciated that the use of any of the following "/", "and/or", and "at least one of", for example, in the cases of "A/B", "A and/or B" and "at least one of A and B", is intended to encompass the selection of only the first listed option (A), the selection of only the second listed option (B), or the selection of both options (A and B). As a further example, in the cases of "A, B, and/or C" and "at least one of A, B, and C", such phrasing is intended to encompass the selection of only the first listed option (A), only the second listed option (B), only the third listed option (C), only the first and second listed options (A and B), only the first and third listed options (A and C), only the second and third listed options (B and C), or all three options (A and B and C). This may be extended for as many items as are listed, as will be readily apparent to those of ordinary skill in the art.

Having described preferred embodiments for systems and methods (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the invention as outlined by the appended claims. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by letters patent is set forth in the appended claims.
