Solid-state imaging device and distance measuring device


Abstract: According to the present invention, ranging accuracy is improved. A solid-state image pickup device according to an embodiment includes a pixel array section (101) in which a plurality of pixels (20-1) are arranged in a matrix, wherein each of the pixels includes: a plurality of photoelectric conversion units (211, 212) that photoelectrically convert incident light to generate electric charges; a floating diffusion region (27) in which the charges are accumulated; a plurality of transfer circuits (23, 24, 25) that transfer the charges generated in each of the plurality of photoelectric conversion units to the floating diffusion region; and a first transistor (28) that causes a pixel signal having a voltage value corresponding to the charge amount of the charges accumulated in the floating diffusion region to appear on a signal line.

1. A solid-state image pickup device, comprising:

a pixel array section in which a plurality of pixels are arranged in a matrix, wherein,

each of the pixels includes:

a plurality of photoelectric conversion units each of which photoelectrically converts incident light to generate electric charges;

a floating diffusion region that accumulates charge;

a plurality of transfer circuits that transfer charges generated in each of the plurality of photoelectric conversion units to the floating diffusion region; and

a first transistor that causes a pixel signal having a voltage value corresponding to the amount of the charges accumulated in the floating diffusion region to appear on a signal line.

2. The solid-state image pickup device according to claim 1,

each of the plurality of pixels is arranged in a pixel region individually allocated on a first surface of a semiconductor substrate,

the plurality of transfer circuits includes:

a plurality of first transfer circuits arranged in point symmetry with respect to a center of the pixel region or in line symmetry with a straight line passing through the center as an axis; and

a plurality of second transfer circuits arranged in point symmetry with respect to the center or in line symmetry with the straight line as an axis; and

each of the photoelectric conversion units is provided in one-to-one correspondence with a combination of a first transfer circuit and a second transfer circuit arranged in a predetermined direction in the matrix arrangement.

3. The solid-state image pickup device according to claim 2, wherein each of the transfer circuits includes a second transistor having a vertical structure reaching the photoelectric conversion unit arranged in the semiconductor substrate from the first surface of the semiconductor substrate.

4. The solid-state image pickup device according to claim 2, further comprising:

a drive unit configured to drive the transfer of the electric charges by the plurality of transfer circuits, wherein

the drive unit drives the first transfer circuit and the second transfer circuit so that the transfer timing of the electric charge via the first transfer circuit is different from the transfer timing of the electric charge via the second transfer circuit.

5. The solid-state image pickup device according to claim 4,

the drive unit

inputs, to the first transfer circuit, a first drive pulse having a first phase angle with respect to a pulse of a predetermined period and having the predetermined period, and

inputs, to the second transfer circuit, a second drive pulse shifted in phase by 180° from the first drive pulse.

6. The solid-state image pickup device according to claim 5,

the drive unit

drives the plurality of first transfer circuits with the same phase, and

drives the plurality of second transfer circuits with the same phase.

7. The solid-state image pickup device according to claim 6, wherein the plurality of first transfer circuits and the plurality of second transfer circuits are arranged in point symmetry with respect to the center or in line symmetry with the straight line as an axis.

8. The solid-state image pickup device according to claim 6,

the plurality of transfer circuits further include a plurality of third transfer circuits and a plurality of fourth transfer circuits, and

the drive unit

inputs a third drive pulse shifted in phase by 90° with respect to the first drive pulse to each of the plurality of third transfer circuits, driving them all with the same phase, and

inputs a fourth drive pulse shifted in phase by 180° with respect to the third drive pulse to each of the plurality of fourth transfer circuits, driving them all with the same phase.

9. The solid-state image pickup device according to claim 8,

the first drive pulse has the first phase angle of 0° with respect to the pulse of the predetermined period,

the second drive pulse has a second phase angle of 180° with respect to the pulse of the predetermined period,

the third drive pulse has a third phase angle of 90° with respect to the pulse of the predetermined period, and

the fourth drive pulse has a fourth phase angle of 270° with respect to the pulse of the predetermined period.

10. The solid-state image pickup device according to claim 8, wherein the plurality of first transfer circuits, the plurality of second transfer circuits, the plurality of third transfer circuits, and the plurality of fourth transfer circuits are arranged in point symmetry with respect to the center or in line symmetry with the straight line as an axis.

11. The solid-state image pickup device according to claim 8,

each of the transfer circuits includes a memory that holds the electric charge generated in the photoelectric conversion unit, and

the drive unit

inputs a first drive pulse having a phase angle of 0° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of first transfer circuits to accumulate the electric charges in the memory of each of the plurality of first transfer circuits,

inputs a second drive pulse having a phase angle of 180° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of second transfer circuits to accumulate the electric charges in the memory of each of the plurality of second transfer circuits,

inputs a third drive pulse having a phase angle of 90° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of third transfer circuits to accumulate the electric charges in the memory of each of the plurality of third transfer circuits, and

inputs a fourth drive pulse having a phase angle of 270° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of fourth transfer circuits to accumulate the electric charges in the memory of each of the plurality of fourth transfer circuits.

12. The solid-state image pickup device according to claim 8, further comprising a signal processing unit that generates distance information based on a ratio of a difference between the electric charge transferred via the first transfer circuit and the electric charge transferred via the second transfer circuit to a difference between the electric charge transferred via the third transfer circuit and the electric charge transferred via the fourth transfer circuit.

13. The solid-state image pickup device according to claim 2, wherein each of the pixels further includes a third transistor that discharges the electric charge generated in the photoelectric conversion unit.

14. The solid-state image pickup device according to claim 13, wherein the third transistor has a vertical structure reaching the photoelectric conversion unit arranged in the semiconductor substrate from the first surface of the semiconductor substrate.

15. The solid-state image pickup device according to claim 5,

the drive unit divides the charges generated in the respective photoelectric conversion units into a plurality of accumulation periods and transfers the divided charges to the floating diffusion region, and

the drive unit inverts the phase of each of the first drive pulse and the second drive pulse for each of the accumulation periods.

16. The solid-state image pickup device according to claim 15,

each of the pixels further includes a third transistor that discharges electric charge generated in the photoelectric conversion unit,

the drive unit sets, within the accumulation period, a non-accumulation period in which the electric charges generated in the respective photoelectric conversion units are not transferred to the floating diffusion region, and

the drive unit discharges the electric charge generated in the photoelectric conversion unit via the third transistor during the non-accumulation period.

17. The solid-state image pickup device according to claim 2, further comprising a pixel separation section that is provided along a boundary portion of the pixel region and optically separates the adjacent pixels from each other.

18. The solid-state image pickup device according to claim 17, further comprising an element separation portion that is provided in at least a part between the plurality of photoelectric conversion units in the pixel region and optically separates the adjacent photoelectric conversion units from each other.

19. The solid-state image pickup device according to claim 1, wherein a periodic concave-convex structure is provided on a light receiving surface of each of the photoelectric conversion units.

20. A ranging device, comprising:

a light receiving unit including a pixel array section in which a plurality of pixels are arranged in a matrix; and

a light emitting unit that emits pulse-like irradiation light of a predetermined period, wherein,

each of the pixels includes:

a plurality of photoelectric conversion units each of which photoelectrically converts incident light to generate electric charges;

a floating diffusion region that accumulates charge;

a plurality of transfer circuits that transfer charges generated in each of the plurality of photoelectric conversion units to the floating diffusion region; and

a first transistor that causes a pixel signal having a voltage value corresponding to the amount of the charges accumulated in the floating diffusion region to appear on a signal line.

Technical Field

The present disclosure relates to a solid-state image pickup device and a distance measuring device.

Background

Conventionally, a ranging sensor using an indirect ToF (time of flight) method (hereinafter referred to as an indirect ToF sensor) is known. An indirect ToF sensor measures the distance to an object based on the signal charge obtained by emitting light from a light source and receiving the reflected light at specific phases.

Reference list

Patent document

Patent document 1: JP 2019-4149A

Disclosure of Invention

Technical problem

As a pixel architecture of the indirect ToF sensor, a 2-tap type pixel architecture in which one pixel has two memories is common. In the 2-tap type pixel architecture, a distance image representing the distance to an object is generated based on the ratio of the charges accumulated in the two memories of each pixel.

Here, there is generally a difference in the characteristics of the two memories included in each pixel. This characteristic difference causes an individual difference in the amount of charge accumulated in the memories of each pixel, which lowers the ranging accuracy of the indirect ToF sensor.

Accordingly, the present disclosure proposes a solid-state image pickup device and a distance measuring device capable of improving ranging accuracy.

Drawings

Fig. 1 is a block diagram showing a schematic configuration example of a ToF sensor as a distance measuring device according to a first embodiment.

Fig. 2 is a block diagram showing a schematic configuration example of a solid-state image pickup device serving as a light receiving unit according to the first embodiment.

Fig. 3 is a circuit diagram showing an example of a circuit configuration of a unit pixel as a base of the unit pixel according to the first embodiment.

Fig. 4 is a plan view showing a layout example of the unit pixel illustrated in fig. 3.

Fig. 5 is a diagram (part 1) for explaining an outline of a ranging method based on the indirect ToF method.

Fig. 6 is a diagram (part 2) for explaining an outline of a ranging method based on the indirect ToF method.

Fig. 7 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a first configuration example of the first embodiment.

Fig. 8 is a plan view showing an example of a planar layout of a unit pixel according to a first configuration example of the first embodiment.

Fig. 9 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a second configuration example of the first embodiment.

Fig. 10 is a plan view showing an example of a planar layout of a unit pixel according to a second configuration example of the first embodiment.

Fig. 11 is a plan view showing an example of a planar layout of a unit pixel according to a third configuration example of the first embodiment.

Fig. 12 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a fourth configuration example of the first embodiment.

Fig. 13 is a plan view showing an example of a planar layout of a unit pixel according to a fourth configuration example of the first embodiment.

Fig. 14 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a fifth configuration example of the first embodiment.

Fig. 15 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a sixth configuration example of the first embodiment.

Fig. 16 is a circuit diagram showing a circuit configuration example of a unit pixel according to a seventh configuration example of the first embodiment.

Fig. 17 is a plan view showing a planar layout example of a pixel separating section according to a first layout example of the first embodiment.

Fig. 18 is a plan view showing a planar layout example of a pixel separating section according to a second layout example of the first embodiment.

Fig. 19 is a plan view showing a planar layout example of a pixel separating section according to a third layout example of the first embodiment.

Fig. 20 is a cross-sectional view taken along line I-I showing a cross-sectional structure example of a unit pixel according to a first cross-sectional structure example of the first embodiment.

Fig. 21 is a cross-sectional view taken along line II-II showing a cross-sectional structure example of a unit pixel according to the first cross-sectional structure example of the first embodiment.

Fig. 22 is a cross-sectional view taken along line I-I showing a cross-sectional structure example of a unit pixel according to a second cross-sectional structure example of the first embodiment.

Fig. 23 is a cross-sectional view taken along line II-II showing a cross-sectional structure example of a unit pixel according to the second cross-sectional structure example of the first embodiment.

Fig. 24 is a cross-sectional view taken along line I-I showing a cross-sectional structure example of a unit pixel according to a third cross-sectional structure example of the first embodiment.

Fig. 25 is a cross-sectional view taken along line II-II showing a cross-sectional structure example of a unit pixel according to the third cross-sectional structure example of the first embodiment.

Fig. 26 is a cross-sectional view taken along line I-I showing a cross-sectional structure example of a unit pixel according to a fourth cross-sectional structure example of the first embodiment.

Fig. 27 is a cross-sectional view taken along line II-II showing a cross-sectional structure example of a unit pixel according to the fourth cross-sectional structure example of the first embodiment.

Fig. 28 is a schematic diagram showing a plan layout example of a memory according to a first variation of the first embodiment.

Fig. 29 is a schematic diagram showing a plan layout example of a memory according to a second variation of the first embodiment.

Fig. 30 is a schematic diagram showing a plan layout example of a memory according to a third variation of the first embodiment.

Fig. 31 is a schematic diagram showing a plan layout example of a memory according to a fourth variation of the first embodiment.

Fig. 32 is a schematic diagram showing a plan layout example of a memory according to a fifth variation of the first embodiment.

Fig. 33 is a schematic diagram showing a plan layout example of a memory according to a sixth variation of the first embodiment.

Fig. 34 is a schematic diagram showing a plan layout example of a memory according to a seventh variation of the first embodiment.

Fig. 35 is a schematic diagram showing a plan layout example of a memory according to an eighth variation of the first embodiment.

Fig. 36 is a schematic diagram showing a plan layout example of a memory according to a ninth variation of the first embodiment.

Fig. 37 is a schematic diagram showing a plan layout example of a memory according to a tenth variation of the first embodiment.

Fig. 38 is a schematic diagram showing a plan layout example of a memory according to an eleventh variation of the first embodiment.

Fig. 39 is a schematic diagram showing a plan layout example of a memory according to a twelfth variation of the first embodiment.

Fig. 40 is a schematic diagram showing a plan layout example of a memory according to a thirteenth variation of the first embodiment.

Fig. 41 is a schematic diagram showing a plan layout example of a memory according to a fourteenth variation of the first embodiment.

Fig. 42 is a schematic diagram showing a plan layout example of a memory according to a fifteenth variation of the first embodiment.

Fig. 43 is a schematic diagram showing a plan layout example of a memory according to a sixteenth variation of the first embodiment.

Fig. 44 is a diagram for explaining a difference in the amount of accumulated charge between memories generated in the comparative example.

Fig. 45 is a diagram for explaining the effect of eliminating the characteristic difference of the respective memories according to the first embodiment.

Fig. 46 is a timing chart showing a readout operation of a depth frame in the case of using a unit pixel having no FD sharing structure according to the first embodiment.

Fig. 47 is a timing chart showing a readout operation of a depth frame in the case of using a unit pixel having an FD sharing structure (for example, unit pixels according to the first to third configuration examples described above) according to the first embodiment.

Fig. 48 is a timing chart showing a readout operation of a depth frame in the case of using a unit pixel having an FD sharing structure (for example, a unit pixel according to the fourth configuration example described above) according to the first embodiment.

Fig. 49 is a waveform diagram for explaining an example of the first drive pulse of the first embodiment.

Fig. 50 is a schematic diagram showing an example of the connection relationship according to a modification of the first embodiment.

Fig. 51 is a schematic diagram showing another example of the connection relationship according to a modification of the first embodiment.

Fig. 52 is a schematic diagram showing still another example of the connection relationship according to a modification of the first embodiment.

Fig. 53 is a schematic diagram showing still another example of the connection relationship according to a modification of the first embodiment.

Fig. 54 is a schematic diagram showing still another example of the connection relationship according to a modification of the first embodiment.

Fig. 55 is a schematic diagram showing still another example of the connection relationship according to a modification of the first embodiment.

Fig. 56 is a waveform diagram for explaining an example of the second drive pulse of the first embodiment.

Fig. 57 is a diagram for explaining noise generated by background light as interference light.

Fig. 58 is a diagram for explaining a case where reflected light (interference light) from another ToF sensor is incident in the non-accumulation period.

Fig. 59 is a diagram for explaining a case where reflected light (interference light) from another ToF sensor is incident in the accumulation period.

Fig. 60 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency of interference light from another ToF sensor is different from the modulation frequency of its own irradiation light.

Fig. 61 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency of interference light from another ToF sensor is the same as the modulation frequency of its own irradiation light.

Fig. 62 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency and phase of interference light from another ToF sensor are the same as those of its own irradiation light.

Fig. 63 is a waveform diagram showing a case where the ToF sensor and the object contact each other.

Fig. 64 is a waveform diagram showing a case where the ToF sensor and the object are separated from each other.

Fig. 65 is a waveform diagram (2-tap type) for explaining an example of the noise canceling operation at the time of phase switching according to the first embodiment.

Fig. 66 is a waveform diagram for explaining an example of a noise canceling operation at the time of phase switching according to a modification of the first embodiment.

Fig. 67 is a waveform diagram (multi-tap type) for explaining an example of the noise canceling operation at the time of phase switching according to the first embodiment.

Fig. 68 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a first configuration example of the second embodiment.

Fig. 69 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a second configuration example of the second embodiment.

Fig. 70 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a third configuration example of the second embodiment.

Fig. 71 is a diagram showing an outline of a configuration example of a non-stacked solid-state imaging device to which the technique according to the present disclosure can be applied.

Fig. 72 is a diagram (part 1) showing an outline of a configuration example of a stacked solid-state imaging device to which the technique according to the present disclosure can be applied.

Fig. 73 is a diagram (part 2) showing an outline of a configuration example of a stacked solid-state imaging device to which the technique according to the present disclosure can be applied.

Fig. 74 is a schematic diagram (front) showing an example of an electronic device to which the technique according to the present disclosure can be applied.

Fig. 75 is a schematic diagram (rear) showing an example of an electronic device to which the technique according to the present disclosure can be applied.

Fig. 76 is a schematic diagram showing a case where the technique according to the present disclosure can be applied.

Fig. 77 is a block diagram showing an example of a schematic configuration of the vehicle control system.

Fig. 78 is an explanatory diagram showing an example of the mounting positions of the vehicle exterior information detection unit and the imaging unit.

Detailed Description

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the following embodiments, the same portions are denoted by the same reference numerals, and thus, duplicate description will be omitted.

In addition, the present disclosure will be explained in the following sequence of items.

1. First embodiment

1.1 Distance measuring device (ToF sensor)

1.2 Configuration example of solid-state imaging device

1.3 Basic configuration example of unit pixel

1.4 Basic layout example of unit pixel

1.5 Overview of the indirect ToF method

1.6 Configuration examples of the unit pixel

1.6.1 First configuration example

1.6.2 Second configuration example

1.6.3 Third configuration example

1.6.4 Fourth configuration example

1.6.5 Fifth configuration example

1.6.6 Sixth configuration example

1.6.7 Seventh configuration example

1.7 Pixel separation structure

1.7.1 First layout example

1.7.2 Second layout example

1.7.3 Third layout example

1.8 Cross-sectional structure examples of the unit pixel

1.8.1 First cross-sectional structure example

1.8.2 Second cross-sectional structure example

1.8.3 Third cross-sectional structure example

1.8.4 Fourth cross-sectional structure example

1.9 FD shared layout

1.9.1 First variation

1.9.2 Second variation

1.9.3 Third variation

1.9.4 Fourth variation

1.9.5 Fifth variation

1.9.6 Sixth variation

1.9.7 Seventh variation

1.9.8 Eighth variation

1.9.9 Ninth variation

1.9.10 Tenth variation

1.9.11 Eleventh variation

1.9.12 Twelfth variation

1.9.13 Thirteenth variation

1.9.14 Fourteenth variation

1.9.15 Fifteenth variation

1.9.16 Sixteenth variation

1.10 Elimination of characteristic differences

1.11 Readout operation example of a range image (depth frame)

1.12 Drive pulse examples

1.12.1 First drive pulse example

1.12.1.1 Modification

1.12.2 Second drive pulse example

1.13 Encoding of accumulation periods

1.13.1 Noise caused by interference

1.13.1.1 Interference caused by background light

1.13.1.2 Interference from another ToF sensor

1.13.1.2.1 Case where reflected light from another ToF sensor is incident in the non-accumulation period

1.13.1.2.2 Case where reflected light from another ToF sensor is incident in the accumulation period

1.13.2 Eliminating noise caused by interference

1.13.2.1 Example of eliminating noise by encoding accumulation periods

1.13.2.1.1 Case where the modulation frequency of the interference light from another ToF sensor is different from that of its own irradiation light

1.13.2.1.2 Case where the modulation frequency of the interference light from another ToF sensor is the same as that of its own irradiation light

1.13.2.1.3 Case where the modulation frequency and phase of the interference light from another ToF sensor are the same as those of its own irradiation light

1.13.3 Noise generated during phase switching

1.13.3.1 Example of the noise canceling operation at phase switching (2-tap type)

1.13.3.2 Modified example of the noise canceling operation at phase switching

1.13.3.3 Modified example of the noise canceling operation at phase switching (multi-tap type with 3 or more taps)

1.14 Actions and effects

2. Second embodiment

2.1 First configuration example

2.2 Second configuration example

2.3 Third configuration example

3. Configuration example of stacked solid-state imaging device to which the technology according to the present disclosure can be applied

4. Examples of electronic devices to which the technology according to the present disclosure can be applied

5. Various application examples

6. Application examples to mobile bodies

1. First embodiment

First, the first embodiment will be described in detail below with reference to the accompanying drawings. Note that in the first embodiment, for example, a solid-state image pickup device and a distance measuring device that measure a distance to an object using an indirect ToF method will be exemplified.

The solid-state image pickup device and the distance measuring device according to the present embodiment and the following exemplary embodiments can be applied to, for example, the following apparatuses: an on-vehicle system that is mounted on a vehicle and that measures a distance to an object outside the vehicle; a gesture recognition system that measures a distance to an object (e.g., a hand of a user, etc.) and recognizes a gesture of the user based on the measurement result, and the like. In this case, the result of the gesture recognition can also be used for the operation of a car navigation system, for example.

1.1 Distance measuring device (ToF sensor)

Fig. 1 is a block diagram showing a schematic configuration example of a ToF sensor as a distance measuring device according to a first embodiment. As shown in fig. 1, the ToF sensor 1 includes a control unit 11, a light emitting unit 13, a light receiving unit 14, a calculation unit 15, and an external interface (I/F) 19.

The control unit 11 includes, for example, an information processing apparatus such as a Central Processing Unit (CPU), and controls each unit of the ToF sensor 1.

The external I/F 19 may be, for example, a communication adapter for establishing communication with the external host 80 via a communication network conforming to any standard, such as a wireless LAN (local area network) or wired LAN, CAN (controller area network), LIN (local interconnect network), FlexRay (registered trademark), MIPI (mobile industry processor interface), or LVDS (low voltage differential signaling).

Here, for example, when the ToF sensor 1 is mounted on an automobile or the like, the host 80 may be an Engine Control Unit (ECU) mounted on the automobile or the like. Further, in the case where the ToF sensor 1 is mounted on an autonomous mobile robot (e.g., a domestic pet robot) or an autonomous moving body (e.g., a cleaning robot, an unmanned aerial vehicle, or a following transport robot), the host 80 may be a control device or the like that controls the autonomous moving body. Further, in the case where the ToF sensor 1 is mounted on an electronic device such as a mobile phone, a smartphone, or a tablet terminal, the host 80 may be a CPU included in the electronic device, a server connected to the electronic device via a network (including a cloud server or the like), or the like.

The light emitting unit 13 includes, for example, one or more semiconductor laser diodes as a light source, and emits pulsed laser light (hereinafter referred to as irradiation light) L1 having a predetermined time width at a predetermined cycle (also referred to as a light emission cycle). The light emitting unit 13 emits the irradiation light L1 toward an angular range at least equal to or larger than the angle of view of the light receiving unit 14. In addition, the light emitting unit 13 emits, for example, the irradiation light L1 having a time width of several ns (nanoseconds) to 5 ns at a frequency of 100 MHz (megahertz). When the object 90 is present within the ranging range, the irradiation light L1 emitted from the light emitting unit 13 is reflected by the object 90 and is incident on the light receiving unit 14 as reflected light L2.

Although details will be described later, the light receiving unit 14 includes, for example, a plurality of pixels arranged in a two-dimensional lattice pattern, and the light receiving unit 14 outputs a signal intensity (hereinafter also referred to as a pixel signal) detected in each pixel after the light emitting unit 13 emits light.

The calculation unit 15 generates a depth image within the angle of view of the light receiving unit 14 based on the pixel signals output from the light receiving unit 14. At this time, the calculation unit 15 may perform predetermined processing such as noise removal on the generated depth image. The depth image generated by the calculation unit 15 can be output to the host 80 or the like via the external I/F 19, for example.

1.2 Configuration example of solid-state imaging device

Fig. 2 is a block diagram showing a schematic configuration example of a solid-state image pickup device as a light receiving unit according to the first embodiment.

The solid-state image pickup device 100 shown in fig. 2 is a back-illuminated indirect ToF sensor, and is provided in a distance measuring device having a distance measuring function.

The solid-state image pickup device 100 includes a pixel array section 101 and a peripheral circuit. The peripheral circuits may include, for example, a vertical drive circuit 103, a column processing circuit 104, a horizontal drive circuit 105, and a system control unit 102.

The solid-state image pickup device 100 further includes a signal processing unit 106 and a data storage unit 107. Note that the signal processing unit 106 and the data storage unit 107 may be mounted on the same substrate as the solid-state image pickup device 100, or may be arranged on a different substrate from the solid-state image pickup device 100 in the distance measuring device.

The pixel array section 101 has a configuration in which pixels 20 (hereinafter also referred to as unit pixels), each of which generates charge corresponding to the amount of received light and outputs a signal corresponding to that charge, are arranged in the row direction and the column direction, that is, in a matrix (also referred to as a two-dimensional lattice).

Here, the row direction refers to an arrangement direction (lateral direction in the drawing) of the unit pixels 20 in the pixel row, and the column direction refers to an arrangement direction (longitudinal direction in the drawing) of the unit pixels 20 in the pixel column.

In the pixel array section 101, for the matrix-shaped pixel array, a pixel drive line LD is arranged along the row direction for each pixel row, and two vertical signal lines VSL are arranged along the column direction for each pixel column. The pixel drive line LD transmits a drive signal for driving the pixels when a signal is read out from the unit pixels 20. Note that, although the pixel drive line LD is illustrated as one wiring in fig. 2, the number of wirings is not limited to one. One end of the pixel drive line LD is connected to the output terminal of the vertical drive circuit 103 corresponding to each row.

The vertical drive circuit 103 includes a shift register, an address decoder, and the like, and drives the unit pixels 20 of the pixel array section 101 either simultaneously for all pixels or row by row. That is, together with the system control unit 102 that controls it, the vertical drive circuit 103 constitutes a drive unit that controls the operation of each unit pixel 20 of the pixel array section 101.

Note that, in ranging employing the indirect ToF method, the number of elements to be driven at high speed that are connected to one pixel drive line LD affects the controllability and accuracy of the high-speed driving. Here, in most cases, the pixel array section of a solid-state image pickup device used for ranging by the indirect ToF method is a rectangular region that is long in the row direction. Therefore, in such a case, a vertical signal line VSL or another control line extending in the column direction may be used as the pixel drive line LD for the elements to be driven at high speed. With such a configuration, for example, a plurality of unit pixels 20 arranged in the column direction are connected to a vertical signal line VSL or another control line extending in the column direction, and the unit pixels 20, that is, the solid-state image pickup device 100, are driven through the vertical signal lines VSL or the other control lines by the drive unit, the horizontal drive circuit 105, or the like provided separately from the vertical drive circuit 103.

Signals output from the respective unit pixels 20 of the pixel row in accordance with drive control of the vertical drive circuit 103 are input to the column processing circuit 104 through the vertical signal line VSL. The column processing circuit 104 performs predetermined signal processing on the signal output from each unit pixel 20 through the vertical signal line VSL, and temporarily holds the pixel signal after the signal processing.

Specifically, the column processing circuit 104 performs noise removal processing, analog-to-digital (AD) conversion processing, and the like as signal processing.

The horizontal drive circuit 105 includes a shift register, an address decoder, and the like, and sequentially selects unit circuits of the column processing circuit 104 corresponding to pixel columns. The pixel signals subjected to signal processing for each unit circuit in the column processing circuit 104 are sequentially output by selective scanning by the horizontal drive circuit 105.

The system control unit 102 includes a timing generator for generating various timing signals and the like, and performs drive control of the vertical drive circuit 103, the column processing circuit 104, the horizontal drive circuit 105, and the like based on the various timing signals generated by the timing generator.

The signal processing unit 106 has at least an arithmetic processing function, performs various types of signal processing such as arithmetic processing based on the pixel signal output from the column processing circuit 104, and outputs the distance information of each pixel thus calculated to the outside. The data storage unit 107 temporarily stores data necessary for signal processing in the signal processing unit 106.

1.3 Basic configuration example of unit pixel

Here, a basic configuration example of the unit pixel 20 according to the present embodiment will be described using the circuit configuration of a unit pixel 920 as a base. Fig. 3 is a circuit diagram showing an example of the circuit configuration of the unit pixel that serves as a base of the unit pixel according to the first embodiment.

As shown in fig. 3, the unit pixel 920 has a so-called 2-tap type circuit configuration including a photodiode 21, an OFG (overflow gate) transistor 22, and two readout circuits 920A and 920B. Note that the 2-tap type refers to a configuration in which two transfer gate transistors (also referred to as taps) 23A and 23B are provided for one photodiode 21.

The photodiode 21 may be a photoelectric conversion element that photoelectrically converts incident light to generate electric charges. The source of the OFG transistor 22 is connected to the cathode of the photodiode 21, and the drain of the OFG transistor 22 is connected to, for example, the power supply line VDD. Further, the gate of the OFG transistor 22 is connected to the vertical drive circuit 103 via a pixel drive line LD (not shown).

The readout circuit 920A includes, for example, a transfer gate transistor 23A, a memory (also referred to as a tap) 24A, a transfer transistor 25A, a reset transistor 26A, an amplification transistor 28A, and a selection transistor 29A.

In this specification, the transfer gate transistor, the memory, and the transfer transistor in each readout circuit are also collectively referred to as a transfer circuit, which transfers the charge generated in the photodiode 21 to, for example, the floating diffusion region 27.

The source of the transfer gate transistor 23A is connected to the cathode of the photodiode 21, and the drain is connected to the memory 24A.

The memory 24A is, for example, a MOS (metal-oxide-semiconductor) type memory including a transistor and a capacitor, and temporarily holds the charge flowing from the photodiode 21 via the transfer gate transistor 23A in the capacitor under the control of the vertical drive circuit 103.

The source of the transfer transistor 25A is connected to the memory 24A, the drain is connected to the gate of the amplification transistor 28A, and the gate is connected to the vertical drive circuit 103 via a pixel drive line LD (not shown).

A node connecting the drain of the transfer transistor 25A and the gate of the amplification transistor 28A forms a floating diffusion region (FD)27A, and the floating diffusion region (FD)27A converts the electric charges into a voltage having a voltage value corresponding to the amount of electric charges.

The source of the amplification transistor 28A is connected to the power supply line VDD, and the drain is connected to the vertical signal line VSLA via the selection transistor 29A. The amplification transistor 28A causes the voltage applied to its gate, that is, a voltage value corresponding to the amount of charge accumulated in the floating diffusion region 27A, to appear as a pixel signal on the vertical signal line VSLA.

The selection transistor 29A has a source connected to the drain of the amplification transistor 28A, a drain connected to the vertical signal line VSLA, and a gate connected to the vertical drive circuit 103 via a pixel drive line LD (not shown). Under the control of the vertical drive circuit 103, the selection transistor 29A causes a pixel signal having a voltage value corresponding to the amount of charge accumulated in the floating diffusion region 27A to appear on the vertical signal line VSLA.

The source of the reset transistor 26A is connected to the node connecting the drain of the transfer transistor 25A and the gate of the amplification transistor 28A, that is, to the floating diffusion region 27A. The reset transistor 26A has a drain connected to the power supply line VDD, and a gate connected to the vertical drive circuit 103 via a pixel drive line LD (not shown). The reset transistor 26A discharges the charges accumulated in the floating diffusion region 27A under the control of the vertical drive circuit 103. That is, the reset transistor 26A initializes (resets) the floating diffusion region 27A under the control of the vertical drive circuit 103.

On the other hand, the readout circuit 920B similarly includes a transfer gate transistor 23B, a memory 24B, a transfer transistor 25B, a reset transistor 26B, an amplification transistor 28B, and a selection transistor 29B. The connection relationship and function of the respective circuit elements may be the same as those of the readout circuit 920A.
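As an aid to reading the circuit description above, the following Python sketch is a toy behavioral model of the 2-tap unit pixel of fig. 3 (all function and variable names are hypothetical and the timing is idealized; it is a sketch under simplified assumptions, not the disclosed circuit itself). It shows how the two transfer gates, driven 180° apart, split the photo-generated charge between the two memories:

```python
def demodulate_2tap(photocurrent, n_periods, period, n_steps=100):
    """Toy behavioral model of the 2-tap unit pixel of fig. 3.

    `photocurrent(t)` stands in for the charge generation of the
    photodiode 21. The transfer gate transistors 23A/23B, driven 180
    degrees apart, steer that charge into memory 24A during the first
    half of each light emission cycle and into memory 24B during the
    second half.
    """
    dt = period / n_steps
    mem_a = mem_b = 0.0  # charges held in memories 24A and 24B
    t = 0.0
    for _ in range(n_periods * n_steps):
        if (t / period) % 1.0 < 0.5:
            mem_a += photocurrent(t) * dt  # transfer gate 23A open
        else:
            mem_b += photocurrent(t) * dt  # transfer gate 23B open
        t += dt
    # At readout, each memory's charge goes through its transfer
    # transistor 25A/25B to the floating diffusion 27A/27B and appears
    # as a voltage on the vertical signal line VSLA/VSLB.
    return mem_a, mem_b

# Rectangular reflected-light pulse delayed by one eighth of a period:
# the resulting 3:1 split between the two memories encodes the delay.
T = 10e-9
reflected = lambda t: 1.0 if ((t - T / 8) % T) < T / 2 else 0.0
print(demodulate_2tap(reflected, n_periods=1000, period=T))
```

The ratio of the two returned charges is what later encodes the phase angle of the reflected light, as described in section 1.5 below.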

1.4 Basic layout example of unit pixel

Fig. 4 is a plan view showing a layout example of the unit pixel shown in fig. 3. Note that fig. 4 shows a plan layout example of an element formation face of a semiconductor substrate on which the photodiode 21 of the unit pixel 920 is formed.

As shown in fig. 4, each unit pixel 920 has a layout in which, when the element formation surface of the semiconductor substrate is viewed from the vertical direction, the photodiode 21, the OFG transistor 22, and the two readout circuits 920A and 920B are all arranged within a rectangular region.

A rectangular region (hereinafter also referred to as a pixel region) on the element formation surface of the semiconductor substrate is assigned to each unit pixel 920. For example, the photodiode 21 is arranged at the center of the pixel region. The OFG transistors 22 are arranged on two opposite sides among the four sides of the photodiode 21, and the transfer gate transistors 23A and 23B of the two readout circuits 920A and 920B are arranged on the remaining two sides.

The remaining circuit elements of each of the readout circuits 920A and 920B are arranged around the photodiode 21 so as to surround it. In this case, by arranging the memory 24A of the readout circuit 920A and the memory 24B of the readout circuit 920B point-symmetrically or line-symmetrically with the photodiode 21 as a center (hereinafter referred to as "ensuring symmetry"), the characteristic difference between the two memories 24A and 24B can be reduced. Similarly, by arranging the remaining circuit elements of the readout circuit 920A and those of the readout circuit 920B in point symmetry or line symmetry with the photodiode 21 as a center, the characteristic difference between the readout circuits 920A and 920B can be reduced.

1.5 Overview of the indirect ToF method

Here, an outline of the ranging method using the indirect ToF method will be described. Figs. 5 and 6 are diagrams for explaining the outline of the ranging method using the indirect ToF method.

As shown in fig. 5, in the indirect ToF method, the light receiving unit 14 detects the light quantity Q0 of the reflected light L2 having a phase angle (also referred to as a phase difference) of 0° with respect to the irradiation light L1 emitted from the light emitting unit 13, the light quantity Q90 of the reflected light L2 having a phase angle of 90°, the light quantity Q180 of the reflected light L2 having a phase angle of 180°, and the light quantity Q270 of the reflected light L2 having a phase angle of 270°. The phase here is the phase angle between the pulse of the irradiation light L1 and the pulse of the reflected light L2.

The phase angle α of the pulse of the reflected light L2 with respect to the irradiation light L1 can be represented on a circle as shown in fig. 6, for example. In fig. 6, the horizontal axis represents the difference between the light quantity Q0 of the reflected light L2 having a phase angle of 0° and the light quantity Q180 of the reflected light L2 having a phase angle of 180°, and the vertical axis represents the difference between the light quantity Q90 of the reflected light L2 having a phase angle of 90° and the light quantity Q270 of the reflected light L2 having a phase angle of 270°.

Then, the phase angle α can be obtained by substituting the light quantities Q0, Q90, Q180, and Q270 detected as described above into the following formula (1):

α = arctan{(Q90 − Q270) / (Q0 − Q180)} … (1)

Here, the phase angle α of the pulse of the reflected light L2 with respect to the irradiation light L1 corresponds to the round trip over the distance D from the ToF sensor 1 to the object 90. Therefore, the distance D from the ToF sensor 1 to the object 90 can be calculated by substituting the phase angle α calculated by formula (1) into the following formula (2):

D = c·Δt/2 = c·α/(2ω) … (2)

In formula (2), Δt is the time difference from the emission of the irradiation light L1 to the reception of the reflected light L2, ω is the angular frequency 2π·fmod corresponding to the modulation frequency fmod, and c is the speed of light.

However, in the above method, since the phase angle has an ambiguity of 360°, the distance D to an object 90 for which the phase angle α exceeds 360° cannot be measured correctly. For example, in the case where the modulation frequency fmod of the irradiation light L1 is 100 MHz (megahertz), the distance D cannot be obtained for an object 90 located beyond about 1.5 m (meters), taking into account the round trip to and from the object 90.

Thus, in this case, the distance to the object 90 is also measured using a different modulation frequency fmod. Since the ambiguity can then be resolved based on both results, the distance D to an object 90 located at a certain distance or more can be determined.

As described above, in the ToF sensor 1, one range image is created by acquiring four kinds of phase information of 0°, 90°, 180°, and 270°.
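As a concrete illustration, the following minimal Python sketch (the function name and the sample light quantities are hypothetical, chosen only for illustration) evaluates formulas (1) and (2) for one pixel:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def itof_distance(q0, q90, q180, q270, f_mod):
    """Compute the distance D of formulas (1) and (2) from the four
    detected light quantities (hypothetical helper, for illustration)."""
    # Formula (1): fig. 6 plots (Q0 - Q180) on the horizontal axis and
    # (Q90 - Q270) on the vertical axis; atan2 recovers alpha over the
    # full 0..2*pi range of that circle.
    alpha = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)
    # Formula (2): D = c * alpha / (2 * omega) with omega = 2*pi*f_mod.
    return C * alpha / (4.0 * math.pi * f_mod)

# At f_mod = 100 MHz the result wraps every c / (2 * f_mod), i.e. about
# 1.5 m, which is the 360-degree ambiguity discussed above.
print(itof_distance(q0=80.0, q90=120.0, q180=40.0, q270=20.0, f_mod=100e6))
```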

As the pixel architecture, the 2-tap type pixel architecture in which one unit pixel includes two memories, described above with reference to figs. 3 and 4, is common; in this case, four subframes are required to acquire one ranging image (hereinafter referred to as a depth map or a depth frame).

Specifically, four subframes of 0°/180°, 90°/270°, 180°/0°, and 270°/90° are required. Note that the 0°/180° subframe is obtained by subtracting the light quantity Q180 at a phase angle α of 180° from the light quantity Q0 at a phase angle α of 0°. Similarly, the 90°/270° subframe is obtained by subtracting the light quantity Q270 at 270° from the light quantity Q90 at 90°, the 180°/0° subframe by subtracting Q0 from Q180, and the 270°/90° subframe by subtracting Q90 from Q270.

Here, the reason why, for example, both the 0°/180° subframe and the 180°/0° subframe (its inverted data) are required is that the charges accumulated in the two memories of each unit pixel differ (hereinafter referred to as a characteristic difference) depending on the arrangement of the readout circuits (including wiring distances and the like), the incident angle of the incident light (i.e., the image height), and so on. That is, in order to obtain an accurate depth frame, the characteristic difference between the two memories must be canceled by acquiring the inverted data and adding or subtracting it.
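The cancellation can be checked numerically. In the following Python sketch (the gain and offset values are hypothetical and merely stand in for the characteristic difference), subtracting the inverted subframe removes the per-memory offsets entirely and averages the two gains:

```python
def tap(q, gain, offset):
    """Hypothetical tap response: accumulated charge distorted by a
    per-memory gain and offset (the 'characteristic difference')."""
    return gain * q + offset

def corrected_subframe(q0, q180, gain_a, off_a, gain_b, off_b):
    # Subframe 0deg/180deg: tap A integrates Q0, tap B integrates Q180.
    s_0_180 = tap(q0, gain_a, off_a) - tap(q180, gain_b, off_b)
    # Inverted subframe 180deg/0deg: the taps swap roles.
    s_180_0 = tap(q180, gain_a, off_a) - tap(q0, gain_b, off_b)
    # Subtracting the inverted data cancels both offsets and averages
    # the two gains, leaving (Q0 - Q180) scaled by the mean gain.
    return (s_0_180 - s_180_0) / 2.0

print(corrected_subframe(q0=100.0, q180=60.0,
                         gain_a=1.05, off_a=3.0,
                         gain_b=0.95, off_b=-2.0))
# -> 40.0, i.e. (Q0 - Q180) times the mean gain (1.05 + 0.95) / 2
```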

As described above, in the 2-tap type pixel architecture, the characteristic difference between the two memories increases the number of subframes required to acquire one depth frame.

Therefore, configurations capable of acquiring subframes more efficiently will be described below by way of some examples.

1.6 Configuration examples of the unit pixel

Hereinafter, a configuration example of the unit pixel 20 according to the first embodiment will be explained by some examples.

1.6.1 First configuration example

Fig. 7 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a first configuration example of the first embodiment. Fig. 8 is a plan view showing an example of a planar layout of a unit pixel according to a first configuration example of the first embodiment. Note that fig. 8 shows a plan layout example of an element formation face of a semiconductor substrate on which the photodiodes 211 and 212 of the unit pixel 20-1 are formed. In the following description, when the photodiodes 211 and 212 are not distinguished, they are denoted by reference numeral 21.

As shown in fig. 7, the unit pixel 20-1 according to the first configuration example includes two sets of the 2-tap type circuit configuration, and has a circuit configuration in which the four readout circuits 20A1, 20A2, 20B1, and 20B2 constituting these sets share one floating diffusion region 27. In the following description, the readout circuits 20A1 and 20A2 are referred to as readout circuits A when not distinguished, and the readout circuits 20B1 and 20B2 are referred to as readout circuits B when not distinguished.

The readout circuit 20A1 includes a transfer gate transistor 23A1, a memory 24A1, a transfer transistor 25A1, a reset transistor 26, a floating diffusion region 27, an amplification transistor 28, and a selection transistor 29. Similarly, the readout circuit 20A2 includes a transfer gate transistor 23A2, a memory 24A2, a transfer transistor 25A2, a reset transistor 26, a floating diffusion region 27, an amplification transistor 28, and a selection transistor 29; the readout circuit 20B1 includes a transfer gate transistor 23B1, a memory 24B1, a transfer transistor 25B1, a reset transistor 26, a floating diffusion region 27, an amplification transistor 28, and a selection transistor 29; and the readout circuit 20B2 includes a transfer gate transistor 23B2, a memory 24B2, a transfer transistor 25B2, a reset transistor 26, a floating diffusion region 27, an amplification transistor 28, and a selection transistor 29.

The cathode of the photodiode 211 is connected to the readout circuits 20A1 and 20B1, and the cathode of the photodiode 212 is connected to the readout circuits 20A2 and 20B2.

Further, the OFG transistor 221 is connected to the cathode of the photodiode 211, and the OFG transistor 222 is connected to the cathode of the photodiode 212.

Among the four readout circuits 20A1, 20A2, 20B1, and 20B2, the readout circuits A are configured to detect the light quantity Q0 or Q90 of the component of the reflected light L2 having a phase angle α of 0° or 90° with respect to the irradiation light L1, and the readout circuits B are configured to detect the light quantity Q180 or Q270 of the component having a phase angle α of 180° or 270°. Note that the light quantities Q90 and Q270 of the components having phase angles α of 90° and 270°, and the light quantities Q0 and Q180 of the components having phase angles α of 0° and 180°, can be read out from the same unit pixel 20-1 in a time-division manner, for example, by alternately switching the readout.
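A compact way to picture this time-division readout is the following hypothetical schedule (consistent with the description above, but not taken verbatim from the disclosure):

```python
# Hypothetical accumulation schedule for the unit pixel 20-1: the
# readout circuits A and B are always driven 180 degrees apart, and
# the phase pair is switched between accumulation periods so that one
# pixel eventually yields all four light quantities.
SCHEDULE = [
    {"readout circuits A": "Q0 (0 deg)", "readout circuits B": "Q180 (180 deg)"},
    {"readout circuits A": "Q90 (90 deg)", "readout circuits B": "Q270 (270 deg)"},
]
for i, assignment in enumerate(SCHEDULE, start=1):
    print(f"accumulation period {i}: {assignment}")
```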

Further, two of the four readout circuits are connected to the cathode of the photodiode 211, and the remaining two readout circuits are connected to the cathode of the photodiode 212.

Further, the four readout circuits 20A1, 20A2, 20B1, and 20B2 share the floating diffusion region 27, the reset transistor 26, the amplification transistor 28, and the selection transistor 29. The connection relationship of the circuit elements in each of the readout circuits 20A1, 20A2, 20B1, and 20B2 may be similar to that of the circuit elements in the readout circuits 920A and 920B of the unit pixel 920 described above with reference to fig. 3.

As shown in fig. 8, in the planar layout of the unit pixel 20-1, the readout circuits that detect components having the same phase angle α are arranged in the pixel region allocated to one unit pixel 20-1 so as to be point-symmetrical with respect to the center of the pixel region or line-symmetrical with a straight line passing through the center as an axis. For example, the readout circuits 20A1 and 20A2 are arranged diagonally in the pixel region allocated to one unit pixel 20-1, and the readout circuits 20B1 and 20B2 are likewise arranged diagonally.

Specifically, in the example shown in fig. 8, the readout circuit 20A1 is arranged at the upper left in the pixel region, and the readout circuit 20A2 is arranged at the lower right in the pixel region. On the other hand, of the readout circuits 20B1 and 20B2, the readout circuit 20B1 is arranged at the upper right in the pixel region, and the readout circuit 20B2 is arranged at the lower left in the pixel region.

As described above, in the first configuration example, the four readout circuits 20A1, 20A2, 20B1, and 20B2 are arranged such that the readout circuits that detect the light quantity Q of the same phase angle α are positioned diagonally, in a crossed manner.

Note that the photodiodes 211 and 212 may be arranged between the readout circuits that generate the same subframe. For example, the photodiode 211 may be arranged between the readout circuits 20A1 and 20B1, and the photodiode 212 may be arranged between the readout circuits 20A2 and 20B2.

In this configuration, when the light quantity Q0 or Q90 having a phase angle α of 0° or 90° is detected, both the charge stored in the memory 24A1 of the readout circuit 20A1 and the charge stored in the memory 24A2 of the readout circuit 20A2 are transferred to the floating diffusion region 27. Similarly, when the light quantity Q180 or Q270 having a phase angle α of 180° or 270° is detected, both the charge stored in the memory 24B1 of the readout circuit 20B1 and the charge stored in the memory 24B2 of the readout circuit 20B2 are transferred to the floating diffusion region 27.

As described above, in one unit pixel 20-1, the readout circuits that detect components of the same phase angle α are positioned diagonally, and at readout the charges stored in their memories are simultaneously transferred to the common floating diffusion region 27; therefore, the difference in the amount of accumulated charge caused by the characteristic difference due to the position (image height) of the readout circuits and the like can be reduced. As a result, a high-quality depth frame can be generated without acquiring inverted data, and hence at a high frame rate.

In addition, by sharing the configuration downstream of the floating diffusion region 27 (the reset transistor 26, the amplification transistor 28, the selection transistor 29, the vertical signal line VSL, the AD converter in the column processing circuit 104, and the like) among the readout circuits 20A1, 20A2, 20B1, and 20B2, the characteristic difference caused by the downstream configuration can be eliminated, so that a depth frame of even higher quality can be generated. Note that downstream here refers to downstream in the signal and data flow.

1.6.2 second configuration example

Fig. 9 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to the second configuration example of the first embodiment. Fig. 10 is a plan view showing an example of a planar layout of a unit pixel according to the second configuration example of the first embodiment. Note that fig. 10 shows a planar layout example of the element formation face of the semiconductor substrate on which the photodiodes 211 to 214 of the unit pixel 20-2 are formed.

As shown in fig. 9, the unit pixel 20-2 according to the second configuration example includes four sets of the 2-tap type circuit configuration, in which the eight readout circuits 20A1 to 20A4 and 20B1 to 20B4 share one floating diffusion region 27. In the following description, when the readout circuits 20A1 to 20A4 are not distinguished, they are referred to as readout circuits A, and when the readout circuits 20B1 to 20B4 are not distinguished, they are referred to as readout circuits B.

The circuits of the readout circuits 20A1, 20A2, 20B1, and 20B2 may be similar to the circuit configuration described with reference to fig. 7 in the first configuration example. Further, the readout circuit 20A3 includes a transfer gate transistor 23A3, a memory 24A3, a transfer transistor 25A3, the reset transistor 26, the floating diffusion region 27, the amplification transistor 28, and the selection transistor 29. Similarly, the readout circuit 20A4 includes a transfer gate transistor 23A4, a memory 24A4, and a transfer transistor 25A4; the readout circuit 20B3 includes a transfer gate transistor 23B3, a memory 24B3, and a transfer transistor 25B3; and the readout circuit 20B4 includes a transfer gate transistor 23B4, a memory 24B4, and a transfer transistor 25B4, each together with the shared reset transistor 26, floating diffusion region 27, amplification transistor 28, and selection transistor 29.

The cathode of the photodiode 211 is connected to the readout circuits 20A1 and 20B1, the cathode of the photodiode 212 to the readout circuits 20A2 and 20B2, the cathode of the photodiode 213 to the readout circuits 20A3 and 20B3, and the cathode of the photodiode 214 to the readout circuits 20A4 and 20B4.

Further, the OFG transistor 221 is connected to the cathode of the photodiode 211, the OFG transistor 222 is connected to the cathode of the photodiode 212, the OFG transistor 223 is connected to the cathode of the photodiode 213, and the OFG transistor 224 is connected to the cathode of the photodiode 214.

Of the eight readout circuits 20A1 to 20A4 and 20B1 to 20B4, the readout circuits A are configured to detect the light quantity Q0 or Q90 of a component having a phase angle α of 0° or 90° with respect to the irradiation light L1 in the reflected light L2, and the readout circuits B are configured to detect the light quantity Q180 or Q270 of a component having a phase angle α of 180° or 270° with respect to the irradiation light L1 in the reflected light L2.

In addition, the eight readout circuits 20A1 to 20A4 and 20B1 to 20B4 share the floating diffusion region 27, the reset transistor 26, the amplification transistor 28, and the selection transistor 29. The connection relationship of the circuit elements in each of the readout circuits 20A1 to 20A4 and 20B1 to 20B4 may be similar to that of the circuit elements in the readout circuits 20A and 20B of the unit pixel 920 described above with reference to fig. 3.

As shown in fig. 10, in the planar layout of the unit pixel 20-2, the readout circuits A or B for detecting components having the same phase angle α are arranged in the pixel region assigned to one unit pixel 20-2 so as to be point-symmetrical with respect to the center of the pixel region or line-symmetrical with respect to a straight line passing through the center.

At this time, of the eight readout circuits 20A1 to 20A4 and 20B1 to 20B4, the readout circuits for generating the same sub-frame are arranged adjacent to each other across the photodiode connected to them. For example, the readout circuits 20A1 and 20B1 are adjacent to each other with the photodiode 211 interposed therebetween, the readout circuits 20A2 and 20B2 with the photodiode 212, the readout circuits 20A3 and 20B3 with the photodiode 213, and the readout circuits 20A4 and 20B4 with the photodiode 214.

In the example shown in fig. 10, among the readout circuits 20A1 to 20A4 for detecting the light amount Q0 or Q90 of a component having a phase angle α of 0° or 90°, the readout circuit 20A1 is arranged at the upper left in the pixel region, the readout circuit 20A3 at the upper right, and the readout circuits 20A2 and 20A4 on the lower side near the center of the pixel region. On the other hand, among the readout circuits 20B1 to 20B4 for detecting the light amount Q180 or Q270 of a component having a phase angle α of 180° or 270°, the readout circuit 20B1 is arranged at the lower left in the pixel region, the readout circuit 20B3 at the lower right, and the readout circuits 20B2 and 20B4 on the upper side near the center of the pixel region.

That is, in the example shown in fig. 10, the layout combines both the crossed (diagonal) arrangement shown in fig. 8 and the arrangement obtained by inverting that crossed arrangement.

In this configuration, when the light quantity Q0 or Q90 at the phase angle α of 0° or 90° is detected, the charges stored in the memories 24A1 to 24A4 of the readout circuits 20A1 to 20A4 are simultaneously transferred to the floating diffusion region 27. Similarly, when the light quantity Q180 or Q270 at the phase angle α of 180° or 270° is detected, the charges stored in the memories 24B1 to 24B4 of the readout circuits 20B1 to 20B4 are simultaneously transferred to the floating diffusion region 27.

By such an operation, in addition to the effects obtained in the first configuration example, it is possible to further reduce the difference in the amount of accumulated charges due to the characteristic difference caused by the position (image height) of the readout circuit and the like, as compared with the first configuration example. Therefore, a higher quality depth frame can be generated without acquiring inverted data.

1.6.3 third configuration example

For example, the circuit configuration example of the unit pixel 20-3 according to the third configuration example may be similar to the circuit configuration example described with reference to fig. 9 in the second configuration example. Fig. 11 is a plan view showing an example of a planar layout of a unit pixel according to a third configuration example of the first embodiment. Note that fig. 11 shows a plan layout example of the element formation face of the semiconductor substrate on which the photodiodes 211 to 214 of the unit pixel 20-3 are formed.

As can be seen by comparing fig. 10 and 11, in the second configuration example the reset transistor 26, the amplification transistor 28, and the selection transistor 29 are provided in the separate diffusion regions 26a, 28a, and 29a, respectively, whereas in the third configuration example they are provided in the common diffusion region 26b.

According to this configuration, the circuit area in each unit pixel 20-3 can be reduced. As a result, the light receiving areas of the photodiodes 211 to 214 can be increased, the storage capacities of the memories 24A1 to 24A4 and 24B1 to 24B4 can be increased, and so on; therefore, a higher-quality depth frame can be generated in addition to the effect obtained in the second configuration example.

1.6.4 fourth configuration example

In the first to third configuration examples described above, the light quantity Q90 of the component having the phase angle α of 90° and the light quantity Q270 of the component having the phase angle α of 270°, and the light quantity Q0 of the component having the phase angle α of 0° and the light quantity Q180 of the component having the phase angle α of 180°, are read out from one unit pixel 20 in a time-division manner by alternately switching the readout.

On the other hand, in the fourth configuration example, a case will be explained in which the light quantity Q90 of the component having the phase angle α of 90°, the light quantity Q270 of the component having the phase angle α of 270°, the light quantity Q0 of the component having the phase angle α of 0°, and the light quantity Q180 of the component having the phase angle α of 180° can be read out simultaneously from one unit pixel 20.

Fig. 12 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a fourth configuration example of the first embodiment. Fig. 13 is a plan view showing an example of a planar layout of a unit pixel according to a fourth configuration example of the first embodiment. Note that fig. 13 shows a plan layout example of the element formation face of the semiconductor substrate on which the photodiodes 211 to 214 of the unit pixel 20-4 are formed.

As shown in fig. 12, the unit pixel 20-4 according to the fourth configuration example has, for example, a circuit configuration similar to that of the unit pixel 20-2 explained with reference to fig. 9 in the second configuration example. However, in the fourth configuration example, of the eight readout circuits 20A1 to 20A4 and 20B1 to 20B4 in the second configuration example, the two readout circuits 20A1 and 20A4 are used as readout circuits A for reading out the light quantity Q0 of the component having the phase angle α of 0°, and the two readout circuits 20B1 and 20B4 are used as readout circuits B for reading out the light quantity Q180 of the component having the phase angle α of 180°. Then, among the remaining readout circuits 20A2, 20A3, 20B2, and 20B3 in the second configuration example, the readout circuits 20A2 and 20B3 are used as readout circuits 20C1 and 20C2 for reading out the light quantity Q90 of the component having the phase angle α of 90°, and the readout circuits 20A3 and 20B2 are used as readout circuits 20D1 and 20D2 for reading out the light quantity Q270 of the component having the phase angle α of 270°. In the following description, when the readout circuits 20C1 and 20C2 are not distinguished, they are referred to as readout circuits C, and when the readout circuits 20D1 and 20D2 are not distinguished, they are referred to as readout circuits D.

In this way, by allocating two of the eight readout circuits 20A1, 20A4, 20B1, 20B4, 20C1, 20C2, 20D1, and 20D2 to each of the light amounts Q0, Q90, Q180, and Q270 of the components having the phase angles α of 0°, 90°, 180°, and 270°, four subframes of 0°/180°, 90°/270°, 180°/0°, and 270°/90° can be acquired at a time. In other words, by spatially dividing the eight readout circuits among the components having the phase angles α of 0°, 90°, 180°, and 270°, the four subframes can be acquired in a single readout.

As a result, the readout operation when generating one range image can be significantly shortened, so that a high-quality depth frame can be generated at a high frame rate.
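As a rough illustration of this spatial division, the following Python sketch computes one depth value from a single readout; the circuit-to-phase mapping follows the description above, while the function name, the charge values, the modulation frequency F_MOD, and the depth formula (the standard indirect ToF relation) are assumptions for illustration and are not recited in this example.

    import math

    C = 299_792_458.0  # speed of light [m/s]
    F_MOD = 100e6      # assumed modulation frequency [Hz], hypothetical value

    # Spatial division: each phase angle is assigned to two of the eight
    # readout circuits (fourth configuration example).
    CIRCUITS_BY_PHASE = {
        0: ("20A1", "20A4"),
        90: ("20C1", "20C2"),
        180: ("20B1", "20B4"),
        270: ("20D1", "20D2"),
    }

    def depth_from_single_readout(charge):
        """charge: dict mapping a readout circuit name to its accumulated charge."""
        # Sum the two circuits assigned to each phase angle; in the shared-FD
        # pixel this summation is performed by the floating diffusion itself.
        q = {phase: sum(charge[c] for c in circuits)
             for phase, circuits in CIRCUITS_BY_PHASE.items()}
        # Standard 4-phase indirect ToF phase delay and distance.
        phi = math.atan2(q[90] - q[270], q[0] - q[180]) % (2 * math.pi)
        return C * phi / (4 * math.pi * F_MOD)

For instance, charge values giving q[90] − q[270] = q[0] − q[180] > 0 yield φ = π/4, which corresponds to a distance of about 0.19 m at the assumed 100 MHz.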

Note that, as shown in fig. 13, also in the fourth configuration example, in the planar layout of the unit pixel 20-4, the readout circuits for detecting components having the same phase angle α are arranged within the pixel region assigned to one unit pixel 20-4 so as to be point-symmetrical with respect to the center of the pixel region or line-symmetrical with respect to a straight line passing through the center. This makes it possible to reduce the difference in the accumulated charge amount due to the characteristic difference caused by the position (image height) or the like of the readout circuits, so that a high-quality depth frame can be generated at a high frame rate.

1.6.5 fifth configuration example

In the fifth configuration example, a basic configuration of the 2-tap type unit pixel 20 sharing one floating diffusion region 27 will be exemplified.

Fig. 14 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to the fifth configuration example of the first embodiment. As shown in fig. 14, the unit pixel 20-5 according to the fifth configuration example has a circuit configuration in which two readout circuits 20A and 20B are connected to one photodiode 21 and share one floating diffusion region 27.

According to such a circuit configuration, as described above, by simultaneously transferring the charges stored in the memories at the time of readout to the common floating diffusion region 27, it is possible to reduce the difference in the amount of stored charge due to the characteristic difference caused by the position (image height) or the like of the readout circuits. As a result, a high-quality depth frame can be generated at a high frame rate without acquiring inverted data.

In addition, by sharing the configuration (the reset transistor 26, the amplification transistor 28, the selection transistor 29, the vertical signal line VSL, the AD converter in the column processing circuit 104, and the like) downstream of the floating diffusion region 27 between each of the readout circuits 20A and 20B, the characteristic difference caused by the downstream configuration can be eliminated, so that a higher-quality depth frame can be generated.

1.6.6 sixth configuration example

In the first to fifth configuration examples described above, the configuration of a so-called 2-tap type circuit in which two readout circuits share one photodiode 21 has been exemplified, but the present invention is not limited to this configuration. For example, a so-called 3-tap type circuit configuration in which three readout circuits share one photodiode 21 may also be employed.

Fig. 15 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to the sixth configuration example of the first embodiment. As shown in fig. 15, the unit pixel 20-6 according to the sixth configuration example has a circuit configuration in which three readout circuits 20A, 20B, and 20C are connected to one photodiode 21 and share one floating diffusion region 27.

Even in such a 3-tap type unit pixel 20-6, similarly to the above configuration examples, a high-quality depth frame can be generated at a high frame rate without acquiring inverted data.

In addition, by sharing the configuration (the reset transistor 26, the amplification transistor 28, the selection transistor 29, the vertical signal line VSL, the AD converter in the column processing circuit 104, and the like) downstream of the floating diffusion region 27 among the respective readout circuits 20A, 20B, and 20C, the characteristic difference caused by the downstream configuration can be eliminated, so that a higher-quality depth frame can be generated.

1.6.7 seventh configuration example

Further, a so-called 4-tap type circuit configuration in which four readout circuits share one photodiode 21 may also be employed.

Fig. 16 is a circuit diagram showing a circuit configuration example of a unit pixel according to the seventh configuration example of the first embodiment. As shown in fig. 16, the unit pixel 20-7 according to the seventh configuration example has a circuit configuration in which four readout circuits 20A, 20B, 20C, and 20D are connected to one photodiode 21 and share one floating diffusion region 27.

Even in such a 4-tap type unit pixel 20-7, similarly to the above configuration examples, a high-quality depth frame can be generated at a high frame rate without acquiring inverted data.

In addition, by sharing the configuration downstream of the floating diffusion region 27 (the reset transistor 26, the amplification transistor 28, the selection transistor 29, the vertical signal line VSL, the AD converter in the column processing circuit 104, and the like) among the respective readout circuits 20A, 20B, 20C, and 20D, the characteristic difference caused by the downstream configuration can be eliminated, so that a higher-quality depth frame can be generated.

1.7 Pixel separation Structure

Next, a structure for optically separating the plurality of unit pixels 20 arranged adjacent to each other in the pixel array section 101 will be explained.

By optically separating the adjacent unit pixels 20, crosstalk due to light incident on a certain unit pixel 20 being incident on another adjacent unit pixel 20 can be reduced, and therefore, a depth frame with higher accuracy can be generated.

For the optical separation of the adjacent unit pixels 20, for example, a pixel separation portion formed by forming a groove in a semiconductor substrate on which the photodiode 21 is formed and embedding a predetermined material in the groove can be used.

Hereinafter, planar layout examples of the pixel separation section will be explained using some examples. Note that the planar layout examples in the following description are assumed to be planar layouts on the element formation face of the semiconductor substrate on which the photodiode 21 is formed.

1.7.1 first layout example

Fig. 17 is a plan view showing a planar layout example of a pixel separating section according to a first layout example of the first embodiment. Note that, in the first layout example, a case where the unit pixel 20-1 according to the first configuration example described with reference to fig. 7 and 8 is optically separated using the pixel separation section will be described.

As shown in fig. 17, for the unit pixels 20-1 arranged in a matrix form in the pixel array section 101, a pixel separation section 31 is provided along the boundary section 30 between the adjacent unit pixels 20-1. Therefore, each unit pixel 20-1 is surrounded from all directions by the pixel separation section 31.

1.7.2 second layout example

Fig. 18 is a plan view showing a planar layout example of a pixel separating section according to a second layout example of the first embodiment. Note that, in the second layout example, a case where the unit pixel 20-2 according to the second configuration example described with reference to fig. 9 and 10 is optically separated using the pixel separation section will be described.

As shown in fig. 18, for the unit pixels 20-2 arranged in a matrix form in the pixel array section 101, similarly to the first layout example shown in fig. 17, the pixel separation section 31 is provided along the boundary section 30 between the adjacent unit pixels 20-2. Therefore, each unit pixel 20-2 is surrounded by the pixel separation section 31 from all directions.

Further, in the second layout example, within the pixel region surrounded by the pixel separation section 31, the boundary portions 30 between the paired readout circuits 20A1 and 20B1, 20B2 and 20A2, 20B3 and 20A3, and 20A4 and 20B4 are optically separated by the element separation sections 32.

Specifically, the element separation sections 32 are provided between the readout circuits 20A1 and 20B1 and the readout circuits 20B2 and 20A2, between the readout circuits 20A1 and 20B1 and the readout circuits 20B3 and 20A3, between the readout circuits 20B2 and 20A2 and the readout circuits 20A4 and 20B4, and between the readout circuits 20B3 and 20A3 and the readout circuits 20A4 and 20B4, respectively.

By optically separating the paired readout circuits, crosstalk of light between the plurality of photodiodes 211 to 214 included in the unit pixel 20 can be reduced, and thus a depth frame with higher accuracy can be generated.

Note that the structure of the element separation section 32 may be similar to that of the pixel separation section 31, for example.

1.7.3 third layout example

Fig. 19 is a plan view showing a planar layout example of a pixel separating section according to a third layout example of the first embodiment. Note that, in the third layout example, a case where the unit pixel 20-3 according to the third configuration example described with reference to fig. 9 and 11 is optically separated using the pixel separation section will be described, but a similar structure can also be applied to the unit pixel 20-4 according to the fourth configuration example described with reference to fig. 12 and 13.

As shown in fig. 19, the pixel separation section 31 and the element separation section 32 according to the third layout example have structures similar to those of the pixel separation section 31 and the element separation section 32 illustrated in the second layout example. However, in the third layout example, the element separation section 32 is divided at the central portion of the pixel region partitioned by the pixel separation section 31.

The reset transistor 26, the floating diffusion region 27, the amplification transistor 28, and the selection transistor 29, which are shared by the plurality of readout circuits 20A1 to 20A4 and 20B1 to 20B4, are arranged in this central portion of the pixel region. This is because, by arranging these circuit elements in the central portion of the pixel region, variations in the wiring distances from the respective photodiodes 211 to 214 to the circuit elements can be minimized.

Even with this structure, since the pair of readout circuits are optically separated, crosstalk can be reduced and a depth frame with higher accuracy can be generated.

1.8 example of the Cross-sectional Structure of the Unit Pixel

Next, a cross-sectional structure example of the unit pixel 20 will be explained by some examples. In the following description, a sectional structure of a section taken along a line I-I in fig. 17 and a sectional structure of a section taken along a line II-II in fig. 17 are exemplified. However, in the cross-sectional structure of the cross section taken along the line II-II, the configuration around one photodiode 212 of the two photodiodes 211 and 212 and the diffusion region between the amplification transistor 28 and the reset transistor 26 are omitted for the sake of simplifying the explanation.

1.8.1 first section Structure example

Fig. 20 is a sectional view taken along line I-I showing an example of a sectional structure of a unit pixel according to a first sectional structure example of the first embodiment, and fig. 21 is a sectional view taken along line II-II showing an example of a sectional structure of a unit pixel according to a first sectional structure example of the first embodiment.

As shown in fig. 20 and 21, the unit pixel 20 has, for example, a configuration in which a photodiode 211 (and a photodiode 212) is formed in a region divided by the pixel separation section 31 in the semiconductor substrate 40.

The photodiode 211 (and the photodiode 212) includes, for example, an n-type semiconductor region 42 in which donors are diffused at a low concentration; an n-type semiconductor region 43 in which the donor concentration is higher than that of the n-type semiconductor region 42; and an n+-type semiconductor region 44 in which donors are diffused at a high concentration. The electric charges generated by photoelectric conversion in the n-type semiconductor regions 42 and 43 are drawn along the potential gradient into the n+-type semiconductor region 44, which has a deep potential, and are transferred to the memory 24A or 24B at the timing when the transfer gate 23A or 23B having the hollowed-out portion is turned on.

The circuit elements explained with reference to fig. 7 and 8, that is, the OFG transistors 221 (and 222), the transfer gate transistors 23A1, 23A2, 23B1, and 23B2, the memories 24A1, 24A2, 24B1, and 24B2, the transfer transistors 25A1, 25A2, 25B1, and 25B2, the reset transistor 26, the floating diffusion region 27, the amplification transistor 28, and the selection transistor 29, are formed on the element formation face (the lower face in the drawings) of the semiconductor substrate 40. Of these, fig. 20 shows the transfer gate transistors 23A1 and 23B1, the memories 24A1 and 24B1, the transfer transistors 25A1 and 25B1, and the floating diffusion region 27, and fig. 21 shows the OFG transistor 221 (and 222). Note that although the floating diffusion regions 27 are shown separately in fig. 20, they may be connected to each other by a wiring 52 in the wiring layer 50 described later.

As shown in fig. 20, the transfer gate transistors 23A1 and 23B1 (and 23A2 and 23B2) may be vertical transistors having a vertical structure formed in the substrate thickness direction of the semiconductor substrate 40. Further, as shown in fig. 21, the OFG transistors 221 (and 222) may be double vertical transistors formed in the substrate thickness direction of the semiconductor substrate 40. Furthermore, the transfer gate transistors 23A1 and 23B1 (and 23A2 and 23B2) and the OFG transistors 221 (and 222) may be vertical transistors having a double structure including the above-described two vertical structures. However, these are merely examples, and various modifications may be made. Note that the insulating film 51 in fig. 20 and 21 is the gate insulating film of each circuit element formed on the semiconductor substrate 40.

On the element formation surface of the semiconductor substrate 40, a wiring layer 50 including wirings 52 connected to the respective circuit elements formed on the semiconductor substrate 40 is formed.

For example, the concave-convex structure 45 is formed on the back surface (the upper surface in the drawings) of the semiconductor substrate 40, that is, on the light incident surface. By providing the concave-convex structure 45 on the light incident surface in this way, the incident surface can have a structure in which the refractive index changes gradually. As a result, the incident light is effectively diffracted, which extends the optical path length of the incident light in the semiconductor substrate 40 and reduces the reflectance of the incident light, so that more light can be made incident on the photodiode 211 (and 212). Since the quantum efficiency of the photodiodes 211 (and 212) is thereby improved, a depth frame with higher accuracy can be generated. Note that the period of the periodic concave-convex structure 45 can be, for example, 300 nm or more.

On the back surface of the semiconductor substrate 40, an insulating film 61, a planarization film 63 on the insulating film 61, and an on-chip lens 64 on the planarization film 63 are provided.

Further, at the boundary portion 30 between the adjacent unit pixels 20 on the planarization film 63, a light shielding film 62 for preventing color mixing between the adjacent pixels is provided. As the light shielding film 62, for example, a material having light shielding properties such as tungsten (W) can be used.

As the semiconductor substrate 40, for example, a p-type silicon substrate or the like can be used, and the substrate thickness thereof is reduced to a thickness of, for example, 20 μm (micrometers) or less. Note that the thickness of the semiconductor substrate 40 may be 20 μm or more, and the thickness may be appropriately determined according to the target characteristics of the light receiving unit 14 or the like.

The insulating film 61 has a function of an antireflection film against incident light in addition to a function of pinning the incident surface of the semiconductor substrate 40. The insulating film 61 is made of, for example, silicon nitride (SiN), aluminum oxide (Al2O3), silicon oxide (SiO2), hafnium oxide (HfO2), tantalum oxide (Ta2O5), or the like. The thickness of the insulating film 61 is set to approximately a quarter-wavelength optical thickness with respect to near-infrared light, and can be, for example, 50 nm or more and 150 nm or less. The planarization film 63 may be, for example, a film formed of an insulating material such as silicon oxide (SiO2) or silicon nitride (SiN).
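As a worked example under assumed values (the wavelength and refractive index below are illustrative and not recited here): for near-infrared light of wavelength λ = 940 nm and a film of refractive index n ≈ 2.0 (roughly that of silicon nitride), the quarter-wave optical thickness is d = λ/(4n) ≈ 940 nm/8.0 ≈ 118 nm, which indeed falls within the 50 nm to 150 nm range given above.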

For the on-chip lens 64, for example, silicon oxide (SiO2), a transparent resin, or the like can be used, and the curvature of the on-chip lens 64 is set so that the incident light is condensed near the center of the photodiode 211 (or 212).

The pixel separation section 31 according to the first cross-sectional structure example has a so-called FFTI (full front trench isolation) type structure, obtained by embedding an insulating material such as silicon oxide (SiO2) in a trench penetrating from the element formation surface to the back surface of the semiconductor substrate 40.

1.8.2 second Cross-section Structure example

Fig. 22 is a sectional view taken along line I-I showing a sectional structure example of a unit pixel according to a second sectional structure example of the first embodiment, and fig. 23 is a sectional view taken along line II-II showing a sectional structure example of a unit pixel according to the second sectional structure example of the first embodiment.

As can be seen by comparing fig. 20 and 21 with fig. 22 and 23, the unit pixel 20 according to the second cross-sectional structural example has a cross-sectional structure similar to that of the unit pixel 20 according to the first cross-sectional structural example in which the FFTI-type pixel separation section 31 is replaced by a so-called RDTI (reverse deep trench isolation) type pixel separation section 33.

The RDTI-type pixel separation section 33 can be formed by, for example, embedding an insulating material such as silicon oxide (SiO2) in a trench dug from the element formation surface of the semiconductor substrate 40 to a depth that does not penetrate the semiconductor substrate 40.

Note that the configuration of the pixel separation section 33 can also be applied to the element separation section 32.

1.8.3 example of third Cross-section Structure

Fig. 24 is a sectional view taken along line I-I showing a sectional structure example of a unit pixel according to a third sectional structure example of the first embodiment, and fig. 25 is a sectional view taken along line II-II showing a sectional structure example of a unit pixel according to the third sectional structure example of the first embodiment.

As can be seen by comparing fig. 20 and 21 with fig. 24 and 25, the unit pixel 20 according to the third cross-sectional structural example has a cross-sectional structure similar to that of the unit pixel 20 according to the first cross-sectional structural example in which the FFTI-type pixel separation section 31 is replaced with the FFTI-type pixel separation section 34.

The pixel separation section 34 includes, for example: an insulating film 341, the insulating film 341 covering an inner surface of the trench penetrating the front and back surfaces of the semiconductor substrate 40; and a light shielding portion 342, the light shielding portion 342 being embedded in a trench formed by the insulating film 341.

For example, the insulating film 341 can be formed using an insulating material such as silicon oxide (SiO2). On the other hand, for example, tungsten (W), aluminum (Al), or the like can be used for the light shielding portion 342.

In this way, by providing the light shielding portion 342 inside the pixel separation portion 34, the adjacent unit pixels 20 can be optically separated better, so that a depth frame with higher accuracy can be generated.

Note that the configuration of the pixel separation section 34 can also be applied to the element separation section 32.

1.8.4 example of fourth cross-section structure

Fig. 26 is a sectional view taken along line I-I showing a sectional structure example of a unit pixel according to a fourth sectional structure example of the first embodiment, and fig. 27 is a sectional view taken along line II-II showing a sectional structure example of a unit pixel according to the fourth sectional structure example of the first embodiment.

As can be seen by comparing fig. 24 and 25 with fig. 26 and 27, the unit pixel 20 according to the fourth cross-sectional structural example has a cross-sectional structure similar to that of the unit pixel 20 according to the third cross-sectional structural example in which the FFTI-type pixel separation section 34 is replaced with the RDTI-type pixel separation section 35.

The RDTI-type pixel separation section 35 includes, for example: an insulating film 351 covering an inner surface of a trench dug from the element formation surface of the semiconductor substrate 40 to a depth that does not penetrate the semiconductor substrate 40; and a light shielding portion 352 embedded in the trench formed by the insulating film 351.

For example, the insulating film 351 can be formed using an insulating material such as silicon oxide (SiO2). On the other hand, for example, tungsten (W), aluminum (Al), or the like can be used for the light shielding portion 352.

Note that the configuration of the pixel separation section 35 can also be applied to the element separation section 32.

1.9 FD shared layout

Next, whether the floating diffusion region 27 (FD) can be shared will be described for each variation of the arrangement of the readout circuits. Note that, in the following description, the H direction denotes the row direction in the matrix arrangement of the unit pixels 20, and the V direction denotes the column direction. In addition, in the drawings referred to in the following description, the readout circuits A and B or the readout circuits C and D forming a pair for acquiring one subframe are surrounded by solid lines and/or broken lines. The areas surrounded by solid lines indicate cases where FD sharing cannot be achieved, and the areas surrounded by broken lines indicate cases where FD sharing can be achieved.

Further, in the first to sixteenth variations described below, the pixel region 70 of each unit pixel 20 is divided into four 2 × 2 divisional areas 71 to 74. In each of the divisional areas 71 to 74, one photodiode 21 (not shown) and one OFG transistor 22 (not shown) are arranged in addition to the paired two readout circuits A and B or C and D.
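For quick reference, the following sketch (in Python form only for compactness) transcribes, for each of the sixteen variations described below, whether FD sharing is possible in the H and V directions; the entries merely restate the conclusions given in the respective subsections.

    # FD sharing possibility per variation, as (H direction, V direction).
    FD_SHARING = {
        1: (False, False),  2: (False, True),   3: (True, False),  4: (True, True),
        5: (False, False),  6: (True, False),   7: (True, False),  8: (True, True),
        9: (False, False), 10: (False, False), 11: (True, False), 12: (False, False),
        13: (False, False), 14: (False, False), 15: (False, True), 16: (False, False),
    }
    # Variations 10 and 14 ensure point symmetry about the center of the pixel
    # region 70, and variations 11 and 15 ensure line symmetry about an axis
    # through the center, so these four can eliminate the memory characteristic
    # difference (variations 9, 12, 13, and 16 cannot).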

1.9.1 first variant

Fig. 28 is a schematic diagram showing a plan layout example of a memory according to a first variation of the first embodiment. As shown in fig. 28, in the first variation, in each of the divisional areas 71 to 74, the readout circuit A for detecting the light amount Q0 (or Q90) of a component having a phase angle α of 0° (or 90°) is arranged on the left side, and the readout circuit B for detecting the light amount Q180 (or Q270) of a component having a phase angle α of 180° (or 270°) is arranged on the right side. That is, in the first variation, the unit pixels 920 of the basic configuration explained above with reference to fig. 3 are arranged in a matrix form.

In this layout, the symmetry of the memories 24A and 24B cannot be ensured in the H direction, that is, between the total of four readout circuits A and B arranged in the divisional areas 71 and 73 and the total of four readout circuits A and B arranged in the divisional areas 72 and 74; therefore, one floating diffusion region 27 cannot be shared.

Similarly, in the V direction, the symmetry of the memories 24A and 24B cannot be ensured between the total of four readout circuits A and B arranged in the divisional areas 71 and 72 and the total of four readout circuits A and B arranged in the divisional areas 73 and 74, and therefore one floating diffusion region 27 cannot be shared.

1.9.2 second variant

Fig. 29 is a schematic diagram showing a plan layout example of a memory according to a second variation of the first embodiment. As shown in fig. 29, in the second variation, in the divisional areas 71 and 73, the readout circuit A is disposed on the left side and the readout circuit B on the right side. On the other hand, in the divisional areas 72 and 74, the readout circuit A is disposed on the right side and the readout circuit B on the left side.

In this layout, since the symmetry of the memory in the H direction cannot be ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 73 and the total of four readout circuits A and B arranged in the divisional areas 72 and 74 cannot share one floating diffusion region 27.

On the other hand, in the V direction, since the symmetry of the memory is ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 72 and the total of four readout circuits A and B arranged in the divisional areas 73 and 74 can each share one floating diffusion region 27.

1.9.3 third variant

Fig. 30 is a schematic diagram showing a plan layout example of a memory according to a third variation of the first embodiment. As shown in fig. 30, in the third variation, in the divisional areas 71 and 72, the readout circuit A is disposed on the left side and the readout circuit B on the right side. On the other hand, in the divisional areas 73 and 74, the readout circuit A is disposed on the right side and the readout circuit B on the left side.

In this layout, since the symmetry of the memory in the H direction is ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 73 and the total of four readout circuits A and B arranged in the divisional areas 72 and 74 can each share one floating diffusion region 27.

On the other hand, since the symmetry of the memory in the V direction cannot be ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 72 and the total of four readout circuits A and B arranged in the divisional areas 73 and 74 cannot share one floating diffusion region 27.

1.9.4 fourth variant

Fig. 31 is a schematic diagram showing a plan layout example of a memory according to a fourth variation of the first embodiment. As shown in fig. 31, in the fourth variation, in the divisional areas 71 and 74, the readout circuit A is disposed on the left side and the readout circuit B on the right side. On the other hand, in the divisional areas 72 and 73, the readout circuit A is disposed on the right side and the readout circuit B on the left side.

In this layout, since the symmetry of the memory in the H direction is ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 73 and the total of four readout circuits A and B arranged in the divisional areas 72 and 74 can each share one floating diffusion region 27.

Similarly, in the V direction, since the symmetry of the memory is ensured, the total of four readout circuits A and B arranged in the divisional areas 71 and 72 and the total of four readout circuits A and B arranged in the divisional areas 73 and 74 can each share one floating diffusion region 27.

1.9.5 fifth variant

Fig. 32 is a schematic diagram showing a plan layout example of a memory according to a fifth variation of the first embodiment. As shown in fig. 32, in the fifth variation, in each of the divisional areas 71 to 74, the readout circuit A is disposed on the upper side and the readout circuit B on the lower side.

In this layout, similar to the first variation, since the symmetry of the memory cannot be ensured in both the H direction and the V direction, the floating diffusion region 27 cannot be shared.

1.9.6 sixth variant

Fig. 33 is a schematic diagram showing a plan layout example of a memory according to a sixth variation of the first embodiment. As shown in fig. 33, in the sixth variation, in the divisional areas 71 and 73, the readout circuit A is disposed on the upper side and the readout circuit B on the lower side. On the other hand, in the divisional areas 72 and 74, the readout circuit A is disposed on the lower side and the readout circuit B on the upper side.

In this layout, similar to the second variation, since the symmetry of the memory is ensured in the H direction, the floating diffusion region 27 can be shared. However, since the symmetry of the memory cannot be ensured in the V direction, the floating diffusion region 27 cannot be shared.

1.9.7 seventh variant

Fig. 34 is a schematic diagram showing a plan layout example of a memory according to a seventh variation of the first embodiment. As shown in fig. 34, in the seventh variation, in the divisional areas 71 and 72, the readout circuit A is disposed on the upper side and the readout circuit B on the lower side. On the other hand, in the divisional areas 73 and 74, the readout circuit A is disposed on the lower side and the readout circuit B on the upper side.

In this layout, similarly to the third variation, one floating diffusion region 27 can be shared because the symmetry of the memory is ensured in the H direction, but the floating diffusion region 27 cannot be shared because the symmetry of the memory is not ensured in the V direction.

1.9.8 eighth variant

Fig. 35 is a schematic diagram showing a plan layout example of a memory according to an eighth variation of the first embodiment. As shown in fig. 35, in the eighth variation, in the divisional areas 71 and 74, the readout circuits 20A1 and 20A4 are disposed on the upper side, and the readout circuits 20B1 and 20B4 on the lower side. On the other hand, in the divisional areas 72 and 73, the readout circuits 20A2 and 20A3 are disposed on the lower side, and the readout circuits 20B2 and 20B3 on the upper side.

In this layout, similarly to the fourth variation, since the symmetry of the memory is ensured in both the H direction and the V direction, the floating diffusion region 27 can be shared in each direction.

1.9.9 ninth variant

Fig. 36 is a schematic diagram showing a plan layout example of a memory according to a ninth variation of the first embodiment. As shown in fig. 36, in the ninth variation, in each of the divisional areas 71 to 74, the readout circuits for detecting the light amount Q0 or Q90 of a component having a phase angle α of 0° or 90° are arranged on the left side, and the readout circuits for detecting the light amount Q180 or Q270 of a component having a phase angle α of 180° or 270° are arranged on the right side.

In such a layout, neither the symmetry of the memory in the H direction and the V direction nor the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, can be ensured. Therefore, the difference in the characteristics of the memories, or the difference in their effects, cannot be eliminated.

1.9.10 tenth variant

Fig. 37 is a schematic diagram showing a plan layout example of a memory according to a tenth variation of the first embodiment. As shown in fig. 37, in the tenth variation, in the divisional areas 71 and 73, the readout circuits A and C are disposed on the left side, and the readout circuits B and D on the right side. On the other hand, in the divisional areas 72 and 74, the readout circuits A and C are disposed on the right side, and the readout circuits B and D on the left side.

In this layout, since the symmetry of the memory in the H direction and the V direction cannot be ensured, the floating diffusion region 27 cannot be shared in either direction. However, since the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, can be ensured, the difference in the characteristics of the memories can be eliminated.

1.9.11 eleventh variant

Fig. 38 is a schematic diagram showing a plan layout example of a memory according to an eleventh variation of the first embodiment. As shown in fig. 38, in the eleventh variation, in the divisional areas 71 and 72, the readout circuits A and C are disposed on the left side, and the readout circuits B and D on the right side. On the other hand, in the divisional areas 73 and 74, the readout circuits A and C are disposed on the right side, and the readout circuits B and D on the left side.

In this layout, the floating diffusion region 27 can be shared in the H direction because the symmetry of the memory is ensured there, but cannot be shared in the V direction because the symmetry of the memory cannot be ensured.

Note that since the symmetry of the memory with the straight line passing through the center of the pixel region 70 as an axis is ensured, the difference in the characteristics of the memory can be eliminated.

1.9.12 twelfth variant

Fig. 39 is a schematic diagram showing a plan layout example of a memory according to a twelfth variation of the first embodiment. As shown in fig. 39, in the twelfth variation, in the divisional areas 71 and 74, the readout circuit A is disposed on the left side and the readout circuit B on the right side. On the other hand, in the divisional areas 72 and 73, the readout circuit C is disposed on the right side and the readout circuit D on the left side.

In this layout, neither the symmetry of the memory in the H direction and the V direction nor the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, can be ensured. Therefore, the difference in the characteristics of the memories, or the difference in their effects, cannot be eliminated.

1.9.13 thirteenth variant

Fig. 40 is a schematic diagram showing a plan layout example of a memory according to a thirteenth variation of the first embodiment. As shown in fig. 40, in the thirteenth variation, in each of the divisional areas 71 to 74, the readout circuits A and C are disposed on the upper side, and the readout circuits B and D on the lower side.

In this layout, similarly to the ninth variation, neither the symmetry of the memory in the H direction and the V direction nor the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, can be ensured. Therefore, the difference in the characteristics of the memories, or the difference in their effects, cannot be eliminated.

1.9.14 fourteenth variant

Fig. 41 is a diagram showing a plan layout example of a memory according to a fourteenth variation of the first embodiment. As shown in fig. 41, in the fourteenth variation, in the divisional areas 71 and 73, the readout circuits A and C are disposed on the upper side, and the readout circuits B and D on the lower side. On the other hand, in the divisional areas 72 and 74, the readout circuits A and C are disposed on the lower side, and the readout circuits B and D on the upper side.

In this layout, similarly to the tenth variation, the symmetry of the memory in the H direction and the V direction is not ensured, so the floating diffusion region 27 cannot be shared in either direction. However, since the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, is ensured, the difference in the characteristics of the memories can be eliminated.

1.9.15 fifteenth variant

Fig. 42 is a schematic diagram showing a plan layout example of a memory according to a fifteenth variation of the first embodiment. As shown in fig. 42, in the fifteenth variation, in the divisional areas 71 and 73, the readout circuits A and C are disposed on the upper side, and the readout circuits B and D on the lower side. On the other hand, in the divisional areas 72 and 74, the readout circuits A and C are disposed on the lower side, and the readout circuits B and D on the upper side.

In this layout, in contrast to the eleventh variation, the floating diffusion region 27 cannot be shared in the H direction because the symmetry of the memory cannot be ensured there, but can be shared in the V direction because the symmetry of the memory can be ensured.

Note that since the symmetry of the memory with the straight line passing through the center of the pixel region 70 as an axis can be ensured, the difference in the characteristics of the memory can be eliminated.

1.9.16 sixteenth variant

Fig. 43 is a schematic diagram showing a plan layout example of a memory according to a sixteenth variation of the first embodiment. As shown in fig. 43, in the sixteenth variation, in the divisional areas 71 and 74, the readout circuit A is disposed on the upper side and the readout circuit B on the lower side. On the other hand, in the divisional areas 72 and 73, the readout circuit C is disposed on the lower side and the readout circuit D on the upper side.

In this layout, similarly to the twelfth variation, neither the symmetry of the memory in the H direction and the V direction nor the symmetry of the memory with respect to the center of the pixel region 70, or with respect to a straight line passing through the center as an axis, can be ensured. Therefore, the difference in the characteristics of the memories, or the difference in their effects, cannot be eliminated.

1.10 Elimination of characteristic Difference

Next, elimination of characteristic differences according to the present embodiment will be explained by way of example.

Note that, in this section, the first variation shown in fig. 28 or the fifth variation shown in fig. 32 (in which FD sharing in the H direction and the V direction cannot be achieved) is referred to as a comparative example, and the fourth variation shown in fig. 31 or the eighth variation shown in fig. 35 (in which FD sharing in the H direction and the V direction can be achieved) is referred to as an example for explaining the effect of characteristic difference elimination according to the present embodiment.

Further, in this specification, a comparative example will be described by applying the unit pixel 920 described with reference to fig. 3 and 4, and the present embodiment will be described by applying the unit pixel 20-1 according to the first configuration example described with reference to fig. 7 and 8.

Fig. 44 is a diagram for explaining a difference in the accumulated charge amount of each memory generated in the comparative example. Fig. 45 is a diagram for explaining an effect of eliminating the characteristic difference of the respective memories according to the first embodiment.

As shown in fig. 44, in the comparative example, the floating diffusion region 27 cannot be shared in either the H direction or the V direction (see fig. 28 or fig. 32). Therefore, the charge 81A accumulated in each floating diffusion region 27A and the charge 81B accumulated in each floating diffusion region 27B are each transferred from only one memory 24A or 24B, and the effect of eliminating the characteristic difference by accumulating, in a common floating diffusion region 27, the charges read out from a plurality of memories whose symmetry is ensured cannot be obtained.

Since the two memories 24A and 24B are arranged symmetrically with respect to the optical center of the on-chip lens 64 in the unit pixel 920M belonging to the area having a low image height (i.e., the area near the center of the pixel array section 101), a large characteristic difference is not exhibited.

On the other hand, in a region where the optical axis of incident light is greatly inclined and the image height is high, that is, in the unit pixels 920UL, 920UR, 920LL, and 920LR belonging to the peripheral region of the pixel array section 101, the two memories 24A and 24B are arranged to be greatly eccentric with respect to the optical center of the on-chip lens 64 due to pupil correction, and thus a large characteristic difference occurs.

Therefore, as shown in fig. 45, by adopting a configuration in which charges are transferred from two memories whose symmetry is ensured (corresponding to the memories 24A1 and 24A2 or the memories 24B1 and 24B2) to the shared floating diffusion region 27, it is possible to reduce the difference, caused by the characteristic difference, in the amount of charge accumulated in the floating diffusion region 27.

For example, in the unit pixel 20UL (two blocks surrounded by a broken line) sharing one floating diffusion region 27 in the upper left region of the pixel array section 101, when the light amount Q0 (or Q90) of a component having the phase angle α of 0° (or 90°) is detected, the charges A81 and A82 accumulated in the two memories 24A1 and 24A2 whose symmetry is ensured are transferred to the shared floating diffusion region 27, and when the light amount Q180 (or Q270) of a component having the phase angle α of 180° (or 270°) is detected, the charges B81 and B82 accumulated in the two memories 24B1 and 24B2 whose symmetry is ensured are transferred to the shared floating diffusion region 27.

Similarly, in the unit pixel 20LL sharing one floating diffusion region 27 in the lower left area of the pixel array section 101, the unit pixel 20UR sharing one floating diffusion region 27 in the upper right area, the unit pixel 20LR sharing one floating diffusion region 27 in the lower right area, and the unit pixel 20M sharing one floating diffusion region 27 in the central area of the pixel array section 101, the charges A81 and A82 or the charges B81 and B82 accumulated in the two memories 24A1 and 24A2 or 24B1 and 24B2 whose symmetry is ensured are transferred to the shared floating diffusion region 27.

As a result, since the difference in the charge accumulation amount caused by the characteristic difference of the memory is eliminated in the floating diffusion region 27, a sub-frame with higher accuracy can be generated. As a result, a depth frame with higher accuracy can be obtained.

1.11 example of read-out operation of a range image (depth frame)

Next, a readout operation of the range image (depth frame) according to the present embodiment will be explained by way of example.

Fig. 46 is a timing chart showing a readout operation of a depth frame in the case of using a unit pixel not including the FD sharing structure according to the first embodiment. Note that the unit pixel according to the first embodiment which does not include the FD sharing structure may be, for example, the unit pixel 920 described above with reference to fig. 3.

On the other hand, fig. 47 is a timing chart showing a readout operation of a depth frame in the case of using the unit pixels having the FD sharing structure (for example, the unit pixels according to the first to third configuration examples described above) according to the first embodiment. Further, fig. 48 is a timing chart showing a readout operation of a depth frame in the case of using a unit pixel having an FD sharing structure (for example, the unit pixel according to the fourth configuration example described above) according to the first embodiment.

As shown in fig. 46 to 48, the readout operation for reading out one sub-frame includes resetting of each unit pixel 20, accumulation of the electric charges generated by photoelectric conversion in each unit pixel 20 into the memories 24A and 24B, readout of the electric charges accumulated in the memories 24A and 24B, and a dead time period at the time of switching the phase. Note that the phase here is the phase of the pulse period used for distributing the electric charges generated in the photodiode 21 between the memory 24A and the memory 24B, referenced to the pulse period of the irradiation light L1, and the phase switching is the operation of switching this phase (corresponding to the phase angle α).

Further, as shown in fig. 46, in the case where the unit pixel 20 does not have the FD sharing structure, four subframes must be acquired in order to eliminate the characteristic difference, as described above: subframe #1 of 0°/180°, subframe #2 of 90°/270°, subframe #3 of 180°/0°, and subframe #4 of 270°/90°. Thus, the time required to acquire one depth frame is approximately equal to the time required to acquire four subframes (the sketch below illustrates why the inverted subframes cancel the characteristic difference).
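To make the cancellation concrete, the following sketch models the two taps with a hypothetical gain mismatch and shows how combining the normal subframe (#1) with its inverted counterpart (#3) removes the mismatch. The gain values and variable names are illustrative assumptions, not values from this disclosure.

```python
# Toy model of tap characteristic difference: tap A and tap B convert
# charge with slightly different gains. Acquiring both the normal
# (0°/180°) and inverted (180°/0°) subframes and averaging the two
# differences cancels the mismatch.
GAIN_A, GAIN_B = 1.00, 0.95            # hypothetical tap gains
Q0_TRUE, Q180_TRUE = 100.0, 40.0       # true light amounts (arbitrary units)

# Subframe #1 (0°/180°): tap A integrates Q0, tap B integrates Q180.
a1, b1 = GAIN_A * Q0_TRUE, GAIN_B * Q180_TRUE
# Subframe #3 (180°/0°): the roles are swapped, mirroring the mismatch.
a3, b3 = GAIN_A * Q180_TRUE, GAIN_B * Q0_TRUE

naive = a1 - b1                          # 62.0: biased by the gain mismatch
combined = ((a1 - b1) + (b3 - a3)) / 2   # 58.5 = (GAIN_A+GAIN_B)/2 * (Q0-Q180)
print(naive, combined)
```

Because the mismatch term appears with the opposite sign in the inverted subframe, the average depends only on the mean gain, which is why the inverted subframes #3 and #4 are needed when the taps are not matched.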

On the other hand, in the first to third structural examples in which the unit pixel 20 has the FD sharing structure, since the difference in the accumulated charge amount based on the characteristic difference is reduced in the floating diffusion region 27, it is not necessary to acquire the inverted subframes #3 and #4, as shown in fig. 47. Therefore, the time required to acquire one depth frame is the time required to acquire two subframes, i.e., half the time shown in fig. 46.

Further, as shown in fig. 48, in the fourth structural example, the unit pixel 20 has the FD sharing structure, and the light amounts Q0, Q90, Q180, and Q270 of the components having phase angles α of 0°, 90°, 180°, and 270° can be read out in one readout operation. Therefore, the time required to acquire one depth frame is equal to the time required to acquire one subframe, i.e., 1/4 of the time shown in fig. 46.

As described above, according to the present embodiment, a high-quality range image (depth frame) can be obtained in a short time.

1.12 Drive pulse examples

Next, the drive pulses used when the electric charge generated in the photodiode 21 is distributed to each memory will be explained using several examples. In the following description, it is assumed that the distance from the light emitting unit 13 and the light receiving unit 14 to the object 90 is 1 m (meter), and that the distance (2 m) from the light emitting unit 13 to the light receiving unit 14 via the object 90 corresponds to one pulse period of the irradiation light L1 emitted from the light emitting unit 13. Further, in the drawings used in the following description, the hatched areas superimposed on the drive pulses VGA to VGD represent examples of the amount of electric charge accumulated in the memory to which each drive pulse is applied.
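As a quick sanity check of this assumption, the modulation frequency implied by "one pulse period = 2 m round trip" can be computed directly; the only external constant is the speed of light.

```python
# One pulse period of the irradiation light L1 corresponds to a 2 m
# round trip (light emitting unit 13 -> object 90 at 1 m -> light
# receiving unit 14).
C = 299_792_458.0            # speed of light [m/s]
ROUND_TRIP = 2.0             # [m]

period = ROUND_TRIP / C      # one pulse period [s]
freq = 1.0 / period          # modulation frequency [Hz]
print(f"pulse period ≈ {period * 1e9:.2f} ns")   # ≈ 6.67 ns
print(f"modulation f ≈ {freq / 1e6:.0f} MHz")    # ≈ 150 MHz
```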

1.12.1 First drive pulse example

First, the drive pulses of the unit pixels 20 exemplified as the second to fourth variations and the sixth to eighth variations will be explained as the first drive pulse example. Fig. 49 is a waveform diagram for explaining the first drive pulse example of the first embodiment.

In the second to fourth variations and the sixth to eighth variations shown in fig. 29 to 31 and fig. 33 to 35, the memories 24A1 to 24A4 in the readout circuits 20A1 to 20A4 are connected to a common drive line (whose reference numeral is also VGA) to which the drive pulse VGA is applied, and the memories 24B1 to 24B4 in the readout circuits 20B1 to 20B4 are connected to a common drive line (whose reference numeral is also VGB) to which the drive pulse VGB is applied.

As shown in fig. 49, the drive pulse VGA for acquiring the light amount Q0 of the component having a phase angle α of 0° with respect to the irradiation light L1 may be a pulse having the same frequency and the same phase as the irradiation light L1 emitted from the light emitting unit 13 (i.e., as the driving pulse for driving the light emitting unit 13).

On the other hand, the drive pulse VGB for acquiring the light amount Q180 of the component having a phase angle α of 180° with respect to the irradiation light L1 may be a pulse having the same frequency as the irradiation light L1 emitted from the light emitting unit 13 (i.e., as the driving pulse for driving the light emitting unit 13) and shifted in phase by 180°.

As shown in fig. 49, in the present embodiment, when one sub-frame is acquired, the operation of distributing the electric charge generated in the photodiode 21 to the memories 24A1 to 24A4 and 24B1 to 24B4 a plurality of times (4 times in fig. 49) is itself performed a plurality of times (2 times in fig. 49: times T10 to T11 and times T20 to T21). In this specification, each period in which the electric charges generated in the photodiode 21 are distributed a plurality of times (4 times in fig. 49) to the memories 24A1 to 24A4 and 24B1 to 24B4 (for example, times T10 to T11 and times T20 to T21) is referred to as a charge transfer period.

In the example shown in fig. 49, first, after the charge transfer period (times T10 to T11) in which the electric charges generated in the photodiode 21 are distributed to the memories 24A1 to 24A4 and the memories 24B1 to 24B4, the non-light-emission period (times T11 to T20) of the irradiation light L1 passes. Next, in a state where the phases of the irradiation light L1 and of the drive pulses VGA and VGB are inverted, after the charge transfer period (times T20 to T21) in which the electric charges generated in the photodiode 21 are distributed to the memories 24A1 to 24A4 and the memories 24B1 to 24B4, the non-light-emission period (times T21 to T30) of the irradiation light L1 passes.

By performing the charge transfer to each memory in such a flow, the charge of each component having a phase angle α (= 0° and 180°, or 90° and 270°) with respect to the irradiation light L1 can be accumulated in the corresponding memory. Note that the reason why the phases of the irradiation light L1 and the drive pulses VGA and VGB are inverted between different charge transfer periods will be described later in "Encoding of accumulation periods".

Further, as described above, in the present embodiment, undistributed periods in which no sub-frame is acquired (times T11 to T20 and times T21 to T30) are provided between the periods in which a sub-frame is acquired (for example, times T10 to T11 and times T20 to T21).

In the undistributed periods (times T11 to T20 and times T21 to T30), the drive pulse OFG applied to the gate of the OFG transistor 22 (221 and 222, or 221 to 224) is set to a high level. As a result, the electric charge generated in the photodiode 21 during the undistributed periods is discharged via the OFG transistor 22.

1.12.1.1 Modification

Note that the drive pulse according to the first drive pulse example shown in fig. 49 is not limited to the connection relationships shown in fig. 29 to 31 and fig. 33 to 35 as the second to fourth variations and the sixth to eighth variations, and can also be applied to the other connection relationships shown in fig. 50 to 55.

In the examples shown in fig. 50 to 55, two drive lines VGA1 and VGA2 are provided as drive lines to which the drive pulse VGA is applied, and two drive lines VGB1 and VGB2 are provided as drive lines to which the drive pulse VGB is applied.

Each of the memories 24A1 to 24A4 and 24B1 to 24B4 is connected to one of the drive lines VGA1, VGA2, VGB1, and VGB2 so that memories sharing a floating diffusion region 27 are connected to different drive lines. However, in the fourth and eighth variations (see fig. 49 and 55), in which all eight memories 24A1 to 24A4 and 24B1 to 24B4 share one floating diffusion region 27, each drive line connects two memories without being limited by this condition.

Even in such a connection relationship, a sub-frame can be acquired by applying the first drive pulse example as explained with reference to fig. 49.

1.12.2 Second drive pulse example

Next, the drive pulses of the unit pixels 20 exemplified as the tenth to twelfth variations and the fourteenth to sixteenth variations will be explained as the second drive pulse example. Fig. 56 is a waveform diagram for explaining the second drive pulse example of the first embodiment.

In the tenth to twelfth variations and the fourteenth to sixteenth variations shown in fig. 37 to 39 and fig. 41 to 43, the memories 24A1 and 24A2 are connected to the common drive line VGA to which the drive pulse VGA is applied, the memories 24B1 and 24B2 are connected to the common drive line VGB to which the drive pulse VGB is applied, the memories 24C1 and 24C2 are connected to the common drive line VGC to which the drive pulse VGC is applied, and the memories 24D1 and 24D2 are connected to the common drive line VGD to which the drive pulse VGD is applied.

As shown in fig. 56, the drive pulses VGA and VGB may be similar to those in the first drive pulse example.

The drive pulse VGC for acquiring the light amount Q90 of the component having a phase angle α of 90° with respect to the irradiation light L1 may be a pulse having the same frequency as the irradiation light L1 emitted from the light emitting unit 13 (i.e., as the driving pulse for driving the light emitting unit 13) and shifted in phase by 90°.

Further, the drive pulse VGD for acquiring the light amount Q270 of the component having a phase angle α of 270° with respect to the irradiation light L1 may be a pulse having the same frequency as the irradiation light L1 emitted from the light emitting unit 13 (i.e., as the driving pulse for driving the light emitting unit 13) and shifted in phase by 270°.

As shown in fig. 56, similarly to the charge transfer explained with reference to fig. 49 in the first drive pulse example, the charge transfer to the respective memories using the drive pulses VGA to VGD may be an operation of alternately repeating charge distribution divided into a plurality of times (times T10 to T11 and times T20 to T21) and charge discharge (times T11 to T20 and times T21 to T30).

By performing the charge transfer to each memory in such a flow, the charge of each component having a phase angle α (= 0°, 90°, 180°, and 270°) with respect to the irradiation light L1 can be accumulated in the corresponding memory.
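This disclosure does not prescribe the downstream arithmetic, but the four components Q0, Q90, Q180, and Q270 are conventionally combined with the standard 4-phase indirect-ToF relations; the sketch below is that common formulation, offered as an illustration only.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def depth_from_4_phases(q0, q90, q180, q270, f_mod):
    """Common 4-phase indirect-ToF estimate (not quoted from this
    disclosure). q0..q270 are the accumulated charge amounts for phase
    angles 0°, 90°, 180°, 270°; f_mod is the modulation frequency [Hz]."""
    # The differences cancel any DC component such as background light.
    i = q0 - q180
    q = q90 - q270
    phase = math.atan2(q, i) % (2 * math.pi)  # phase delay of reflected light
    return C * phase / (4 * math.pi * f_mod)  # one-way distance [m]

# With f_mod = 150 MHz the unambiguous range is 1 m (a 2 m round trip
# equals one pulse period); a phase delay of pi/2 corresponds to 0.25 m.
print(depth_from_4_phases(1.0, 2.0, 1.0, 0.0, 150e6))  # ≈ 0.25
```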

1.13 Encoding of accumulation periods

Next, the encoding of the accumulation period will be explained in detail with reference to the drawings.

The ToF method measures the distance to an object by receiving the reflected light of the irradiation light emitted from the light emitting unit included in the ToF sensor. Therefore, when light other than this reflected light (hereinafter referred to as interference light) is incident on the light receiving unit, it appears as noise and hinders accurate distance measurement.

1.13.1 Noise caused by interference

Here, noise generated by the interference light will be exemplified. In the following description, similarly to the description of the "driving pulse example", it is assumed that the distance from the light emitting unit 13 and the light receiving unit 14 to the object 90 is 1m (meter), and the distance (2m) from the light emitting unit 13 to the light receiving unit 14 via the object 90 corresponds to one pulse period of the irradiation light L1 emitted from the light emitting unit 13. Further, in the drawings used in the following description, hatched areas superimposed on the driving pulses VGA and VGB represent examples of the amount of electric charge accumulated in the memory to which the driving pulses are applied. In addition, in this specification, noise generated by interference will be described by taking as an example the unit pixel 20 exemplified as the second to fourth modifications and the sixth to eighth modifications.

1.13.1.1 Interference caused by background light

One of the interferences that the ToF sensor 1 receives is interference caused by incidence of background light (also referred to as interference light) such as sunlight or illumination light on the light receiving unit 14. Fig. 57 is a diagram for explaining noise generated by background light as interference light.

As shown in (a) of fig. 57, when considering the span over which one depth frame is acquired, the background light can generally be regarded as light having a constant intensity (i.e., light of a DC component). In this case, as shown in (b) of fig. 57, the electric charges accumulated in the memories 24A1 to 24A4 include, in addition to the electric charges 91A generated by photoelectrically converting the reflected light L2 (hereinafter referred to as the electric charges of the reflected light L2), the electric charges 92A generated by photoelectrically converting the background light (hereinafter referred to as the electric charges of the background light). On the other hand, only the electric charges 92B of the background light are accumulated in the memories 24B1 to 24B4, in which the component of the reflected light L2 is not accumulated.

Here, as described above, since the background light is light of a DC component, the electric charges 92A in the memories 24A1 to 24A4 and the electric charges 92B in the memories 24B1 to 24B4 have the same charge amount. Therefore, as shown in (b) of fig. 57, by subtracting the charge amount of the charges 92B in the memories 24B1 to 24B4 from the total charge amount (the total of the charges 91A and 92A) in the memories 24A1 to 24A4, the charge amount of the charges 91A of only the reflected light L2 can be obtained; that is, the noise caused by the interference light (background light) can be eliminated.
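A short numeric check of this subtraction, with made-up charge amounts standing in for the charges 91A, 92A, and 92B:

```python
# Hypothetical charge amounts (arbitrary units). Because the background
# light is a DC component, it contributes the same amount to both
# memory groups (92A == 92B).
charge_91a = 50.0                 # reflected light L2, tap-A memories only
charge_background = 20.0          # DC background, accumulates in both groups

total_a = charge_91a + charge_background   # memories 24A1 to 24A4
total_b = charge_background                # memories 24B1 to 24B4
print(total_a - total_b)                   # 50.0 -> only 91A remains
```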

1.13.1.2 Interference from another ToF sensor

Another example of interference to which the ToF sensor 1 is subjected is interference caused by the reflected light of the irradiation light emitted from the light emitting unit of another ToF sensor being incident on the light receiving unit 14 of the ToF sensor 1 (hereinafter referred to as interference from another ToF sensor).

In the case of receiving interference from another ToF sensor, whether the interference appears as noise depends on whether the reflected light (interference light) from the other ToF sensor is incident on the light receiving unit 14 within a period in which charge transfer to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is performed (hereinafter referred to as an accumulation period). Note that in the following description, a period in which charge transfer to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is not performed is referred to as a non-accumulation period.

1.13.1.2.1 When reflected light from another ToF sensor is incident during a non-accumulation period

Fig. 58 is a diagram for explaining a case where reflected light (interference light) from another ToF sensor is incident in the non-accumulation period.

As shown in (a) of fig. 58, in the case where the interference light is incident on the light receiving unit 14 in the non-accumulation period, the electric charge generated in the photodiode 21 by photoelectrically converting the interference light is not transferred to the memories 24A1 to 24A4 and the memories 24B1 to 24B4, but is discharged via the OFG transistors 221 and 222 or the OFG transistors 221 to 224.

Therefore, as shown in (b) of fig. 58, only the electric charges 91A of the reflected light L2 are accumulated in the memories 24A1 to 24A4, and no electric charges are accumulated in the memories 24B1 to 24B4, in which the component of the reflected light L2 is not accumulated.

Therefore, as shown in (b) of fig. 58, when the charge amount of the charges in the memories 24B1 to 24B4 is subtracted from the charge amount of the charges 91A in the memories 24A1 to 24A4, the result is the charge amount of the charges 91A of the reflected light L2. This means that the interference light does not generate noise.

1.13.1.2.2 When reflected light from another ToF sensor is incident during an accumulation period

Fig. 59 is a diagram for explaining a case where reflected light (interference light) from another ToF sensor is incident within an accumulation period. Fig. 59 illustrates a case where the pulse period of the irradiation light L1 coincides with the pulse period of the interference light and the phase of the irradiation light L1 coincides with the phase of the interference light.

As shown in (a) of fig. 59, in the case where the interference light is incident on the light receiving unit 14 within the accumulation period, the electric charges generated in the photodiode 21 by photoelectrically converting both the reflected light L2 and the interference light are transferred to the memories 24A1 to 24A4 and the memories 24B1 to 24B4.

In this case, as shown in (b) of fig. 59, the electric charges accumulated in the memories 24A1 to 24A4 include, in addition to the electric charges 91A of the reflected light L2, the electric charges 93A generated by photoelectrically converting the interference light (hereinafter referred to as the electric charges of the interference light). On the other hand, no electric charge is accumulated in the memories 24B1 to 24B4, in which the component of the reflected light L2 is not accumulated.

Therefore, as shown in (b) of fig. 59, when the charge amount of the charges in the memories 24B1 to 24B4 is subtracted from the charge amount of the charges (the charges 91A and 93A) in the memories 24A1 to 24A4, the result is the total charge amount of the charges 91A of the reflected light L2 and the charges 93A of the interference light. This means that, when reflected light from another ToF sensor is incident within the accumulation period, the noise caused by the interference light cannot be eliminated unless the charge amount of the interference light accumulated in the memories 24A1 to 24A4 matches the charge amount of the interference light accumulated in the memories 24B1 to 24B4.

1.13.2 Cancellation of interference noise

As described above, in the ranging sensor of the indirect ToF method, noise may be generated due to incidence of interference light, and thus ranging accuracy may be reduced.

Therefore, in the present embodiment, within the period in which one sub-frame is acquired, the phase of the irradiation light L1 (and of the drive pulses VGA and VGB) in some accumulation periods is inverted with respect to the phase in the other accumulation periods. In this specification, this is referred to as encoding of the accumulation periods.

For example, the encoding of the accumulation periods can be managed by associating one accumulation period with one bit. In this case, for example, the phase of the irradiation light L1 (and of the drive pulses VGA and VGB) is not inverted in an accumulation period associated with the bit '0' (hereinafter referred to as code 0), and is inverted in an accumulation period associated with the bit '1' (hereinafter referred to as code 1).

Specifically, in the case where eight accumulation periods are performed to acquire one subframe, as a code for encoding the accumulation periods, 8-bit codes such as '01010101' and '00101011' can be used. The code encoding the accumulation period is preferably a code having a duty ratio of 50:50 between code 0 and code 1.

Note that as the code string for encoding the accumulation periods, for example, a pseudo-random number generated by a pseudo-random number generator or the like, a code string prepared in advance, or the like can be used.
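One way to produce such a code string, sketched below, is to pseudo-randomly shuffle an exactly balanced bit pattern, which guarantees the preferred 50:50 duty ratio. The generator and function name are assumptions for illustration; this disclosure only requires a pseudo-random or pre-prepared code string.

```python
import random

def balanced_accumulation_code(n_periods, seed=None):
    """Pseudo-random code string with an exact 50:50 duty ratio between
    code 0 (phase kept) and code 1 (phase inverted)."""
    assert n_periods % 2 == 0, "use an even number of accumulation periods"
    bits = ['0'] * (n_periods // 2) + ['1'] * (n_periods // 2)
    random.Random(seed).shuffle(bits)
    return ''.join(bits)

# Example for eight accumulation periods per subframe:
print(balanced_accumulation_code(8, seed=1))  # e.g. '00101011'
```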

1.13.2.1 Example of eliminating noise by encoding the accumulation periods

Here, the elimination of noise by encoding the accumulation period will be exemplified. In the following description, similarly to the description of the "driving pulse example", it is assumed that the distance from the light emitting unit 13 and the light receiving unit 14 to the object 90 is 1m (meter), and the distance (2m) from the light emitting unit 13 to the light receiving unit 14 via the object 90 corresponds to one pulse period of the irradiation light L1 emitted from the light emitting unit 13. Further, in the drawings used in the following description, hatched areas superimposed on the driving pulses VGA and VGB represent examples of the amount of electric charge accumulated in the memory to which the driving pulses are applied. In addition, in this specification, noise generated by interference will be described by taking as an example the unit pixel 20 exemplified as the second to fourth modifications and the sixth to eighth modifications. Here, however, it is assumed that the non-accumulation period is not set.

1.13.2.1.1 Case where the modulation frequency of the interference light from another ToF sensor is different from the modulation frequency of its own irradiation light

Fig. 60 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency of interference light from another ToF sensor is different from the modulation frequency of its own irradiation light. Note that fig. 60 shows a case where four accumulation periods are repeated when one subframe is acquired. Further, in fig. 60, the code for encoding four accumulation periods is set to '0101'.

As shown in (a) of fig. 60, in the case where the modulation frequency of the reflected light from the other ToF sensor is different from the modulation frequency of its own irradiation light L1, by encoding the four accumulation periods using a code having the 50:50 duty ratio, the electric charges generated by photoelectrically converting the interference light (reflected light) from the other ToF sensor can be distributed substantially uniformly between the memories 24A1 to 24A4 and the memories 24B1 to 24B4.

As a result, as shown in (B) of fig. 60, the charge amount of the electric charges 94A of the interference light included in the electric charges accumulated in the memories 24A1 to 24A4 and the charge amount of the electric charges 94B of the interference light included in the electric charges accumulated in the memories 24B1 to 24B4 become substantially equal.

Therefore, as shown in (b) of fig. 60, when the charge amount of the charges 94B in the memories 24B1 to 24B4 is subtracted from the charge amount of the charges (the charges 91A and 94A) in the memories 24A1 to 24A4, the result is substantially equal to the charge amount of the charges 91A of the reflected light L2. This means that the noise generated by the interference light is reduced to a negligible level.

1.13.2.1.2 Case where the modulation frequency of the interference light from another ToF sensor is the same as the modulation frequency of its own irradiation light

Fig. 61 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency of the interference light from another ToF sensor is the same as the modulation frequency of its own irradiation light. Note that, similarly to fig. 60, fig. 61 shows a case where four accumulation periods are repeated when one subframe is acquired, and the code for encoding the four accumulation periods is '0101'.

As shown in (a) of fig. 61, in the case where the modulation frequency of the reflected light from another ToF sensor is the same as the modulation frequency of its own irradiation light L1, the four accumulation periods are encoded using a code having the 50:50 duty ratio. Then, among the electric charges generated by photoelectrically converting the interference light (reflected light) from the other ToF sensor, the total charge amount of the electric charges 94A0 transferred to the memories 24A1 to 24A4 in the accumulation periods of code 0 and the electric charges 94A1 transferred to the memories 24A1 to 24A4 in the accumulation periods of code 1 is equal to the total charge amount of the electric charges 94B0 transferred to the memories 24B1 to 24B4 in the accumulation periods of code 0 and the electric charges 94B1 transferred to the memories 24B1 to 24B4 in the accumulation periods of code 1.

Therefore, as shown in (b) of fig. 61, when the charge amount of the charges (the charges 94B0 and 94B1) in the memories 24B1 to 24B4 is subtracted from the charge amount of the charges (the charges 91A, 94A0, and 94A1) in the memories 24A1 to 24A4, the result is the charge amount of the charges 91A of the reflected light L2. This means that the noise generated by the interference light is eliminated.

1.13.2.1.3 Case where the modulation frequency and phase of the interference light from another ToF sensor are the same as those of its own irradiation light

Fig. 62 is a diagram for explaining noise cancellation according to the first embodiment in the case where the modulation frequency and phase of the interference light from another ToF sensor are the same as those of its own irradiation light. Note that, as in fig. 60 and fig. 61, fig. 62 shows a case where four accumulation periods are repeated when one subframe is acquired, and the code for encoding the four accumulation periods is '0101'.

As shown in (a) of fig. 62, in the case where the modulation frequency and phase of the reflected light from the other ToF sensor are the same as those of its own irradiation light L1, the electric charges generated by photoelectrically converting the interference light (reflected light) from the other ToF sensor are transferred to the memories 24A1 to 24A4 in the accumulation periods of code 0, and to the memories 24B1 to 24B4 in the accumulation periods of code 1.

Therefore, by encoding the four accumulation periods using a code having the 50:50 duty ratio, the electric charges generated by photoelectrically converting the interference light (reflected light) from the other ToF sensor can be distributed equally between the memories 24A1 to 24A4 and the memories 24B1 to 24B4.

As a result, as shown in (B) of fig. 62, the charge amount of the electric charges 94A of the interference light included in the electric charges accumulated in the memories 24A1 to 24A4 and the charge amount of the electric charges 94B of the interference light included in the electric charges accumulated in the memories 24B1 to 24B4 become substantially equal.

Therefore, as shown in (b) of fig. 62, when the charge amount of the charges 94B in the memories 24B1 to 24B4 is subtracted from the charge amount of the charges (the charges 91A and 94A) in the memories 24A1 to 24A4, the result is the charge amount of the charges 91A of the reflected light L2. This means that the noise generated by the interference light is eliminated.
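The three cases above can be condensed into a toy bookkeeping model, shown below: the own reflected light follows the tap inversion and therefore always lands in the tap-A memories, while an interferer with the same frequency and phase does not follow it, so a balanced code splits the interference charge equally between the two memory groups. All charge values and the period count are invented for illustration.

```python
def accumulate_with_code(code, q_signal=100.0, q_interf=80.0):
    """Toy model of the case in section 1.13.2.1.3: the interferer has
    the same modulation frequency and phase as the own irradiation
    light L1. Returns total charge in the tap-A and tap-B memories."""
    tap_a = tap_b = 0.0
    for bit in code:
        # Own reflected light: emission and taps invert together, so its
        # charge always accumulates in the tap-A memories.
        tap_a += q_signal
        # Interference light is not inverted: it lands in tap A during
        # code-0 periods and in tap B during code-1 periods.
        if bit == '0':
            tap_a += q_interf
        else:
            tap_b += q_interf
    return tap_a, tap_b

a, b = accumulate_with_code('0101')  # balanced code: interference cancels
print(a - b)                         # 400.0 = 4 periods x q_signal only
a, b = accumulate_with_code('0000')  # no encoding: interference remains
print(a - b)                         # 720.0 = 400.0 + 4 x q_interf
```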

1.13.3 Noise generated at phase switching

However, in the encoding of the accumulation periods described above, when no non-accumulation period is provided between the accumulation periods, the following phenomenon occurs: after the phase is switched by the encoding of the accumulation periods, a part of the reflected light L2 of the irradiation light L1 emitted before the phase switching is still incident on the light receiving unit 14. As a result, a part of the electric charges that should have been transferred to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is transferred to the memories 24B1 to 24B4 or the memories 24A1 to 24A4, respectively, so that the ranging accuracy may be lowered.

Fig. 63 is a waveform diagram showing a case where the ToF sensor and the object are in contact with each other, that is, the distance from the ToF sensor to the object is 0, and fig. 64 is a waveform diagram showing a case where the ToF sensor and the object are separated from each other (as an example, the distance from the ToF sensor to the object is a distance corresponding to one pulse period of the irradiation light). Note that both fig. 63 and 64 show a case where no non-accumulation period is set between accumulation periods.

As shown in (a) of fig. 63, in the case where the ToF sensor 1 is in contact with the object 90, all of the reflected light L2 of the irradiation light L1 emitted before the phase switching is incident on the light receiving unit 14 before the phase of the light receiving unit 14 is switched, that is, before the phases of the drive pulses VGA and VGB are switched according to the encoding of the accumulation periods. Therefore, no part of the electric charge that should be transferred to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is transferred to the wrong memories. As shown in (b) of fig. 63, the charge amount of the electric charges 96 obtained by subtracting the electric charges accumulated in the memories 24B1 to 24B4 from the electric charges 95A accumulated in the memories 24A1 to 24A4 is the true charge amount corresponding to the light amount of the reflected light L2.

On the other hand, as shown in (a) of fig. 64, in the case where the ToF sensor 1 and the object 90 are separated from each other, for example, in the case where the distance from the ToF sensor 1 to the object 90 corresponds to one pulse period of the irradiation light L1 (for example, 2 m), the last pulse of the reflected light L2 is incident on the light receiving unit 14 after the phases of the drive pulses VGA and VGB have been switched based on the encoding of the accumulation periods. Therefore, a part of the electric charges that should have been transferred to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is transferred to the memories 24B1 to 24B4 or the memories 24A1 to 24A4, respectively, and, as shown in (b) of fig. 64, the charge amount of the electric charges 96 obtained by subtracting the electric charges 95B accumulated in the memories 24B1 to 24B4 from the electric charges 95A accumulated in the memories 24A1 to 24A4 includes an error with respect to the true charge amount corresponding to the light amount of the reflected light L2.

1.13.3.1 Example of noise cancellation operation at phase switching (case of 2-tap type)

Therefore, in the present embodiment, as shown in (a) of fig. 65, a non-accumulation period is provided between the accumulation periods. In this non-accumulation period, a high-level drive pulse OFG is applied to the gates of the OFG transistors 221 to 224. As a result, the electric charges generated when a part of the reflected light L2 of the irradiation light L1 emitted before the phase switching is incident on the light receiving unit 14 after the phase switching by the encoding of the accumulation periods are discharged via the OFG transistors 221 to 224, and therefore the phenomenon in which a part of the electric charges that should have been transferred to the memories 24A1 to 24A4 or the memories 24B1 to 24B4 is transferred to the memories 24B1 to 24B4 or the memories 24A1 to 24A4, respectively, can be avoided. As a result, as shown in (b) of fig. 65, the charge amount of the electric charges 96 obtained by subtracting the electric charges accumulated in the memories 24B1 to 24B4 from the electric charges 95A accumulated in the memories 24A1 to 24A4 becomes the true charge amount corresponding to the light amount of the reflected light L2.

1.13.3.2 Modified example of noise cancellation operation at phase switching

Fig. 65 shows a case where the two OFG transistors 221 and 222 are always on in the non-accumulation period, but the present disclosure is not limited to such an operation. For example, as shown in (a) of fig. 66, the driving pulse OFG1 supplied to the gate of the OFG transistor 221 and the driving pulse OFG2 supplied to the gate of the OFG transistor 222 in the non-accumulation period may be pulses having the same period as the driving pulses VGA and VGB.

As a result, the vertical drive circuit 103 that supplies the drive pulses VGA, VGB, OFG1, and OFG2 can continue the same operation in the accumulation period and the non-accumulation period, and therefore the state of the voltage drop (IR drop) in each of the readout circuit A and the readout circuit B can be kept uniform. As a result, the noise generated at the time of phase switching is reduced, and a depth frame with higher accuracy can be obtained.

1.13.3.3 Modified example of noise cancellation operation at phase switching (case of multi-tap type with 3 or more taps)

Further, in the case of a multi-tap type in which 3 or more readout circuits are connected to one photodiode 21, the readout circuits other than the 2 readout circuits used for charge distribution may be used for resetting the photodiode 21 (discharging its electric charge). For example, the readout circuit 20C in fig. 15, or the readout circuits 20C and 20D in fig. 16, may be used for resetting the photodiode 21.

In this case, for example, as shown in (a) of fig. 67, in the non-accumulation period, a high-level drive pulse VGC (or VGC and VGD) is applied to the gate of the transfer gate transistor 23C (or 23C and 23D) of the readout circuit 20C (or 20C and 20D).

As a result, the electric charges generated in the photodiode 21 in the non-accumulation period can be effectively discharged, and thus a more accurate depth frame can be acquired.

1.14 Actions and effects

As described above, according to the present embodiment, since the electric charges stored in the memories are transferred to the common floating diffusion region 27 at the time of readout, the difference in accumulated charge amount due to the characteristic difference of the respective readout circuits can be reduced. As a result, a high-quality depth frame can be generated without acquiring inverted data, that is, at a high frame rate.

Further, according to the present embodiment, since the plurality of readout circuits share the configuration (the reset transistor 26, the amplifying transistor 28, the selection transistor 29, the vertical signal line VSL, the AD converter in the column processing circuit 104, and the like) downstream of the floating diffusion region 27, the characteristic difference caused by the downstream configuration can be eliminated, so that a higher-quality depth frame can be generated.

Further, according to the present embodiment, since a plurality of accumulation periods when one sub-frame is acquired are encoded, noise generated due to interference with other ToF sensors can be reduced, thereby obtaining a depth frame with higher accuracy.

Further, according to the present embodiment, a non-accumulation period is provided between accumulation periods, and the electric charges generated in the photodiode 21 in the non-accumulation period are discharged via the OFG transistors 221 and 222 or 221 to 224. Therefore, the noise generated at the time of phase switching can be reduced, and a more accurate depth frame can be acquired.

2. Second embodiment

Next, the second embodiment will be described in detail with reference to the drawings. In the following description, the same configurations and operations as those of the above-described embodiment are denoted by the same reference numerals, and repeated description thereof will be omitted.

In the first embodiment, the unit pixel 20 having the following configuration is illustrated: in this configuration, the electric charges generated in the photodiode 21 are temporarily accumulated in the memory, and then the electric charges in the memory are transferred to the shared floating diffusion region 27. On the other hand, in the second embodiment, a unit pixel configured to directly transfer the electric charges generated in the photodiode 21 to the floating diffusion region will be exemplified.

2.1 first constitutional example

Fig. 68 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a first configuration example of the second embodiment. As shown in fig. 68, the unit pixel 120-1 according to the first configuration example has a configuration similar to that of the unit pixel 20-5 according to the fifth configuration example explained with reference to fig. 14 in the first embodiment, in which the transfer gate transistors 23A and 23B and the memories 24A and 24B in the readout circuits 20A and 20B are omitted. Further, in the unit pixel 120-1, a separate reset transistor 26A or 26B, floating diffusion region 27A or 27B, amplifying transistor 28A or 28B, and selection transistor 29A or 29B are provided for the readout circuits 20A and 20B, respectively.

For example, the driving pulse supplied to the unit pixel 120-1 having such a circuit configuration may be similar to the driving pulse explained with reference to fig. 65 or fig. 66 in the first embodiment.

As a result, in encoding the accumulation period, the electric charge generated in the photodiode 21 in the non-accumulation period is discharged via the OFG transistors 221 and 222 or 221 to 224, and therefore, it is also possible to reduce noise generated at the time of phase switching and obtain a depth frame with higher accuracy.

2.2 second constitutional example

Fig. 69 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a second configuration example of the second embodiment. As shown in fig. 69, the unit pixel 120-2 according to the second configuration example has a configuration similar to that of the unit pixel 20-6 according to the sixth configuration example explained with reference to fig. 15 in the first embodiment, in which the transfer gate transistors 23A, 23B, and 23C and the memories 24A, 24B, and 24C in the readout circuits 20A, 20B, and 20C are omitted. Further, in the unit pixel 120-2, a separate reset transistor 26A, 26B, or 26C, floating diffusion region 27A, 27B, or 27C, amplification transistor 28A, 28B, or 28C, and selection transistor 29A, 29B, or 29C are provided for the readout circuits 20A, 20B, and 20C, respectively.

For example, the driving pulse supplied to the unit pixel 120-2 having such a circuit configuration may be similar to the driving pulse explained with reference to fig. 67 in the first embodiment.

As a result, the electric charges generated in the photodiode 21 in the non-accumulation period can be effectively discharged through the OFG transistor 22 and the readout circuit 20C, so that a more accurate depth frame can be obtained.

2.3 third constitutional example

Fig. 70 is a circuit diagram showing an example of a circuit configuration of a unit pixel according to a third configuration example of the second embodiment. As shown in fig. 70, the unit pixel 120-3 according to the third configuration example has a configuration similar to that of the unit pixel 20-7 according to the seventh configuration example explained with reference to fig. 16 in the first embodiment, in which the transfer gate transistors 23A, 23B, 23C, and 23D and the memories 24A, 24B, 24C, and 24D in the readout circuits 20A, 20B, 20C, and 20D are omitted. Further, in the unit pixel 120-3, a separate reset transistor 26A, 26B, 26C, or 26D, floating diffusion region 27A, 27B, 27C, or 27D, amplification transistor 28A, 28B, 28C, or 28D, and selection transistor 29A, 29B, 29C, or 29D are provided for the readout circuits 20A, 20B, 20C, and 20D, respectively.

For example, the driving pulse supplied to the unit pixel 120-3 having such a circuit configuration may be similar to the driving pulse explained with reference to fig. 67 in the first embodiment.

As a result, the electric charges generated in the photodiode 21 in the non-accumulation period can be effectively discharged through the OFG transistor 22 and the readout circuits 20C and 20D, so that a more accurate depth frame can be obtained.

Other configurations, operations, and effects may be similar to those of the above-described embodiment, and thus a detailed description is omitted here.

3. Configuration example of stacked solid-state imaging device to which the technology of the present disclosure can be applied

Fig. 71 is a diagram showing an outline of a configuration example of a non-stacked solid-state imaging device to which the technique according to the present disclosure can be applied. Fig. 72 and 73 are diagrams showing an outline of a configuration example of a stacked solid-state imaging device to which the technique according to the present disclosure can be applied.

Fig. 71 shows a schematic configuration example of a non-stacked solid-state image pickup device. As shown in fig. 71, a solid-state image pickup device 23010 includes one die (semiconductor substrate) 23011. Die 23011 has mounted thereon: a pixel region 23012 in which pixels are arranged in an array; a control circuit 23013 that performs various controls such as driving of pixels; and a logic circuit 23014 for signal processing.

Fig. 72 and 73 show a schematic configuration example of the stacked solid-state image pickup device. As shown in fig. 72 and 73, the solid-state image pickup device 23020 is configured as one semiconductor chip in which two dies of a sensor die 23021 and a logic die 23024 are stacked and electrically connected.

In fig. 72, a pixel region 23012 and a control circuit 23013 are mounted on a sensor die 23021, and a logic circuit 23014 including a signal processing circuit that performs signal processing is mounted on a logic die 23024.

In fig. 73, a pixel region 23012 is mounted on a sensor die 23021, and a control circuit 23013 and a logic circuit 23014 are mounted on a logic die 23024.

4. Examples of electronic devices to which techniques according to this disclosure can be applied

Fig. 74 and 75 are schematic diagrams illustrating examples of electronic devices to which the techniques according to the present disclosure can be applied. Note that in this specification, a smartphone is taken as an example of an electronic device to which the technology according to the present disclosure can be applied.

Fig. 74 shows a front configuration example of the smartphone. As shown in fig. 74, the smartphone 1000 includes, on the front side where the display 1001 is provided, an active infrared (IR) light source 1131 as the light emitting unit 13 and front cameras 1141 and 1142 as the light receiving unit 14.

Further, as shown in fig. 75, the smartphone 1000 includes, on the rear side opposite to the front side on which the display 1001 is provided, an active IR light source 1133 as the light emitting unit 13 and rear cameras 1143 and 1144 as the light receiving unit 14.

5. Various application examples

Next, an application example of the present technology will be explained.

For example, as shown in fig. 76, the present technology can be applied to various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays.

A device for taking images for appreciation, e.g., a digital camera or a portable device with a camera function

A device for traffic, e.g., an in-vehicle sensor that captures images of the front, rear, surroundings, interior, and the like of an automobile for safe driving such as automatic stopping and for recognizing the driver's state; a monitoring camera for monitoring running vehicles and roads; or a distance measuring sensor for measuring the distance between vehicles

A device for household appliances, e.g., a television or a refrigerator, that captures images of a user's gestures and operates the appliance according to those gestures

A device for medical care, e.g., an endoscope or a device that performs angiography by receiving infrared light

A device for security, e.g., a monitoring camera for crime prevention or a camera for personal authentication

A device for beauty care, e.g., a skin measuring instrument for imaging the skin or a microscope for imaging the scalp

A device for sports, e.g., an action camera or a wearable camera for sports applications

A device for agriculture, e.g., a camera for monitoring the condition of fields and crops

6. Application example for a mobile body

The techniques according to the present disclosure may be applied to a variety of products. For example, the technology according to the present disclosure may be implemented as a device mounted on any type of mobile body such as an automobile, an electric automobile, a hybrid electric automobile, a motorcycle, a bicycle, a personal mobile device, an airplane, a drone, a boat, and a robot.

Fig. 77 is a block diagram showing a schematic configuration example of a vehicle control system as an example of a mobile body control system to which the technique according to the present disclosure can be applied.

The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in fig. 77, the vehicle control system 12000 includes a drive system control unit 12010, a vehicle body system control unit 12020, an outside-vehicle information detection unit 12030, an inside-vehicle information detection unit 12040, and an integrated control unit 12050. Further, as a functional configuration of the integrated control unit 12050, a microcomputer 12051, a sound image output unit 12052, and an in-vehicle network interface (I/F)12053 are shown.

The drive system control unit 12010 controls the operations of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device of: a driving force generating device such as an internal combustion engine or a driving motor for generating a driving force of the vehicle; a driving force transmission mechanism for transmitting a driving force to a wheel; a steering mechanism that adjusts a steering angle of the vehicle; a brake device for generating a braking force of the vehicle, and the like.

The vehicle body system control unit 12020 controls the operations of various devices mounted on the vehicle body according to various programs. For example, the vehicle body system control unit 12020 functions as a control device of: keyless entry system, smart key system, power window device, and various lamps such as a head lamp, a back lamp, a brake lamp, a turn signal lamp, and a fog lamp. In this case, a radio wave transmitted from a portable device that replaces a key or a signal of various switches may be input to the vehicle body system control unit 12020. The vehicle body system control unit 12020 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.

The vehicle exterior information detection unit 12030 detects information on the outside of the vehicle on which the vehicle control system 12000 is mounted. For example, the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle, and receives the captured image. The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing on a person, a vehicle, an obstacle, a sign, characters on a road surface, or the like based on the received image.

The image pickup unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of the received light. The image pickup unit 12031 may output an electric signal as an image, or may output an electric signal as distance measurement information. Further, the light received by the image pickup unit 12031 may be visible light or may be invisible light such as infrared light.

The in-vehicle information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040. For example, the driver state detection unit 12041 includes a camera that captures the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver based on the detection information input from the driver state detection unit 12041, or may determine whether the driver is dozing.

The microcomputer 12051 calculates a control target value of the driving force generation device, the steering mechanism, or the brake device based on the information on the inside and outside of the vehicle obtained by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and outputs a control instruction to the drive system control unit 12010. For example, the microcomputer 12051 may execute cooperative control to realize functions of an Advanced Driver Assistance System (ADAS) including collision avoidance or impact mitigation of the vehicle, following travel based on an inter-vehicle distance, vehicle speed keeping travel, collision warning of the vehicle, lane departure warning of the vehicle, and the like.

Further, the microcomputer 12051 controls the driving force generation device, the steering mechanism, the brake device, and the like based on the information around the vehicle obtained by the outside-vehicle information detection unit 12030 or the inside-vehicle information detection unit 12040, thereby executing cooperative control such as automatic driving intended to realize autonomous running of the vehicle independent of the operation of the driver.

Further, the microcomputer 12051 can output a control command to the vehicle body system control unit 12020 based on the information outside the vehicle obtained by the vehicle exterior information detecting unit 12030. For example, the microcomputer 12051 may perform cooperative control of preventing glare by controlling headlights, for example, switching a high beam to a low beam or the like, according to the position of the preceding vehicle or the oncoming vehicle detected by the vehicle exterior information detecting unit 12030.

The sound-image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually and aurally notifying information to a passenger of the vehicle or the outside of the vehicle. In the example of fig. 77, as output devices, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are shown. For example, the display unit 12062 may include at least one of an in-vehicle display and a flat-view display.

Fig. 78 is a diagram showing an example of the mounting position of the imaging unit 12031.

In fig. 78, as the image pickup unit 12031, image pickup units 12101, 12102, 12103, 12104, and 12105 are included.

For example, the camera units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as a front nose, a side view mirror, a rear bumper or a rear door of the vehicle 12100, and an upper portion of an interior windshield. The camera unit 12101 provided on the nose and the camera unit 12105 provided on the upper portion of the windshield in the vehicle mainly acquire a front image of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the side of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the rear door mainly acquires an image of the rear of the vehicle 12100. The camera unit 12105 provided on the upper portion of the windshield in the vehicle is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, or the like.

Note that fig. 78 shows an example of the imaging ranges of the imaging units 12101 to 12104. An imaging range 12111 represents an imaging range of the imaging unit 12101 provided on the nose, imaging ranges 12112 and 12113 represent imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and an imaging range 12114 represents an imaging range of the imaging unit 12104 provided on the rear bumper or the rear door. For example, an overhead image of the vehicle 12100 viewed from above can be obtained by superimposing the image data captured by the imaging units 12101 to 12104.

At least one of the image sensing units 12101 to 12104 may have a function of obtaining distance information. For example, at least one of the image pickup units 12101 to 12104 may be a stereo camera including a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.

For example, the microcomputer 12051 obtains the distance to each three-dimensional object in the imaging ranges 12111 to 12114 and the temporal change in the distance (relative speed to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, thereby extracting a three-dimensional object (particularly, the closest three-dimensional object) traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, equal to or greater than 0km/h) on the traveling path of the vehicle 12100 as a preceding vehicle. Further, the microcomputer 12051 may set in advance a vehicle-to-vehicle distance to be secured from the preceding vehicle, and execute automatic braking control (including following stop control), automatic acceleration control (including following start control), and the like. As described above, cooperative control of automatic driving or the like aimed at achieving autonomous running of the vehicle independent of the operation of the driver can be performed.

For example, the microcomputer 12051 may classify three-dimensional object data on a three-dimensional object into two-wheeled vehicles, general vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles based on the distance information obtained from the image sensing units 12101 to 12104, extract the three-dimensional object data, and may use the three-dimensional object data for automatically avoiding an obstacle. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles visually recognizable by the driver of the vehicle 12100 and obstacles visually unrecognizable by the driver. Then, the microcomputer 12051 determines a collision risk indicating a risk of collision with each obstacle, and in the case where the collision risk is a set value or higher and there is a possibility of collision, the microcomputer 12051 may perform driving assistance of collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062 or by performing forced deceleration or avoidance steering by the drive system control unit 12010.

At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras and a procedure of performing pattern matching on a series of feature points representing the outline of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the sound image output unit 12052 controls the display unit 12062 to superimpose and display a square contour line for emphasizing the recognized pedestrian. Further, the sound image output unit 12052 may cause the display unit 12062 to display an icon or the like representing the pedestrian at a desired position.

In the above, an example of the vehicle control system to which the technique according to the present disclosure can be applied has been described. Among the configurations described above, the technique according to the present disclosure can be applied to the imaging unit 12031, the driver state detection unit 12041, and the like. Specifically, the ToF sensor 1 according to the present disclosure can be applied to the imaging units 12101, 12102, 12103, 12104, 12105, and the like. As a result, the situation around the vehicle 12100 can be detected more accurately, so that more precise control in automated driving and the like, and more accurate grasping of the driver's state, can be achieved.

Although the embodiments of the present disclosure have been described above, the technical scope of the present disclosure is not limited to these embodiments, and various modifications can be made without departing from the gist of the present disclosure. In addition, the configurations of different embodiments and modifications may be combined as appropriate.

Further, the effects described in the present specification are merely illustrative and not restrictive, and other effects may be obtained.

Further, each of the above embodiments may be used alone, or may also be used in combination with other embodiments.

Note that the present technology can also have the following configuration.

(1) A solid-state image pickup device, comprising:

a pixel array section in which a plurality of pixels are arranged in a matrix, wherein,

each of the pixels includes:

a plurality of photoelectric conversion units each of which photoelectrically converts incident light to generate electric charges;

a floating diffusion region that accumulates charge;

a plurality of transfer circuits that transfer charges generated in each of the plurality of photoelectric conversion units to the floating diffusion region; and

a first transistor that causes a pixel signal having a voltage value corresponding to an amount of charge of the charges accumulated in the floating diffusion region to appear on a signal line.

(2) The solid-state image pickup device according to (1), wherein

each of the plurality of pixels is arranged in a pixel region individually allocated on a first surface of a semiconductor substrate,

the plurality of transfer circuits includes:

a plurality of first transfer circuits arranged in point symmetry or line symmetry with respect to a center of the pixel region or with a straight line passing through the center as an axis; and

a plurality of second transfer circuits arranged in point symmetry or line symmetry with respect to the center or with the straight line as an axis; and

each of the photoelectric conversion units is provided one-to-one with respect to a combination of a first transfer circuit and a second transfer circuit arranged in a predetermined direction in the matrix arrangement.

(3) The solid-state image pickup device according to (2), wherein each of the transfer circuits includes a second transistor having a vertical structure extending from the first surface of the semiconductor substrate to the photoelectric conversion unit arranged in the semiconductor substrate.

(4) The solid-state image pickup device according to (3), wherein the second transistor has two of the vertical structures.

(5) The solid-state image pickup device according to any one of (2) to (4), further comprising:

a driving unit configured to drive the transfer of the electric charges by the plurality of transfer circuits, wherein

the driving unit drives the first transfer circuit and the second transfer circuit so that the transfer timing of the electric charge via the first transfer circuit differs from the transfer timing of the electric charge via the second transfer circuit.

(6) The solid-state image pickup device according to (5), wherein,

the drive unit

A first drive pulse having a first phase angle with respect to a pulse of a predetermined period and having the predetermined period is input to the first transmission circuit, and,

a second drive pulse shifted in phase by 180 ° from the first drive pulse is input to the second transfer circuit.

(7) The solid-state image pickup device according to (6), wherein,

the drive unit

The plurality of first transmission circuits are driven with the same phase, and

the plurality of second transmission circuits are driven with the same phase.

(8) The solid-state image pickup device according to (7), wherein the plurality of first transfer circuits and the plurality of second transfer circuits are arranged in point symmetry or line symmetry with respect to the center or with the straight line as an axis.

(9) The solid-state image pickup device according to (7) or (8), wherein,

the plurality of transfer circuits further include a plurality of third transfer circuits and a plurality of fourth transfer circuits, and

the driving unit

inputs, to each of the plurality of third transfer circuits, a third drive pulse whose phase is shifted by 90° with respect to the first drive pulse, and drives the plurality of third transfer circuits with the same phase, and

inputs, to each of the plurality of fourth transfer circuits, a fourth drive pulse whose phase is shifted by 180° with respect to the third drive pulse, and drives the plurality of fourth transfer circuits with the same phase.

(10) The solid-state image pickup device according to (9), wherein,

the first drive pulse has the first phase angle of 0° with respect to the pulse of the predetermined period,

the second drive pulse has a second phase angle of 180° with respect to the pulse of the predetermined period,

the third drive pulse has a third phase angle of 90° with respect to the pulse of the predetermined period, and

the fourth drive pulse has a fourth phase angle of 270° with respect to the pulse of the predetermined period.

(11) The solid-state image pickup device according to (9) or (10), wherein the plurality of first transfer circuits, the plurality of second transfer circuits, the plurality of third transfer circuits, and the plurality of fourth transfer circuits are arranged in point symmetry or line symmetry with respect to the center or with the straight line as an axis.

(12) The solid-state image pickup device according to any one of (9) to (11), wherein,

each of the transfer circuits includes a memory that holds the electric charge generated in the photoelectric conversion unit, and

the drive unit

Inputting a first drive pulse having a phase angle of 0 ° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of first transfer circuits to accumulate the electric charges in the memory of each of the plurality of first transfer circuits,

inputting a second drive pulse having a phase angle of 180 ° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of second transfer circuits to accumulate the electric charges in the memory of each of the plurality of second transfer circuits,

inputting a third drive pulse having a phase angle of 90 ° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of third transfer circuits to accumulate the electric charges in the memory of each of the plurality of third transfer circuits, and

inputting a fourth drive pulse having a phase angle of 270 ° with respect to the pulse of the predetermined period and having the predetermined period to the plurality of fourth transmission circuits to accumulate the electric charges in the memory of each of the plurality of fourth transmission circuits.

(13) The solid-state image pickup device according to (12), wherein the memory is a MOS (metal-oxide-semiconductor) type memory.

(14) The solid-state image pickup device according to any one of (9) to (13), further comprising a signal processing unit that generates distance information based on a ratio of a difference between the electric charge transferred via the first transfer circuit and the electric charge transferred via the second transfer circuit to a difference between the electric charge transferred via the third transfer circuit and the electric charge transferred via the fourth transfer circuit (a numerical sketch of this calculation is given after the present configuration list).

(15) The solid-state image pickup device according to (2), wherein each of the pixels further includes a third transistor that discharges the electric charge generated in the photoelectric conversion unit.

(16) The solid-state image pickup device according to (15), wherein the third transistor has a vertical structure extending from the first surface of the semiconductor substrate to the photoelectric conversion unit arranged in the semiconductor substrate.

(17) The solid-state image pickup device according to (16), wherein the third transistor has two of the vertical structures.

(18) The solid-state image pickup device according to (6), wherein,

the driving unit divides the transfer of the electric charges generated in each of the photoelectric conversion units to the floating diffusion region into a plurality of accumulation periods, and

the driving unit inverts the phase of each of the first drive pulse and the second drive pulse for each of the accumulation periods.

(19) The solid-state image pickup device according to (18), wherein,

each of the pixels further includes a third transistor that discharges electric charge generated in the photoelectric conversion unit,

the driving unit sets, within each of the accumulation periods, a non-accumulation period in which the electric charges generated in the photoelectric conversion units are not transferred to the floating diffusion region, and

the driving unit discharges the electric charge generated in the photoelectric conversion unit via the third transistor during the non-accumulation period.

(20) The solid-state image pickup device according to any one of (2) to (19), further comprising a pixel separation portion that is provided along a boundary portion of the pixel region and optically separates adjacent pixels from each other.

(21) The solid-state image pickup device according to (20), wherein the pixel separation portion is provided in a trench penetrating the semiconductor substrate from the first surface to a second surface opposite to the first surface or reaching a middle portion of the semiconductor substrate from the first surface.

(22) The solid-state image pickup device according to (20) or (21), wherein the pixel separation portion includes at least one of a dielectric containing silicon oxide as a main component and a metal having an optical characteristic of reflecting near-infrared rays.

(23) The solid-state image pickup device according to any one of (20) to (22), further comprising an element separation portion that is provided in at least a part of the region between the plurality of photoelectric conversion units in the pixel region and optically separates the adjacent photoelectric conversion units from each other.

(24) The solid-state image pickup device according to (23), wherein the element separation portion is provided in a trench that penetrates the semiconductor substrate from the first surface to a second surface opposite to the first surface or reaches a middle portion of the semiconductor substrate from the first surface.

(25) The solid-state image pickup device according to (23) or (24), wherein the element separation portion includes at least one of a dielectric containing silicon oxide as a main component and a metal having an optical characteristic of reflecting near-infrared rays.

(26) The solid-state image pickup device according to any one of (1) to (25), wherein a periodic concave-convex structure is provided on a light receiving surface of each of the photoelectric conversion units.

(27) The solid-state image pickup device according to (26), wherein a period of the periodic concave-convex structure is 300 nm (nanometers) or more.

(28) A ranging device, comprising:

a light receiving unit including a pixel array section in which a plurality of pixels are arranged in a matrix; and

a light emitting unit that emits pulsed irradiation light having a predetermined period, wherein,

each of the pixels includes:

a plurality of photoelectric conversion units each of which photoelectrically converts incident light to generate electric charges;

a floating diffusion region that accumulates charge;

a plurality of transfer circuits that transfer charges generated in each of the plurality of photoelectric conversion units to the floating diffusion region; and

a first transistor that causes a pixel signal having a voltage value corresponding to an amount of charge of the charges accumulated in the floating diffusion region to appear on a signal line.
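To make the four-phase measurement described in configurations (6), (9), (10), (12), and (14) concrete, the following minimal Python sketch models the charge accumulated under a gate driven at phase angle θ with the usual continuous-wave correlation model Q(θ) = B + A·cos(φ − θ), and recovers distance from the ratio of the two charge differences named in configuration (14). The 20 MHz modulation frequency, the amplitude, and the offset are example assumptions, not values taken from the disclosure.

    import math

    C = 299_792_458.0     # speed of light [m/s]
    F_MOD = 20e6          # assumed modulation frequency [Hz]

    def tap_charges(distance_m, amplitude=1.0, offset=2.0):
        """Model the four accumulated charges Q(theta) = B + A*cos(phi - theta)
        for gate phase angles of 0, 90, 180, and 270 degrees."""
        phi = (4 * math.pi * F_MOD * distance_m) / C    # round-trip phase shift
        return [offset + amplitude * math.cos(phi - math.radians(th))
                for th in (0, 90, 180, 270)]

    def distance_from_taps(q0, q90, q180, q270):
        """Recover distance from the ratio of the two charge differences;
        the background offset B cancels out of both differences."""
        phi = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
        return C * phi / (4 * math.pi * F_MOD)

    q = tap_charges(3.0)
    print(round(distance_from_taps(*q), 3))   # ≈ 3.0 (unambiguous up to ~7.5 m)

Because the arctangent uses the ratio of the two differences, a constant background level and the pixel's conversion gain cancel out, which is one reason a differential four-phase readout of this kind is used.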

List of reference numerals

1 ToF sensor

11 control unit

13 light emitting unit

14 light receiving unit

15 calculation unit

19 external I/F

20,20-1 to 20-7,120-1,120-2,120-3,920 unit pixel

20A,20A1 to 20A4,20B,20B1 to 20B4,20C,20C1,20C2,20D1,20D2,120A,120B,120C,120D,920A,920B readout circuit

21,211 to 214 photodiode

22,221 to 224 OFG transistor

23A,23A1 to 23A4,23B,23B1 to 23B4,23C1,23C2,23D1,23D2 transfer gate transistor

24A,24A1 to 24A4,24B,24B1 to 24B4,24C,24C1,24C2,24D,24D1,24D2 memory

25A,25A1 to 25A4,25B,25B1 to 25B4,25C,25C1,25C2,25D,25D1,25D2 transfer transistor

26,26A,26B reset transistor

27,27A,27B floating diffusion region

28,28A,28B amplifying transistor

29,29A,29B select transistor

30 boundary part

31,33,34,35 pixel separation portion

32 element separation portion

40 semiconductor substrate

42 n-type semiconductor region

43 n-type semiconductor region

44 n+ type semiconductor region

45 concave-convex structure

50 wiring layer

51 insulating film

52 wiring

61 insulating film

62 light-shielding film

63 planarizing film

64 on-chip lens

70 pixel region

71 to 74 division regions

80 host

90 object

100 solid-state image pickup device

101 pixel array unit

102 system control unit

103 vertical driving circuit

104 column processing circuit

105 horizontal driving circuit

106 signal processing unit

107 data storage unit

341,351 insulating film

342,352 light shielding part

LD pixel driving line

VGA, VGB, VGC, VGD drive line (drive pulse)

VSL, VSLA, VSLB vertical signal line
