Light receiving element and electronic device

Document No.: 1602606 | Publication date: 2020-01-07

Note: This technology, "Light receiving element and electronic device", was devised by 渡辺竜太 on 2019-03-01. Abstract: The present invention provides a light receiving element (201) comprising: an on-chip lens (62); a wiring layer (91); and a semiconductor layer (61). The semiconductor layer is disposed between the on-chip lens and the wiring layer. The semiconductor layer includes: a first voltage applying unit (73-1) to which a first voltage is applied; a second voltage applying unit (73-2) to which a second voltage different from the first voltage is applied; a first charge detection unit (71-1) disposed in the vicinity of the first voltage applying unit; and a second charge detection unit (71-2) disposed in the vicinity of the second voltage applying unit. Further, the wiring layer (91) includes a reflection suppressing structure (211) that suppresses light reflection in a planar region corresponding to the first charge detection unit (71-1) and the second charge detection unit (71-2).

1. A light receiving element comprising:

an on-chip lens;

a wiring layer; and

a semiconductor layer disposed between the on-chip lens and the wiring layer,

wherein the semiconductor layer includes:

a first voltage applying unit to which a first voltage is applied;

a second voltage applying unit to which a second voltage different from the first voltage is applied;

a first charge detection unit provided in the vicinity of the first voltage application unit; and

a second charge detection unit provided in the vicinity of the second voltage application unit,

and the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detection unit and the second charge detection unit.

2. The light receiving element according to claim 1, wherein

the reflection suppressing structure is a film including polycrystalline silicon.

3. The light receiving element according to claim 1, wherein

the reflection suppressing structure is a film including a nitride film.

4. The light receiving element according to claim 1, wherein

the reflection suppressing structure is a structure in which a first reflection suppressing film including polycrystalline silicon and a second reflection suppressing film including a nitride film are stacked in a stacking direction of the wiring layer.

5. The light receiving element according to claim 1, wherein

the wiring layer includes at least one layer of wiring, the one layer of wiring including: a first voltage application wiring that supplies the first voltage; a second voltage application wiring that supplies the second voltage; and a reflective member, and

the reflective member is not formed in the planar area corresponding to the first charge detection unit and the second charge detection unit.

6. The light receiving element according to claim 5, wherein

the one layer of wiring including the first voltage application wiring, the second voltage application wiring, and the reflective member is the wiring closest to the semiconductor layer among the plurality of layers of wiring.

7. The light receiving element according to claim 5, wherein

the reflective member is a metal film.

8. The light receiving element according to claim 1, wherein

the semiconductor layer further includes a first buried insulating film between the first voltage application unit and the first charge detection unit and between the second voltage application unit and the second charge detection unit.

9. The light receiving element according to claim 8, further comprising:

a light-shielding film within the first buried insulating film.

10. The light receiving element according to claim 8, wherein

the semiconductor layer further includes a second buried insulating film between the first charge detection unit and the second charge detection unit.

11. The light receiving element according to claim 10, further comprising:

a light-shielding film within the second buried insulating film between the first charge detection unit and the second charge detection unit.

12. An electronic device comprising a light receiving element, the light receiving element comprising:

an on-chip lens;

a wiring layer; and

a semiconductor layer disposed between the on-chip lens and the wiring layer,

wherein the semiconductor layer includes:

a first voltage applying unit to which a first voltage is applied;

a second voltage applying unit to which a second voltage different from the first voltage is applied;

a first charge detection unit provided in the vicinity of the first voltage application unit; and

a second charge detection unit provided in the vicinity of the second voltage application unit,

and the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detection unit and the second charge detection unit.

Technical Field

The present technology relates to a light receiving element and an electronic device, and more particularly, to a light receiving element and an electronic device capable of improving characteristics.

Cross Reference to Related Applications

The present application claims the benefit of Japanese Priority Patent Application JP 2018-049696, filed on March 16, 2018, the entire contents of which are hereby incorporated by reference.

Background

In the related art, a ranging system using the indirect ToF (time of flight) method is known. In such a ranging system, a sensor is necessary that can distribute, at high speed and at a certain phase, the signal charge obtained by receiving active light that has been emitted using an LED (light emitting diode) or a laser and reflected by a target object, into different areas.

Thus, for example, a technique has been proposed in which a voltage is directly applied to the substrate of the sensor to generate a current in the substrate, so that a wide area in the substrate can be modulated at high speed (for example, refer to Patent Document 1). Such a sensor is also known as a CAPD (current assisted photonic demodulator) sensor.

Reference list

Patent document

[Patent Document 1] Japanese Patent Application Laid-Open No. JP 2011-

Disclosure of Invention

Technical problem to be solved

The above-described CAPD sensor is a front surface illumination type sensor in which wiring and the like are provided on the surface of the substrate on the side where light from the outside is received. To secure the photoelectric conversion region, it is preferable that nothing, such as wiring, that blocks the optical path of incident light is present on the light receiving surface side of the photodiode (PD), i.e., the photoelectric conversion unit. However, in the front surface illumination type CAPD sensor, because of its structure, wiring for extracting electric charges, various control lines, and signal lines have to be provided on the light receiving surface side of the PD, and thus the photoelectric conversion area is limited. That is, a sufficient photoelectric conversion region cannot be secured, and characteristics such as pixel sensitivity may be lowered in some cases.

The present technology is expected to improve such characteristics.

Technical scheme for solving technical problem

The light receiving element of the first embodiment of the present technology includes: on-chip lenses (on-chip lenses); a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer. The semiconductor layer includes: a first voltage applying unit to which a first voltage is applied; a second voltage applying unit to which a second voltage different from the first voltage is applied; a first charge detection unit provided in the vicinity of the first voltage application unit; and a second charge detection unit disposed near the second voltage application unit. Also, the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detecting unit and the second charge detecting unit.

An electronic device of a second embodiment of the present technology includes a light receiving element including: an on-chip lens; a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer. The semiconductor layer includes: a first voltage applying unit to which a first voltage is applied; a second voltage applying unit to which a second voltage different from the first voltage is applied; a first charge detection unit provided in the vicinity of the first voltage application unit; and a second charge detection unit disposed near the second voltage application unit. Also, the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detecting unit and the second charge detecting unit.

In the first and second embodiments of the present technology, an on-chip lens, a wiring layer, and a semiconductor layer provided between the on-chip lens and the wiring layer are provided. The semiconductor layer is provided with: a first voltage applying unit to which a first voltage is applied; a second voltage applying unit to which a second voltage different from the first voltage is applied; a first charge detection unit disposed in the vicinity of the first voltage application unit, and a second charge detection unit disposed in the vicinity of the second voltage application unit. A reflection suppressing structure for suppressing light reflection in a planar region corresponding to the first charge detecting unit and the second charge detecting unit is provided in the wiring layer.

The light receiving element and the electronic device may be separate apparatuses or may be modules incorporated in other apparatuses.

The invention has the advantages of

According to the first and second embodiments of the present technology, characteristics can be improved.

It is to be noted that the effects explained here are not necessarily restrictive, and any of the effects described in the present invention may be applicable.

Drawings

Fig. 1 is a block diagram illustrating a configuration example of a light receiving element.

Fig. 2 is a cross-sectional view of a pixel of the light receiving element of fig. 1.

Fig. 3 is a plan view illustrating an example of a planar shape of the signal extraction unit.

Fig. 4 is an equivalent circuit of a pixel.

Fig. 5 is a diagram for explaining an effect of the light receiving element of fig. 1.

Fig. 6 is a diagram for explaining an effect of the light receiving element of fig. 1.

Fig. 7 is a diagram for explaining an effect of the light receiving element of fig. 1.

Fig. 8 is a diagram for explaining an effect of the light receiving element of fig. 1.

Fig. 9 is a diagram for explaining an effect of the light receiving element of fig. 1.

Fig. 10 is a sectional view of a plurality of pixels of the light receiving element of fig. 1.

Fig. 11 is a sectional view of a plurality of pixels of the light receiving element of fig. 1.

Fig. 12 is a plan view of a metal film in a multilayer wiring layer.

Fig. 13 is a sectional view illustrating a pixel structure of a first embodiment of a pixel to which the present technique is applied.

Fig. 14 is a diagram for explaining the effect of the pixel structure of the first embodiment.

Fig. 15 is a sectional view illustrating a pixel structure of a second embodiment of a pixel to which the present technique is applied.

Fig. 16 is a sectional view illustrating a pixel structure of a third embodiment of a pixel to which the present technology is applied.

Fig. 17 is a sectional view illustrating a pixel structure of a fourth embodiment of a pixel to which the present technology is applied.

Fig. 18 is a sectional view illustrating a modification of the fourth embodiment.

Fig. 19 is a sectional view illustrating a pixel structure of a fifth embodiment of a pixel to which the present technology is applied.

Fig. 20 is a diagram for explaining the effect of the pixel structure of the fifth embodiment.

Fig. 21 is a sectional view illustrating a pixel structure of a sixth embodiment of a pixel to which the present technology is applied.

Fig. 22 is a diagram for explaining the effect of the pixel structure of the sixth embodiment.

Fig. 23 is a sectional view illustrating a pixel structure of a seventh embodiment of a pixel to which the present technology is applied.

Fig. 24 is a diagram for explaining the effect of the pixel structure of the seventh embodiment.

Fig. 25 is a sectional view illustrating a pixel structure of an eighth embodiment of a pixel to which the present technology is applied.

Fig. 26 is a sectional view illustrating a pixel structure of a ninth embodiment of a pixel to which the present technology is applied.

Fig. 27 is a sectional view illustrating a modification of the ninth embodiment.

Fig. 28 is a block diagram illustrating a configuration example of the ranging module.

Fig. 29 is a view showing an example of a schematic configuration of an endoscopic surgical system.

Fig. 30 is a block diagram showing an example of the functional configurations of a camera (camera head) and a Camera Control Unit (CCU).

Fig. 31 is a block diagram showing an example of a schematic configuration of a vehicle control system.

Fig. 32 is a view for assisting in explaining an example of the mounting positions of the vehicle exterior information detecting unit and the imaging unit.

Detailed Description

Hereinafter, embodiments (hereinafter, referred to as examples) for implementing the present technology will be explained. Note that the description will be made in the following order.

1. Examples of the basic configuration of the light receiving element

2. Necessity of enhancing the basic pixel structure

3. First embodiment of the pixel

4. Second embodiment of the pixel

5. Third embodiment of the pixel

6. Fourth embodiment of the pixel

7. Fifth embodiment of the pixel

8. Sixth embodiment of the pixel

9. Seventh embodiment of the pixel

10. Eighth embodiment of the pixel

11. Ninth embodiment of pixel

12. Summary of the invention

13. Construction example of distance measuring Module

14. Examples of applications of endoscopic surgical systems

15. Application example of moving body

<1. example of basic configuration of light receiving element >

The present technology relates to a light receiving element that functions as a back-surface illumination type CAPD sensor, and as a premise of the light receiving element of an embodiment to which the present technology is applied, a basic structure of the light receiving element will be described first.

< Block diagram >

Fig. 1 is a block diagram illustrating a configuration example of a light receiving element.

The light receiving element 1 shown in fig. 1 is an element functioning as a back surface illumination type CAPD sensor, and for example, the light receiving element 1 is used as a part of a ranging system that performs distance measurement by an indirect ToF method. For example, the ranging system can be applied to an in-vehicle system that is mounted on a vehicle and measures a distance from a target object outside the vehicle, a gesture recognition system that measures a distance from a target object such as a hand of a user and recognizes a gesture of the user based on the measurement result, and the like.

The light receiving element 1 has a pixel array unit 21 formed on a semiconductor substrate (not shown) and a peripheral circuit unit integrated with the pixel array unit 21 on the same semiconductor substrate. The peripheral circuit unit includes, for example, a vertical driving unit 22, a column processing unit 23, a horizontal driving unit 24, a system control unit 25, and the like.

The light receiving element 1 is also provided with a signal processing unit 26 and a data storage unit 27. Note that in this image pickup apparatus, the signal processing unit 26 and the data storage unit 27 may be mounted on the same substrate as the light receiving element 1, or may be provided on a substrate separate from the light receiving element 1.

The pixel array unit 21 has the following configuration: in which pixels are arranged two-dimensionally in a row direction and a column direction, that is, in a matrix shape, and each pixel generates an electric charge corresponding to the amount of received light and outputs a signal corresponding to the electric charge. That is, the pixel array unit 21 has a plurality of pixels that perform photoelectric conversion on incident light and output signals corresponding to electric charges obtained as a result.

Here, the row direction refers to an arrangement direction of pixels in a pixel row (i.e., a horizontal direction), and the column direction refers to an arrangement direction of pixels in a pixel column (i.e., a vertical direction). That is, the row direction is the horizontal direction in the drawing, and the column direction is the vertical direction in the drawing.

In the pixel array unit 21, for a matrix-shaped pixel array, one pixel driving line 28 is wired in the row direction for each pixel row, and two vertical signal lines 29 are wired in the column direction for each pixel column. For example, the pixel driving line 28 transmits a driving signal for driving when reading a signal from a pixel. It is to be noted that although fig. 1 shows one wiring for the pixel drive line 28, the pixel drive line 28 is not limited to one. One end of the pixel driving line 28 is connected to an output end of the vertical driving unit 22 corresponding to each row.

The vertical driving unit 22 includes a shift register, an address decoder, or the like. The vertical driving unit 22 drives all the pixels of the pixel array unit 21 simultaneously, in units of rows, or the like. That is, the vertical driving unit 22, together with the system control unit 25 that controls the vertical driving unit 22, constitutes a driving unit that controls the operation of each pixel of the pixel array unit 21.

It is to be noted that, in distance measurement using the indirect ToF method, the number of elements (CAPD elements) connected to one control line and driven at high speed affects the controllability and accuracy of the high-speed driving. The light receiving element used for indirect ToF distance measurement may include a pixel array that is elongated in the horizontal direction. In that case, therefore, the vertical signal line 29 or another control line that is long in the vertical direction may be used as the control line of the elements driven at high speed. In this case, for example, a plurality of pixels arrayed in the vertical direction are connected to the vertical signal line 29 or to the other control line long in the vertical direction, and the pixels, that is, the CAPD sensors, are driven through such a line by a driving unit provided separately from the vertical driving unit 22, the horizontal driving unit 24, or the like.

Signals output from the respective pixels of the pixel row in response to drive control by the vertical drive unit 22 are input to the column processing unit 23 through the vertical signal line 29. The column processing unit 23 performs predetermined signal processing on signals output from the respective pixels through the vertical signal lines 29 and temporarily holds the pixel signals subjected to the signal processing. Specifically, as the above-described signal processing, the column processing unit 23 performs noise cancellation processing, analog to digital (AD) conversion processing, and the like.

The horizontal driving unit 24 includes a shift register, an address decoder, or the like, and sequentially selects unit circuits of the column processing unit 23 corresponding to pixel columns. The column processing unit 23 sequentially outputs pixel signals of the respective unit circuits obtained through signal processing by selective scanning of the horizontal driving unit 24.

The system control unit 25 includes a timing generator or the like that generates various timing signals and performs drive control on the vertical driving unit 22, the column processing unit 23, the horizontal driving unit 24, and the like based on the generated various timing signals.

The signal processing unit 26 has at least a calculation processing function, and performs various kinds of signal processing such as calculation processing based on the pixel signals output from the column processing unit 23. The data storage unit 27 temporarily stores data necessary for signal processing in the signal processing unit 26.

The light receiving element 1 can be configured as described above.
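To make the signal flow described above concrete, the following is a minimal Python sketch of the read-out chain at a purely behavioral level; the function and parameter names are hypothetical and do not appear in the patent. A row is driven, each column value is noise-corrected and AD-converted, and the digitized frame is handed to the signal processing stage.

def read_frame(analog_pixels, noise_offset=0.02, adc_bits=12):
    # analog_pixels: 2-D list of normalized analog pixel values (rows x columns), in the range 0.0 to 1.0
    full_scale = (1 << adc_bits) - 1
    frame = []
    for row in analog_pixels:                            # vertical driving: one pixel row at a time
        digital_row = []
        for value in row:                                # column processing: per-column signal chain
            corrected = max(value - noise_offset, 0.0)   # noise cancellation (offset subtraction)
            digital_row.append(min(int(round(corrected * full_scale)), full_scale))  # AD conversion
        frame.append(digital_row)                        # horizontal driving: columns output in order
    return frame                                         # passed on to the signal processing unit

print(read_frame([[0.10, 0.50], [0.90, 0.02]]))          # -> [[328, 1966], [3604, 0]]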

< example of sectional configuration of pixel >

Next, a configuration example of the pixels provided in the pixel array unit 21 will be explained. For example, the pixels provided in the pixel array unit 21 are configured as shown in fig. 2.

Fig. 2 illustrates a cross-sectional view of one pixel 51 provided in the pixel array unit 21. The pixel 51 receives light incident from the outside, particularly infrared light, and the pixel 51 performs photoelectric conversion on the incident light and outputs a signal corresponding to the electric charge obtained as a result.

For example, the pixel 51 has a semiconductor substrate 61 and an on-chip lens 62 formed on the semiconductor substrate 61, and the semiconductor substrate 61 includes a P-type semiconductor layer, i.e., a silicon substrate, for example.

For example, the thickness of the semiconductor substrate 61 in the vertical direction in the drawing, that is, the thickness in the direction perpendicular to the surface of the semiconductor substrate 61 is set to 20 μm or less. Note that the thickness of the semiconductor substrate 61 may of course be 20 μm or more, and this is sufficient as long as the thickness of the semiconductor substrate 61 is determined in accordance with the target characteristics of the light receiving element 1 or the like.

Further, the semiconductor substrate 61 is a high-resistance P-Epi substrate or the like having a substrate concentration of, for example, 1E+13 cm⁻³ or less, and the resistance (resistivity) of the semiconductor substrate 61 is set to, for example, 500 Ωcm or more.

Here, the relationship between the substrate concentration and the resistance of the semiconductor substrate 61 is, for example, as follows: when the substrate concentration is 6.48E+12 cm⁻³, the resistance is 2000 Ωcm; when the substrate concentration is 1.30E+13 cm⁻³, the resistance is 1000 Ωcm; when the substrate concentration is 2.59E+13 cm⁻³, the resistance is 500 Ωcm; when the substrate concentration is 1.30E+14 cm⁻³, the resistance is 100 Ωcm; and so on.
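The concentration/resistance pairs listed above follow a simple inverse relation, with the product of concentration and resistivity roughly constant. The short Python sketch below is an illustration only (the constant K is fitted to the quoted values and is not stated in the patent) and just checks the listed figures against that inverse law.

pairs = [            # (substrate concentration [cm^-3], resistance [ohm cm]) as quoted above
    (6.48e12, 2000.0),
    (1.30e13, 1000.0),
    (2.59e13, 500.0),
    (1.30e14, 100.0),
]
K = 1.3e16           # fitted constant for this illustration only

for n, rho in pairs:
    print(f"N = {n:.2e} cm^-3 -> listed {rho:6.0f} ohm cm, 1/N estimate {K / n:6.0f} ohm cm")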

The on-chip lens 62 is formed on an upper surface of the semiconductor substrate 61 in the drawing, that is, a surface of the semiconductor substrate 61 on a side where light is incident from the outside (hereinafter, also referred to as a light incident surface), and the on-chip lens 62 condenses the light incident from the outside and causes the light to enter the inside of the semiconductor substrate 61.

Further, on the light incident surface of the semiconductor substrate 61, an inter-pixel light-shielding film 63 for preventing color mixing between adjacent pixels is formed at the boundary portion of the pixels 51. The interpixel light-shielding film 63 prevents light incident on a pixel 51 from being incident on another pixel 51 disposed adjacent to the pixel 51.

On the surface side of the semiconductor substrate 61 opposite to the light incident surface, that is, in a part of the inner side of the lower side surface in the drawing, a signal extraction unit 65-1 and a signal extraction unit 65-2 called a tap (tap) are formed.

The signal extraction unit 65-1 has an N+ semiconductor region 71-1 and an N- semiconductor region 72-1 as N-type semiconductor regions, and a P+ semiconductor region 73-1 and a P- semiconductor region 74-1 as P-type semiconductor regions, wherein the N- semiconductor region 72-1 has a donor impurity concentration lower than that of the N+ semiconductor region 71-1, and wherein the P- semiconductor region 74-1 has an acceptor impurity concentration lower than that of the P+ semiconductor region 73-1. Here, examples of the donor impurity for Si may include an element belonging to group 5 of the periodic table, such as phosphorus (P) or arsenic (As), and examples of the acceptor impurity for Si may include an element belonging to group 3 of the periodic table, such as boron (B). An element serving as a donor impurity is referred to as a donor element, and an element serving as an acceptor impurity is referred to as an acceptor element.

The N-semiconductor region 72-1 is formed on the upper side of the N + semiconductor region 71-1, and covers (surrounds) the N + semiconductor region 71-1. Similarly, a P-semiconductor region 74-1 is formed on the upper side of the P + semiconductor region 73-1, and covers (surrounds) the P + semiconductor region 73-1.

In plan view, as will be described later with reference to fig. 3, the N + semiconductor region 71-1 is formed so as to surround the periphery of the P + semiconductor region 73-1 with the P + semiconductor region 73-1 as the center. Similarly, the N-semiconductor region 72-1 formed on the upper side of the N + semiconductor region 71-1 is also formed to surround the periphery of the P-semiconductor region 74-1 with the P-semiconductor region 74-1 as the center.

Similarly, the signal extraction unit 65-2 has an N + semiconductor region 71-2 and an N-semiconductor region 72-2 as N-type semiconductor regions, and a P + semiconductor region 73-2 and a P-semiconductor region 74-2 as P-type semiconductor regions, wherein the N-semiconductor region 72-2 has a donor impurity concentration lower than that of the N + semiconductor region 71-2, and wherein the P-semiconductor region 74-2 has an acceptor impurity concentration lower than that of the P + semiconductor region 73-2.

The N-semiconductor region 72-2 is formed on the upper side of the N + semiconductor region 71-2, and covers (surrounds) the N + semiconductor region 71-2. Similarly, a P-semiconductor region 74-2 is formed on the upper side of the P + semiconductor region 73-2 and covers (surrounds) the P + semiconductor region 73-2.

In plan view, as will be described later with reference to fig. 3, the N + semiconductor region 71-2 is formed so as to surround the periphery of the P + semiconductor region 73-2 with the P + semiconductor region 73-2 as the center. Similarly, the N-semiconductor region 72-2 formed on the upper side of the N + semiconductor region 71-2 is also formed to surround the periphery of the P-semiconductor region 74-2 with the P-semiconductor region 74-2 as the center.

Hereinafter, the signal extraction unit 65-1 and the signal extraction unit 65-2 are also simply referred to as the signal extraction unit 65 without particularly distinguishing the signal extraction unit 65-1 and the signal extraction unit 65-2.

Further, hereinafter, the N + semiconductor region 71-1 and the N + semiconductor region 71-2 are also simply referred to as the N + semiconductor region 71 without particularly distinguishing the N + semiconductor region 71-1 from the N + semiconductor region 71-2, and the N-semiconductor region 72-1 and the N-semiconductor region 72-2 are also simply referred to as the N-semiconductor region 72 without particularly distinguishing the N-semiconductor region 72-1 from the N-semiconductor region 72-2.

Further, hereinafter, the P + semiconductor region 73-1 and the P + semiconductor region 73-2 are also simply referred to as the P + semiconductor region 73 in the case where it is not necessary to particularly distinguish the P + semiconductor region 73-1 from the P + semiconductor region 73-2, and the P-semiconductor region 74-1 and the P-semiconductor region 74-2 are also simply referred to as the P-semiconductor region 74 in the case where it is not necessary to particularly distinguish the P-semiconductor region 74-1 from the P-semiconductor region 74-2.

On the interface on the light incident surface side of the semiconductor substrate 61, a P+ semiconductor region 75 covering the entire light incident surface is formed by stacking a film having a positive fixed charge.

On the other hand, on the side of the semiconductor substrate 61 opposite to the light incident surface side on which the on-chip lenses 62 are formed for the respective pixels, a multilayer wiring layer 91 is formed. In other words, the semiconductor substrate 61 as a semiconductor layer is disposed between the on-chip lens 62 and the multilayer wiring layer 91. The multilayer wiring layer 91 includes five metal films M1 to M5 and an interlayer insulating film 92 between these metal films. It is to be noted that, among the five metal films M1 to M5 of the multilayer wiring layer 91, the outermost metal film M5 is not shown in fig. 2 because it lies at a position not visible in this cross section; it is shown in fig. 11 described later.

The metal film M1 closest to the semiconductor substrate 61 among the five metal films M1 to M5 of the multilayer wiring layer 91 is provided with a voltage application wiring 93 and a reflection member 94, the voltage application wiring 93 being for applying a predetermined voltage to the P + semiconductor region 73-1 or 73-2, the reflection member 94 being a member that reflects incident light.

Therefore, the light receiving element 1 of fig. 1 is a back surface illumination type CAPD sensor in which the light incident surface of the semiconductor substrate 61 is the so-called back surface located on the side opposite to the multilayer wiring layer 91 side.

The N + semiconductor region 71 provided on the semiconductor substrate 61 functions as a charge detection unit for detecting the light amount of light incident on the pixels 51 from the outside, that is, the amount of signal carriers generated by photoelectric conversion of the semiconductor substrate 61. It is to be noted that, in addition to the N + semiconductor region 71 serving as the charge detection unit, the N-semiconductor region 72 having a low donor impurity concentration can also be regarded as the charge detection unit.

Further, the P + semiconductor region 73 functions as a voltage application unit for injecting a majority carrier current into the semiconductor substrate 61, that is, directly applying a voltage to the semiconductor substrate 61 to generate an electric field in the semiconductor substrate 61. It is to be noted that, in addition to the P + semiconductor region 73 serving as a voltage application unit, the P-semiconductor region 74 having a low acceptor impurity concentration can also be regarded as a voltage application unit.

Fig. 3 is a plan view illustrating an example of the planar shape of the signal extraction unit 65 in the pixel 51.

In a plan view, the signal extraction unit 65 includes a P + semiconductor region 73 as a voltage application unit and an N + semiconductor region 71 as a charge detection unit, in which the P + semiconductor region 73 is disposed at the center, and the N + semiconductor region 71 is disposed so as to surround the periphery of the P + semiconductor region 73. It is to be noted that although the outer shapes of the N + semiconductor region 71 and the P + semiconductor region 73 are octagonal shapes in fig. 3, other planar shapes such as a square shape, a rectangular shape, or a circular shape may be used.

Further, in the pixel 51, the signal extraction units 65-1 and 65-2 are disposed at positions symmetrical with respect to the pixel center.

A line A-A' shown in fig. 3 indicates the cross-sectional line of fig. 2 and of fig. 10 described later, and a line B-B' indicates the cross-sectional line of fig. 11 described later.

< example of equivalent Circuit configuration of Pixel >

Fig. 4 illustrates an equivalent circuit of the pixel 51.

For the signal extraction unit 65-1 including the N+ semiconductor region 71-1, the P+ semiconductor region 73-1, and the like, the pixel 51 includes a transfer transistor 101A, an FD 102A, an additional capacitor 103A, a switching transistor 104A, a reset transistor 105A, an amplification transistor 106A, and a selection transistor 107A.

Further, for the signal extraction unit 65-2 including the N+ semiconductor region 71-2, the P+ semiconductor region 73-2, and the like, the pixel 51 includes a transfer transistor 101B, an FD 102B, an additional capacitor 103B, a switching transistor 104B, a reset transistor 105B, an amplification transistor 106B, and a selection transistor 107B.

The vertical driving unit 22 applies a predetermined voltage MIX0 (a first voltage) to the P + semiconductor region 73-1 and applies a predetermined voltage MIX1 (a second voltage) to the P + semiconductor region 73-2. For example, one of the voltages MIX0 and MIX1 is 1.5V and the other is 0V. The P + semiconductor regions 73-1 and 73-2 are voltage applying portions to which the first voltage or the second voltage is applied.

The N + semiconductor regions 71-1 and 71-2 are charge detection units that detect and accumulate charges generated by photoelectric conversion of light incident on the semiconductor substrate 61.

In the case where the state of the drive signal TRG supplied to the gate electrode is changed to an active state, the state of the transfer transistor 101A is changed to an on state in response to the drive signal TRG, and thus the electric charge accumulated in the N + semiconductor region 71-1 is transferred to the FD 102A. In the case where the state of the drive signal TRG supplied to the gate electrode is changed to the active state, the state of the transfer transistor 101B is changed to the on state in response to the drive signal TRG, and thus the electric charge accumulated in the N + semiconductor region 71-2 is transferred to the FD 102B.

The FD102A temporarily holds the charge supplied from the N + semiconductor region 71-1. The FD102B temporarily holds the charge supplied from the N + semiconductor region 71-2.

In the case where the state of the drive signal FDG supplied to the gate electrode is changed to an active state, the state of the switching transistor 104A is changed to a conductive state in response to the drive signal FDG, and thus the additional capacitor 103A is connected to the FD 102A. In the case where the state of the drive signal FDG supplied to the gate electrode is changed to an active state, the state of the switching transistor 104B is changed to a conductive state in response to the drive signal FDG, and thus the additional capacitor 103B is connected to the FD 102B.

For example, at the time of high illuminance at which the light amount of incident light is large, the vertical drive unit 22 causes the switching transistors 104A and 104B to be in an active state to connect the additional capacitor 103A and the FD102A to each other and to connect the additional capacitor 103B and the FD102B to each other. Therefore, more charges can be accumulated at the time of high illuminance.

On the other hand, at the low illuminance timing when the light amount of incident light is small, the vertical drive unit 22 causes the switching transistors 104A and 104B to be in an inactive (inactive) state, thereby separating the additional capacitors 103A and 103B from the FD102A and the FD102B, respectively.

In the case where the state of the drive signal RST supplied to the gate electrode is changed to an active state, the state of the reset transistor 105A is changed to an on state in response to the drive signal RST, and thus the potential of the FD102A is reset to a predetermined level (reset voltage VDD). In the case where the state of the drive signal RST supplied to the gate electrode is changed to an active state, the state of the reset transistor 105B is changed to a conductive state in response to the drive signal RST, and thus the potential of the FD102B is reset to a predetermined level (reset voltage VDD). Note that when the states of the reset transistors 105A and 105B are changed to the active states, the states of the transfer transistors 101A and 101B are also changed to the active states at the same time.

The source electrode of the amplification transistor 106A is connected to the vertical signal line 29A through the selection transistor 107A, and the amplification transistor 106A constitutes a source follower circuit together with a load MOS of a constant current source circuit section 108A connected to one end of the vertical signal line 29A. The source electrode of the amplification transistor 106B is connected to the vertical signal line 29B through the selection transistor 107B, and the amplification transistor 106B constitutes a source follower circuit together with a load MOS of a constant current source circuit section 108B connected to one end of the vertical signal line 29B.

The selection transistor 107A is connected between the source electrode of the amplification transistor 106A and the vertical signal line 29A. In the case where the state of the selection signal SEL supplied to the gate electrode is changed to an active state, the state of the selection transistor 107A is changed to an on state in response to the selection signal SEL, and thus the pixel signal output from the amplification transistor 106A is output to the vertical signal line 29A.

The selection transistor 107B is connected between the source electrode of the amplification transistor 106B and the vertical signal line 29B. In the case where the state of the selection signal SEL supplied to the gate electrode is changed to an active state, the state of the selection transistor 107B is changed to an on state in response to the selection signal SEL, and thus the pixel signal output from the amplification transistor 106B is output to the vertical signal line 29B.

For example, the transfer transistors 101A and 101B, the reset transistors 105A and 105B, the amplification transistors 106A and 106B, and the selection transistors 107A and 107B of the pixel 51 are controlled by the vertical driving unit 22.

In the equivalent circuit shown in fig. 4, the additional capacitors 103A and 103B and the switching transistors 104A and 104B that control their connection may be omitted; however, by providing the additional capacitors and selectively using them according to the amount of incident light, a high dynamic range can be secured.
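As a rough illustration of why connecting the additional capacitor helps at high illuminance, the sketch below compares the amount of charge that the FD alone can hold with the amount held when the additional capacitor is connected; the capacitance and voltage values are assumptions chosen only for this example and are not taken from the patent.

Q_E = 1.602e-19      # elementary charge [C]
V_SWING = 1.0        # assumed usable voltage swing on the FD node [V]
C_FD = 2.0e-15       # assumed capacitance of the FD alone [F]
C_ADD = 6.0e-15      # assumed capacitance of the additional capacitor 103 [F]

def full_well(capacitance):
    # number of electrons that can be accumulated before the node saturates
    return int(capacitance * V_SWING / Q_E)

print("FD alone:                 ", full_well(C_FD), "electrons")          # low illuminance setting
print("FD + additional capacitor:", full_well(C_FD + C_ADD), "electrons")  # high illuminance setting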

< operation of detecting electric charge of pixel >

Referring again to fig. 2, the detection operation of the pixel 51 will be explained.

For example, in the case of attempting to measure the distance from an object by the indirect ToF method, infrared light is emitted toward the object from an image pickup device provided with the light receiving element 1. Further, in the case where the infrared light is reflected by the object and returned to the image pickup apparatus as reflected light, the light receiving element 1 receives the reflected light (infrared light) incident on the light receiving element 1 and performs photoelectric conversion.

At this time, the vertical driving unit 22 drives the pixel 51 to distribute the electric charges obtained by photoelectric conversion to the FD102A connected to the N + semiconductor region 71-1 as one charge detecting unit (first charge detecting unit) and the FD102B connected to the N + semiconductor region 71-2 as the other charge detecting unit (second charge detecting unit).

More specifically, at a certain timing, the vertical driving unit 22 applies a predetermined voltage to the two P + semiconductor regions 73 through the voltage application wiring 93 and the like. For example, the vertical driving unit 22 applies a voltage of 1.5V to the P + semiconductor region 73-1 and a voltage of 0V to the P + semiconductor region 73-2.

Then, an electric field is generated between the two P + semiconductor regions 73 in the semiconductor substrate 61, and a current flows from the P + semiconductor region 73-1 to the P + semiconductor region 73-2. In this case, holes in the semiconductor substrate 61 move toward the P + semiconductor region 73-2, and electrons move toward the P + semiconductor region 73-1.

Therefore, in this state, in the case where infrared light (reflected light) from the outside enters the semiconductor substrate 61 through the on-chip lens 62 and is photoelectrically converted into electron-hole pairs in the semiconductor substrate 61, the obtained electrons are guided toward the P+ semiconductor region 73-1 by the electric field between the P+ semiconductor regions 73 and move into the N+ semiconductor region 71-1.

In this case, electrons generated by photoelectric conversion are used as signal carriers for detecting a signal corresponding to the amount of infrared light incident on the pixel 51 (i.e., the received-light amount of infrared light).

Accordingly, in the N + semiconductor region 71-1, the electric charge corresponding to the electrons that have moved to the N + semiconductor region 71-1 is detected and accumulated in the FD 102A. With the switching transistor 104A in an active state, this charge is also accumulated in the additional capacitor 103A. When the pixel 51 is selected, a signal corresponding to the electric charge is output to the column processing unit 23 through the vertical signal line 29A or the like.

Further, with respect to the read signal, processing such as AD conversion processing is carried out in the column processing unit 23, and the pixel signal obtained as a result is supplied to the signal processing unit 26. The pixel signal is a signal indicating the amount of charge detected in the N + semiconductor region 71-1, in other words, a signal indicating the amount of infrared light received by the pixel 51.

It is to be noted that at this time, similarly to the case of the N + semiconductor region 71-1, a pixel signal corresponding to the electric charge detected in the N + semiconductor region 71-2 can also be used for distance measurement as appropriate.

Further, at the next timing, a voltage is applied to the two P + semiconductor regions 73 by the vertical driving unit 22, so that an electric field in a direction opposite to that of the electric field generated in the semiconductor substrate 61 before that point in time is generated. Specifically, for example, a voltage of 0V is applied to the P + semiconductor region 73-1, and a voltage of 1.5V is applied to the P + semiconductor region 73-2.

Accordingly, an electric field is generated between the two P + semiconductor regions 73 in the semiconductor substrate 61, and a current flows from the P + semiconductor region 73-2 to the P + semiconductor region 73-1.

In this state, in the case where infrared light (reflected light) from the outside enters the semiconductor substrate 61 through the on-chip lens 62 and is photoelectrically converted into electron-hole pairs in the semiconductor substrate 61, the obtained electrons are guided toward the P+ semiconductor region 73-2 by the electric field between the P+ semiconductor regions 73 and move into the N+ semiconductor region 71-2.

Accordingly, in the N + semiconductor region 71-2, the electric charge corresponding to the electrons that have moved to the N + semiconductor region 71-2 is detected and accumulated in the FD 102B. With the switching transistor 104B in an active state, this charge is also accumulated in the additional capacitor 103B. When the pixel 51 is selected, a signal corresponding to the electric charge is output to the column processing unit 23 through the vertical signal line 29B or the like.

Further, with respect to the read signal, processing such as AD conversion processing is carried out in the column processing unit 23, and the pixel signal obtained as a result is supplied to the signal processing unit 26. The pixel signal is a signal indicating the amount of charge detected in the N + semiconductor region 71-2, in other words, a signal indicating the amount of infrared light received by the pixel 51.

It is to be noted that, at this time, similarly to the case of the N + semiconductor region 71-2, a pixel signal corresponding to electrons detected in the N + semiconductor region 71-1 can also be used for distance measurement as appropriate.
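The alternating drive described above can be summarized with a toy model; this is an illustration only and not the device physics. During even phases the voltage MIX0 is 1.5 V, so photo-electrons are collected by the signal extraction unit 65-1 and accumulate through the N+ semiconductor region 71-1 in the FD 102A; during odd phases the roles of the two taps are swapped.

def accumulate(electrons_per_phase):
    # electrons_per_phase: photo-electrons generated during each successive drive phase
    fd_a = 0   # charge accumulated via N+ semiconductor region 71-1 (tap 65-1 active on even phases)
    fd_b = 0   # charge accumulated via N+ semiconductor region 71-2 (tap 65-2 active on odd phases)
    for phase, electrons in enumerate(electrons_per_phase):
        if phase % 2 == 0:      # MIX0 = 1.5 V, MIX1 = 0 V
            fd_a += electrons
        else:                   # MIX0 = 0 V, MIX1 = 1.5 V
            fd_b += electrons
    return fd_a, fd_b

print(accumulate([120, 80, 118, 82]))   # -> (238, 162)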

As described above, in the case where pixel signals obtained by photoelectric conversion of mutually different periods in the same pixel 51 are obtained, the signal processing unit 26 calculates distance information indicating the distance to the object based on these pixel signals, and outputs the distance information to the subsequent stage.

The method of allocating signal carriers to the N + semiconductor regions 71 different from each other and calculating distance information based on signals corresponding to the signal carriers as described above is called an indirect ToF method.
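The patent does not spell out the calculation performed by the signal processing unit 26, so the following sketch only illustrates one commonly used two-tap estimate for pulsed indirect ToF, under the assumptions of a rectangular light pulse and negligible background light; q_a and q_b stand for the pixel signals obtained from the FD 102A and the FD 102B.

C_LIGHT = 2.998e8                      # speed of light [m/s]

def itof_distance(q_a, q_b, pulse_width_s):
    # fraction of the returning pulse that falls into the second accumulation window
    delay = pulse_width_s * q_b / (q_a + q_b)
    return C_LIGHT * delay / 2.0       # halved because the light travels to the object and back

print(itof_distance(0.6, 0.4, 30e-9))  # about 1.8 m for a 30 ns pulse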

Here, the signal extraction unit 65 in which reading of a signal corresponding to the electric charge (electrons) obtained by photoelectric conversion is to be performed (i.e., the signal extraction unit 65 in which the electric charge obtained by photoelectric conversion is to be detected) will be referred to as an active tap (active tap).

In contrast, basically, the signal extraction unit 65 in which reading of a signal corresponding to the electric charge obtained by photoelectric conversion is not performed (i.e., the signal extraction unit 65 that is not an active tap) will be referred to as an inactive tap (inactive tap).

In the above example, the signal extraction unit 65 in which the voltage of 1.5V is applied to the P + semiconductor region 73 is an active tap, and the signal extraction unit 65 in which the voltage of 0V is applied to the P + semiconductor region 73 is an inactive tap.

In a CAPD sensor, there is a value called Cmod (contrast between active and inactive taps), which is an indicator of the accuracy of the distance measurement. Cmod is calculated by the following equation (1). In equation (1), I0 is the signal detected in one of the two charge detection units (N+ semiconductor regions 71), and I1 is the signal detected in the other.

Cmod = {|I0 - I1| / (I0 + I1)} × 100 … (1)

Cmod indicates what percentage of the electric charge generated by photoelectric conversion of the incident infrared light can be detected in the N+ semiconductor region 71 of the signal extraction unit 65 serving as the active tap, that is, whether a signal corresponding to the electric charge can be extracted; in other words, Cmod represents the charge separation efficiency.
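Equation (1) can be evaluated directly; the small sketch below simply restates it in code, with two example signal pairs.

def cmod(i0, i1):
    # Cmod as defined in equation (1), in percent
    return abs(i0 - i1) / (i0 + i1) * 100.0

print(cmod(100.0, 0.0))   # 100.0: all charge detected in the active tap
print(cmod(80.0, 20.0))   #  60.0: separation degraded by charge leaking to the inactive tap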

For example, in the case where infrared light from the outside is incident on the region of an inactive tap and photoelectric conversion is performed there, it is highly likely that the electrons generated by the photoelectric conversion, which are signal carriers, will move to the N+ semiconductor region 71 of the inactive tap. In that case, the electric charge of some of the electrons obtained by the photoelectric conversion is not detected in the N+ semiconductor region 71 of the active tap, and Cmod (i.e., the charge separation efficiency) is lowered.

Therefore, in the pixel 51, the infrared light is condensed to the vicinity of the central portion of the pixel 51, which is located at substantially the same distance from the two signal extraction units 65, so that the probability that infrared light incident from the outside is photoelectrically converted in the region of the inactive tap is reduced, and the charge separation efficiency is improved. In addition, in the pixel 51, the modulation contrast can also be improved. In other words, electrons obtained by photoelectric conversion can easily be guided to the N+ semiconductor region 71 of the active tap.

< effects of the light receiving element 1 >

According to the light receiving element 1 described above, the following effects can be obtained.

That is, first, since the light receiving element 1 is of a rear surface illumination type, Quantum Efficiency (QE) × aperture ratio (fill factor) can be maximized, and distance measurement characteristics can be improved by the light receiving element 1.

For example, as shown by an arrow W11 in fig. 5, a general front surface illumination type image sensor (CIS) has a structure in which a wiring 112 and a wiring 113 are formed on the light incident surface side of the PD 111 serving as the photoelectric conversion unit, that is, the side on which light from the outside is incident.

Therefore, for example, as shown by an arrow a21 or an arrow a22, some light obliquely incident toward the PD 111 at an angle may be blocked by the wiring 112 or the wiring 113 and thus may not be incident on the PD 111.

On the other hand, for example, as indicated by an arrow W12, a back surface illumination type image sensor (CIS) has a structure in which a wiring 115 and a wiring 116 are formed on the surface of the PD 114, serving as the photoelectric conversion unit, on the side opposite to the light incident surface side on which light from the outside is incident.

Therefore, a sufficient aperture ratio can be ensured compared to the case of the front surface illumination type. That is, for example, as shown by an arrow a23 or an arrow a24, light obliquely incident toward the PD 114 at an angle may be incident on the PD 114 without being shielded by the wiring. Therefore, by receiving more light, the sensitivity of the pixel can be improved.

The effect of improving the pixel sensitivity obtained by such a back-surface illumination type can also be obtained in the light receiving element 1 as a back-surface illumination type CAPD sensor.

Further, for example, in a front surface illumination type CAPD sensor, as indicated by an arrow W13, a signal extraction unit 122 called a tap (more specifically, the P+ semiconductor region and N+ semiconductor region of the tap) is formed on the light incident surface side, on which light from the outside is incident, inside the PD 121 serving as the photoelectric conversion unit. In addition, the front surface illumination type CAPD sensor has a structure in which wirings 123 and 124, such as contacts and metals connected to the signal extraction unit 122, are formed on the light incident surface side.

Therefore, for example, as shown by an arrow a25 or an arrow a26, some light obliquely incident toward the PD 121 at an angle may be blocked by the wiring 123 or the like, and thus may not be incident on the PD 121. Further, as shown by an arrow a27, light incident perpendicularly to the PD 121 may also be blocked by the wiring 124, and may not be incident on the PD 121.

On the other hand, for example, as indicated by an arrow W14, a back surface illumination type CAPD sensor has a configuration in which the signal extraction unit 126 is formed in a portion of the PD 125, serving as the photoelectric conversion unit, on the side opposite to the light incident surface on which light from the outside is incident. Further, wirings 127 and 128, such as contacts and metals connected to the signal extraction unit 126, are formed on the surface of the PD 125 on the side opposite to the light incident surface.

Here, the PD 125 corresponds to the semiconductor substrate 61 shown in fig. 2, and the signal extraction unit 126 corresponds to the signal extraction unit 65 shown in fig. 2.

In the back-surface illumination type CAPD sensor having such a configuration, a sufficient aperture ratio can be secured as compared with the case of the front-surface illumination type. Therefore, Quantum Efficiency (QE) × aperture ratio (FF) can be maximized, and distance measurement characteristics can be improved.

That is, for example, as shown by an arrow a28 or an arrow a29, light obliquely incident toward the PD 125 at an angle may be incident on the PD 125 without being shielded by the wiring. Similarly, as shown by an arrow a30, light incident perpendicularly to the PD 125 is also incident on the PD 125 without being blocked by a wiring or the like.

As described above, in the back-surface illumination type CAPD sensor, light incident at a certain angle and light incident perpendicularly to the PD 125 can be received, and light reflected by wiring or the like connected to the signal extraction unit (tap) in the front-surface illumination type can be received. Therefore, by receiving more light, the sensitivity of the pixel can be improved. In other words, Quantum Efficiency (QE) × aperture ratio (FF) can be maximized, and as a result, distance measurement characteristics can be improved.

In particular, in the case where the tap is provided near the center of the pixel rather than at the outer edge of the pixel, in the front surface illumination type CAPD sensor, it is difficult to secure a sufficient aperture ratio, and the sensitivity of the pixel is lowered. However, in the light receiving element 1 as the back surface illumination type CAPD sensor, a sufficient aperture ratio can be secured regardless of the arrangement position of the tap, and the sensitivity of the pixel can be improved.

Further, in the back-surface illumination type light receiving element 1, since the signal extraction unit is formed in the vicinity of the surface of the semiconductor substrate 61 on the side opposite to the light incident surface on which infrared light from the outside is incident, photoelectric conversion of the infrared light occurring in the region of the inactive tap can be reduced. Therefore, Cmod, i.e., the charge separation efficiency, can be improved.

Fig. 6 illustrates a cross-sectional view of a pixel of a front-surface illumination type and a back-surface illumination type CAPD sensor.

In the front surface illumination type CAPD sensor on the left side of fig. 6, in the figure, the upper side of the semiconductor substrate 141 is a light incident surface, and a wiring layer 152 including a plurality of layers of wirings, an inter-pixel light shielding portion 153, and an on-chip lens 154 are stacked on the light incident surface side of the semiconductor substrate 141.

In the back surface illumination type CAPD sensor on the right side of fig. 6, in the figure, a wiring layer 152 including a plurality of layers of wirings is formed on the lower side of the semiconductor substrate 142 as the side opposite to the light incident surface, and the inter-pixel light shielding portion 153 and the on-chip lens 154 are stacked on the upper side of the semiconductor substrate 142 as the light incident surface side.

Note that in fig. 6, a gray trapezoid shape shows an area where the intensity of light generated by condensing infrared light by the on-chip lens 154 is strong.

For example, in the front surface illumination type CAPD sensor, there exists a region R11 in which the inactive tap and the active tap are located on the light incident surface side of the semiconductor substrate 141. Therefore, in the case where a large portion of the incident light is directly incident on the inactive tap and photoelectric conversion is performed in the region of the inactive tap, the signal carriers obtained by the photoelectric conversion are not detected in the N+ semiconductor region of the active tap.

In the front surface illumination type CAPD sensor, since the intensity of the infrared light is strong in the region R11 near the light incident surface of the semiconductor substrate 141, the probability that infrared light is photoelectrically converted in the region R11 increases. That is, since the amount of infrared light incident in the vicinity of the inactive tap is large, the number of signal carriers that cannot be detected by the active tap increases, and the charge separation efficiency decreases.

On the other hand, in the back-surface illumination type CAPD sensor, there is a region R12 in which the invalid tap and the valid tap exist at a position of the semiconductor substrate 142 away from the light incident surface, that is, near the surface on the side opposite to the light incident surface. The semiconductor substrate 142 corresponds to the semiconductor substrate 61 shown in fig. 2.

In this example, since the region R12 is located in a part of the surface of the semiconductor substrate 142 on the side opposite to the light incident surface, that is, at a position distant from the light incident surface, the intensity of the incident infrared light is relatively weak in the vicinity of the region R12.

A signal carrier obtained by photoelectric conversion in a region where the infrared light intensity is strong (for example, near the center of the semiconductor substrate 142 or near the incident surface) is guided to the effective tap by an electric field generated in the semiconductor substrate 142, and is detected in the N + semiconductor region of the effective tap.

On the other hand, since the intensity of the incident infrared light is relatively weak near the region R12 including the invalid tap, the probability of performing photoelectric conversion of the infrared light in the region R12 is low. That is, the amount of light of infrared light incident in the vicinity of the invalid tap is small, the number of signal carriers (electrons) generated by photoelectric conversion in the vicinity of the invalid tap and moving to the N + semiconductor region of the invalid tap is reduced, and the charge separation efficiency can be improved. As a result, the distance measurement characteristic can be enhanced.

In the back-surface illumination type light receiving element 1, since the semiconductor substrate 61 can be thinned, the extraction efficiency of electrons (charges) as signal carriers can be improved.

For example, in a front-surface-illumination CAPD sensor, since an aperture ratio cannot be sufficiently secured, it is desirable that the substrate 171 be thick to some extent in order to secure a higher quantum efficiency and suppress a decrease in the quantum efficiency × aperture ratio, as shown by an arrow W31 in fig. 7.

Therefore, the slope of the potential becomes gentle in a region near the surface opposite to the light incident surface in the substrate 171 (for example, in a region R21 of fig. 7), and the electric field in the direction substantially perpendicular to the substrate 171 becomes weak. In this case, since the moving speed of the signal carriers is slowed, the time required from the photoelectric conversion until the signal carriers are detected in the N+ semiconductor region of the effective tap becomes long. Note that, in fig. 7, arrows in the substrate 171 show the electric field in the substrate 171 in the direction perpendicular to the substrate 171.

Further, in the case where the substrate 171 is thick, the moving distance of the signal carrier from a position in the substrate 171 distant from the effective tap to the N + semiconductor region in the effective tap becomes long. Therefore, at a position distant from the effective tap, the time required from performing photoelectric conversion to detecting a signal carrier in the N + semiconductor region of the effective tap becomes longer.
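
The dependence on substrate thickness can be illustrated with a simple drift-transport estimate (an illustrative approximation, not a formula given in this description). For a carrier mobility mu and an electric field E that is roughly uniform across a substrate of thickness d, the drift velocity and transit time are approximately

    v ≈ mu × E,    t ≈ d / (mu × E).

A thick substrate both increases the distance d and, for a given applied voltage, weakens the field E near the surface opposite to the light incident surface, so the transit time grows quickly with thickness; conversely, thinning the substrate shortens d and strengthens E, which is the effect obtained in the back-surface illumination type described below.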

Fig. 8 illustrates a relationship between a position in the thickness direction of the substrate 171 and the moving speed of signal carriers. The region R21 corresponds to a diffusion current region.

As described above, in the case where the substrate 171 is thickened, for example, when the driving frequency is high, that is, when switching between the activation and deactivation of the tap (signal extraction unit) is performed at high speed, it is difficult to completely attract electrons generated at a position (for example, the region R21) distant from the active tap into the N + semiconductor region of the active tap. That is, in the case where the time period during which the tap is in the active state is short, electrons (electric charges) generated in the region R21 or the like may not be detected in the N + semiconductor region of the active tap, and the extraction efficiency of electrons is lowered.

On the other hand, in the back-surface illumination type CAPD sensor, since a sufficient aperture ratio can be ensured, for example, as shown by an arrow W32 in fig. 7, even if the substrate 172 is thinned, a sufficient quantum efficiency × aperture ratio can be ensured. Here, the substrate 172 corresponds to the semiconductor substrate 61 of fig. 2, and an arrow inside the substrate 172 shows an electric field in a direction perpendicular to the substrate 172.

Fig. 9 illustrates a relationship between a position in the thickness direction of the substrate 172 and the moving speed of signal carriers.

As described above, when the thickness of the substrate 172 in the direction perpendicular to the substrate 172 is reduced, the electric field in the direction substantially perpendicular to the substrate 172 becomes strong, and only electrons (charges) in the drift current region, where the moving speed of the signal carriers is fast, are used, while electrons in the diffusion current region, where the moving speed of the signal carriers is slow, are not used. By using only electrons (charges) in the drift current region, the time required from the photoelectric conversion until the signal carriers are detected in the N+ semiconductor region of the active tap becomes short. Further, when the thickness of the substrate 172 is reduced, the moving distance of the signal carriers to the N+ semiconductor region of the effective tap also becomes shorter.

Therefore, in the back-surface-illuminated CAPD sensor, even if the driving frequency is high, the signal carriers (electrons) generated in each region in the substrate 172 can be sufficiently attracted to the N + semiconductor region of the effective tap, and the extraction efficiency of electrons can be improved.

Further, by reducing the thickness of the substrate 172, sufficient electron extraction efficiency can be ensured even at a high driving frequency, and high-speed driving durability can be improved.

In particular, in the back surface illumination type CAPD sensor, since a voltage can be directly applied to the substrate 172 (i.e., the semiconductor substrate 61), the response speed of switching between the activation and deactivation of the tap is fast, and driving can be performed at a high driving frequency. Further, since a voltage can be directly applied to the semiconductor substrate 61, a region that can be modulated in the semiconductor substrate 61 is widened.

Further, in the back-surface illumination type light receiving element 1 (CAPD sensor), since a sufficient aperture ratio can be obtained, the pixel can be miniaturized correspondingly, and the tolerance of the pixel to miniaturization can be improved.

In addition, by making the light receiving element 1 a back-surface illumination type, the capacitance design of the back end of line (BEOL) becomes more flexible, and therefore the degree of freedom in designing the saturation signal amount (Qs) can be improved.

< sectional views of a plurality of pixels >

Fig. 10 and 11 illustrate cross-sectional views of a state in which a plurality of (three) pixels 51 described above are arranged.

Fig. 10 illustrates a sectional view in the same sectional direction as that of the sectional view of fig. 2 and corresponding to the line a-a 'of fig. 3, and fig. 11 illustrates a sectional view corresponding to the line B-B' of fig. 3.

Since the sectional view of fig. 10 is the same as that of fig. 2, a description thereof will be omitted.

With regard to fig. 11, a portion different from fig. 10 will be explained.

In fig. 11, the pixel transistor Tr is formed in a pixel boundary region of an interface portion between the multilayer wiring layer 91 and the semiconductor substrate 61. The pixel transistor Tr is any of the transfer transistor 101, the switching transistor 104, the reset transistor 105, the amplification transistor 106, and the selection transistor 107 shown in fig. 4.

Further, in addition to the voltage application wiring 93 for applying a predetermined voltage to the P + semiconductor region 73 as a voltage application unit being formed on the metal film M1, the signal extraction wiring 95 connected to a part of the N + semiconductor region 71 as a charge detection unit is also formed on the metal film M1. The signal extraction wiring 95 transfers the charge detected in the N + semiconductor region 71 to the FD 102.

As shown in fig. 11, the voltage application wiring 93 of the metal film M1 is electrically connected to one of the wirings 96-1 and 96-2 of the metal film M4 through a via hole (via). The wiring 96-1 of the metal film M4 is connected to the wiring 97-1 of the metal film M5 at a predetermined position (position not shown in fig. 11) through a via hole, and the wiring 96-2 of the metal film M4 is connected to the wiring 97-2 of the metal film M5 at a predetermined position through a via hole.

A of fig. 12 shows a plan view of the metal film M4, and B of fig. 12 shows a plan view of the metal film M5.

In a and B of fig. 12, the area of the pixel 51 and the areas of the signal extraction units 65-1 and 65-2 having the octagonal shapes shown in fig. 3 are shown by broken lines. In a and B of fig. 12, the vertical direction of the drawing is the vertical direction of the pixel array unit 21, and the horizontal direction of the drawing is the horizontal direction of the pixel array unit 21.

In a predetermined region where the wiring regions overlap, the P + semiconductor region 73 as a voltage applying unit of the signal extracting unit 65-1 is connected to the wiring 96-1 of the metal film M4 through a via hole, and the wiring 96-1 is connected to the wiring 97-1 of the metal film M5 through a via hole or the like, and so on.

Similarly, in a predetermined region where the wiring regions overlap, the P + semiconductor region 73 as a voltage applying unit of the signal extracting unit 65-2 is connected to the wiring 96-2 of the metal film M4 through a via hole, and the wiring 96-2 is connected to the wiring 97-2 of the metal film M5 through a via hole or the like, and so on.

Predetermined voltages (voltages MIX0 or MIX1) from the driving units of the peripheral circuit unit around the pixel array unit 21 are transmitted to the wirings 97-1 and 97-2 of the metal film M5 and supplied to the wirings 96-1 and 96-2 of the metal film M4. Further, the predetermined voltages are applied from the wirings 96-1 and 96-2 of the metal film M4 to the voltage application wiring 93 of the metal film M1 through the metal films M3 and M2, and are supplied to the P+ semiconductor region 73 as a voltage application unit.

<2. necessity of enhancing basic pixel Structure >

The basic structure of the light receiving element of the embodiment to which the present technology is applied has been described above. Hereinafter, with respect to the light receiving element having the above-described basic structure, the configuration of the light receiving element to which the embodiment of the present technology is applied will be explained.

The light receiving element to which the embodiment of the present technology is applied is a light receiving element in which a part of the structure of the multilayer wiring layer 91 of the pixel 51 is improved with respect to the basic structure of the light receiving element 1 described above. Hereinafter, the pixel obtained by improving the structure of the pixel 51 of the light receiving element 1 by applying an embodiment of the present technology will be described as a pixel 201. Note that portions corresponding to the pixel 51 of fig. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.

As described with reference to fig. 2 and the like, the pixel 51 of the light receiving element 1 described above employs a structure in which the reflecting member 94 (a member that reflects incident light) is provided on the metal film M1 closest to the semiconductor substrate 61 in the multilayer wiring layer 91, so that light having passed through the semiconductor substrate 61 as the photoelectric conversion region is reflected back toward the semiconductor substrate 61, thereby increasing the amount of light that contributes to photoelectric conversion.

In the structure of the pixel 51, however, the reflection structure using the reflecting member 94 also increases photoelectric conversion in the vicinity of the charge detection units, so that the charges that do not follow the voltage switching increase. In that case, Cmod, which represents the signal contrast of the CAPD sensor, may decrease, and the improvement effect may be reduced.

Therefore, a pixel structure is proposed below that improves the distance measurement accuracy by suppressing, in the vicinity of the charge detection units, charges that are detected by the charge detection units without following the voltage switching, while maintaining the characteristics of the basic structure of the light receiving element 1.

<3. first embodiment of pixel >

Fig. 13 is a sectional view illustrating a pixel structure of a first embodiment of a pixel to which the present technique is applied.

Fig. 13 illustrates a sectional view in the same sectional direction as that of the pixel 51 illustrated in fig. 2. The same applies to fig. 14 to 27 described later.

The pixel 201 according to the first embodiment of fig. 13 differs from the pixel 51 of fig. 2 in that a reflection suppression film 211 containing polycrystalline silicon is newly formed between the surface-side interface of the semiconductor substrate 61 where the multilayer wiring layer 91 is formed and the wiring layer of the metal film M1. More specifically, the reflection-suppressing film 211 is formed on the lower side of the N + semiconductor region 71 as the charge detection unit in fig. 13, in other words, between the N + semiconductor region 71 and the wiring layer of the metal film M1, and for example, the planar surface region on which the reflection-suppressing film 211 is formed has an octagonal shape similar to the N + semiconductor region 71 as the charge detection unit.

The reflection suppressing film 211 containing polycrystalline silicon can be formed in the same process as the gate electrode of the pixel transistor Tr formed in the pixel boundary region shown in fig. 11.

Since the incident light is attenuated only gradually as it penetrates deep into the substrate, the light reflected on the front surface side also has a high intensity. As shown in fig. 14, by forming the reflection suppressing film 211 containing polycrystalline silicon between the front-surface-side interface of the semiconductor substrate 61 and the wiring layer of the metal film M1 as described above, the reflection suppressing film 211 can suppress reflection of light having passed through the semiconductor substrate 61 back toward the semiconductor substrate 61 side, so the charges (electrons) generated by reflected light directly incident on the charge detection unit can be reduced. As a result, in the vicinity of the charge detection unit, charges detected by the charge detection unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<4. second embodiment of pixel >

Fig. 15 is a sectional view illustrating a pixel structure of a second embodiment of a pixel to which the present technique is applied.

The pixel 201 according to the second embodiment of fig. 15 differs from the pixel 51 of fig. 2 in that a reflection suppressing film 212 using a material other than polysilicon is newly formed between the front-surface-side interface of the semiconductor substrate 61 on which the multilayer wiring layer 91 is formed and the wiring layer of the metal film M1. The material of the reflection suppressing film 212 may be any material having a lower light reflectance than the SiO2 used as the interlayer insulating film 92, and is, for example, a nitride film such as SiN or SiCN. Similarly to the reflection suppressing film 211 of the first embodiment, the reflection suppressing film 212 is formed on the lower side of the N+ semiconductor region 71 as the charge detection unit in fig. 15, in other words, between the N+ semiconductor region 71 as the charge detection unit and the wiring layer of the metal film M1, and, for example, the planar region in which the reflection suppressing film 212 is formed has an octagonal shape similar to that of the N+ semiconductor region 71 as the charge detection unit.

Similarly to the first embodiment, by forming the reflection suppressing film 212 of a material other than polysilicon between the front-surface-side interface of the semiconductor substrate 61 and the wiring layer of the metal film M1 as described above, the reflection suppressing film 212 can suppress reflection of light having passed through the semiconductor substrate 61 back toward the semiconductor substrate 61 side, so the charges (electrons) generated by reflected light directly incident on the charge detection unit can be reduced. As a result, in the vicinity of the charge detection unit, charges detected by the charge detection unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.
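
The benefit of a nitride film over the SiO2 interlayer insulating film can be illustrated with the normal-incidence Fresnel reflectance at the substrate interface (the refractive indices below are typical near-infrared textbook values assumed for illustration, not values given in this description):

    R = ((n1 - n2) / (n1 + n2))^2.

For silicon (n ≈ 3.5) against SiO2 (n ≈ 1.45), R ≈ (2.05/4.95)^2 ≈ 0.17, whereas against SiN (n ≈ 2.0), R ≈ (1.5/5.5)^2 ≈ 0.07. A film of intermediate refractive index therefore reflects noticeably less of the light arriving at the front-surface interface back into the substrate, which is consistent with the reflection suppressing effect described above.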

<5. third embodiment of pixel >

Fig. 16 is a sectional view illustrating a pixel structure of a third embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the third embodiment of fig. 16 differs from the pixel 51 of fig. 2 in that the reflecting member 94 in the wiring layer of the metal film M1 of the multilayer wiring layer 91 is replaced with a reflecting member 213.

The reflecting member 94 of the pixel 51 of fig. 2 is also formed in the lower region of the N + semiconductor region 71 as the charge detecting unit. However, the reflection member 213 of fig. 16 is different from the reflection member 94 in that the reflection member 213 is not formed in the lower region of the N + semiconductor region 71.

As described above, since the reflecting member 213 formed on the metal film M1 of the multilayer wiring layer 91 is not disposed in the region below the N+ semiconductor region 71, reflection of light having passed through the semiconductor substrate 61 back toward the semiconductor substrate 61 side is suppressed in that region, so the charges (electrons) generated by reflected light directly incident on the charge detection unit can be reduced. As a result, in the vicinity of the charge detection unit, charges detected by the charge detection unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<6. fourth embodiment of pixel >

Fig. 17 is a sectional view illustrating a pixel structure of a fourth embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the fourth embodiment of fig. 17 has a structure including all the configurations according to the first to third embodiments shown in fig. 13 to 16. That is, the pixel 201 includes the reflection-suppressing film 211 shown in fig. 13, the reflection-suppressing film 212 shown in fig. 15, and the reflection member 213 shown in fig. 16, and the other structure is similar to the pixel 51 of fig. 2.

As described above, since the pixel includes the reflection suppressing film 211, the reflection suppressing film 212, and the reflecting member 213 according to the first to third embodiments, reflection of light having passed through the semiconductor substrate 61 back toward the semiconductor substrate 61 side can be suppressed, and the charges directly incident on the charge detection unit (the charges corresponding to light that has passed through the semiconductor substrate 61) can be further reduced. As a result, in the vicinity of the charge detection unit, charges detected by the charge detection unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

(modification of the fourth embodiment)

Note that in the pixel structure of fig. 17, the positions of the reflection suppressing film 212 and the reflection member 213 of the metal film M1 in the longitudinal direction (substrate depth direction) are different positions.

However, as shown in fig. 18, the positions of the reflection suppressing film 212 and the reflection member 213 of the metal film M1 in the longitudinal direction may be the same position.

Alternatively, as in the pixel structure of fig. 17, the reflection suppressing film 211, the reflection suppressing film 212, and the reflecting member 213 may be provided at different layer positions, and further, a reflecting member for reflecting light may be separately provided, in the same layer as the reflection suppressing film 212, at a position above the reflecting member 213.

<7. fifth embodiment of pixel >

Fig. 19 is a sectional view illustrating a pixel structure of a fifth embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the fifth embodiment of fig. 19 has a structure in which a buried insulating film (STI) 231 is further added to the structure according to the first embodiment shown in fig. 13.

That is, in the pixel 201 of fig. 19, the buried insulating film 231 is formed between the N + semiconductor region 71 and the P + semiconductor region 73, and is formed in the vicinity of the N + semiconductor region 71. The buried insulating film 231 separates the N + semiconductor region 71 and the P + semiconductor region 73 from each other. Further, the buried insulating film 231 separates the semiconductor substrate 61 including the P-type semiconductor layer and the N + semiconductor region 71 from each other.

As described above, by forming the buried insulating film 231 in the vicinity of the N + semiconductor region 71 and the P + semiconductor region 73, the N + semiconductor region 71 and the P + semiconductor region 73 can be reliably separated from each other. Further, as shown in fig. 20, it is possible to further reduce the electric charges incident on the electric charge detection unit, which are generated by photoelectric conversion of the oblique light or the reflected light of the oblique light. As a result, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit, which does not follow the voltage switching, can be suppressed, and the distance measurement accuracy can be improved.

Note that the pixel 201 according to the fifth embodiment of fig. 19 has a structure in which the buried insulating film 231 is further added in the structure according to the first embodiment shown in fig. 13, but of course, a structure in which the buried insulating film 231 is further added is also possible in the second to fourth embodiments and the modification of the fourth embodiment described above. Also in this case, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<8. sixth embodiment of pixel >

Fig. 21 is a sectional view illustrating a pixel structure of a sixth embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the sixth embodiment of fig. 21 has a structure in which a buried insulating film (STI) 232 is further added to the structure according to the first embodiment shown in fig. 13.

Here, a comparison with the pixel 201 according to the fifth embodiment shown in fig. 19 is made. In the pixel 201 of fig. 19, the buried insulating film 231 is formed between the N+ semiconductor region 71 and the P+ semiconductor region 73 and in the vicinity of the N+ semiconductor region 71, but the buried insulating film 231 is not formed in the vicinity of the substrate interface of the semiconductor substrate 61 at the pixel center portion and at the pixel boundary portion.

On the other hand, in the pixel 201 according to the sixth embodiment shown in fig. 21, the buried insulating film 232 is also formed in the vicinity of the substrate interface of the semiconductor substrate 61 at the pixel center portion and at the pixel boundary portion. More specifically, the buried insulating film 232 is also formed between the N+ semiconductor regions 71-1 and 71-2 in the pixel center portion, between the N+ semiconductor region 71-1 in the vicinity of the pixel boundary and the N+ semiconductor region 71-2 (not shown) of the right-adjacent pixel 201, and between the N+ semiconductor region 71-2 in the vicinity of the pixel boundary and the N+ semiconductor region 71-1 (not shown) of the left-adjacent pixel 201. Similarly to the pixel 201 according to the fifth embodiment shown in fig. 19, the buried insulating film 232 is also formed between the N+ semiconductor region 71 and the P+ semiconductor region 73 and in the vicinity of the N+ semiconductor region 71.

As described above and as shown in fig. 22, by forming the buried insulating film 232 in the vicinity of the substrate interface at the pixel center portion and at the pixel boundary portion in addition to the vicinity of the N+ semiconductor region 71 and the P+ semiconductor region 73, the reflectance of incident light at the pixel center portion and the pixel boundary portion, away from the charge detection units, can be improved. As a result, in the vicinity of the charge detection units, charges detected by the charge detection units without following the voltage switching can be suppressed, and the charges detected by the effective tap increase, so the distance measurement accuracy can be improved.

Note that the pixel 201 according to the sixth embodiment of fig. 21 has a structure in which the buried insulating film 232 is further added to the structure according to the first embodiment shown in fig. 13, but of course, a structure in which the buried insulating film 232 is further added to the structure in the second to fourth embodiments and the modification of the fourth embodiment described above is also possible. Also in this case, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit, which does not follow the voltage switching, can be suppressed, and the distance measurement accuracy can be improved by increasing the charge detected by the effective tap.

<9. seventh embodiment of pixel >

Fig. 23 is a sectional view illustrating a pixel structure of a seventh embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the seventh embodiment in fig. 23 has a structure in which a light shielding film 241 is further added in the buried insulating film 231 of the configuration according to the fifth embodiment shown in fig. 19. Since the light shielding film 241 is formed in the buried insulating film 231, the light shielding film 241 is located between the N+ semiconductor region 71 and the P+ semiconductor region 73 and in the vicinity of the N+ semiconductor region 71. As the material of the light shielding film 241, for example, a metal material such as tungsten (W) is used, but the material is not limited thereto as long as it is a light shielding material.

As shown in fig. 24, as described above, by further providing the light-shielding film 241 in the buried insulating film 231, it is possible to further reduce the electric charges incident on the charge detection unit, which are generated by photoelectric conversion of the oblique light or the reflected light of the oblique light. As a result, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit, which does not follow the voltage switching, can be suppressed, and the distance measurement accuracy can be improved.

It is to be noted that the pixel 201 according to the seventh embodiment of fig. 23 has a structure in which the buried insulating film 231 and the light shielding film 241 are further added in the configuration according to the first embodiment shown in fig. 13, but of course, a structure in which the buried insulating film 231 and the light shielding film 241 of fig. 23 are further added in the second to fourth embodiments and the modification of the fourth embodiment described above is also possible. Also in this case, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<10. eighth embodiment of pixel >

Fig. 25 is a sectional view illustrating a pixel structure of an eighth embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the eighth embodiment shown in fig. 25 has a structure in which the light shielding film 241 of the seventh embodiment shown in fig. 23 is further added in the buried insulating film 232 of the configuration according to the sixth embodiment shown in fig. 21.

Similarly to the sixth embodiment, as described above, by forming the buried insulating film 232 in the vicinity of the interface of the semiconductor substrate 61 of the pixel center portion and the pixel boundary portion in addition to forming the buried insulating film 232 in the vicinity of the N + semiconductor region 71 and the vicinity of the P + semiconductor region 73, the reflectance of incident light in the pixel center portion and the pixel boundary portion other than the charge detection unit can be improved. As a result, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit, which does not follow the voltage switching, can be suppressed, and the distance measurement accuracy can be improved by increasing the charge detected by the effective tap.

Further, similarly to the seventh embodiment, by further providing the light shielding film 241 in the buried insulating film 232, it is possible to further reduce the electric charges incident on the charge detecting unit, which are generated by photoelectric conversion of the oblique light or the reflected light of the oblique light. As a result, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit, which does not follow the voltage switching, can be suppressed, and the distance measurement accuracy can be improved.

It is to be noted that the pixel 201 according to the eighth embodiment of fig. 25 has a structure in which the buried insulating film 232 and the light shielding film 241 are further added in the configuration according to the first embodiment shown in fig. 13, but of course, a structure in which the buried insulating film 232 and the light shielding film 241 of fig. 25 are further added in the second to fourth embodiments and the modification of the fourth embodiment described above is also possible. Also in this case, in the vicinity of the charge detecting unit, the charge detected by the charge detecting unit without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<11. ninth embodiment of pixel >

Fig. 26 is a sectional view illustrating a pixel structure of a ninth embodiment of a pixel to which the present technology is applied.

The pixel 201 according to the ninth embodiment of fig. 26 differs from the configuration according to the eighth embodiment shown in fig. 25 only in the structure of the light shielding film 241.

Specifically, in the pixel 201 according to the eighth embodiment of fig. 25, the light shielding film 241 is formed only in the vicinity (side portion) of the N + semiconductor region 71 and the vicinity (side portion) of the P + semiconductor region 73 in the buried insulating film 232.

On the other hand, in the pixel 201 according to the ninth embodiment of fig. 26, in addition to forming the light shielding film 241 in the vicinity (side portion) of the N + semiconductor region 71 and the vicinity (side portion) of the P + semiconductor region 73 in the buried insulating film 232, the light shielding film 241 is formed in the vicinity of the upper surface in the buried insulating film 232 at the pixel center portion and the pixel boundary portion. More specifically, the light shielding film 241 is also formed near the upper surface at three positions: the inside of the buried insulating film 232 between the N + semiconductor regions 71-1 and 71-2 in the center portion of the pixel, the inside of the buried insulating film 232 between the N + semiconductor region 71-1 in the vicinity of the pixel boundary and the N + semiconductor region 71-2 (not shown) of the right-adjacent pixel 201, and the inside of the buried insulating film 232 between the N + semiconductor region 71-2 in the vicinity of the pixel boundary and the N + semiconductor region 71-1 (not shown) of the left-adjacent pixel 201.

As described above, the light shielding film 241 may be formed not only in the vicinity (side portions) of the N+ semiconductor region 71 or the P+ semiconductor region 73 but also, in the planar direction, over the wide portions of the formation region of the buried insulating film 232.

Further, as shown in fig. 27, the light shielding film 241 may be formed not only in the vicinity (side portions) of the N+ semiconductor region 71 or the P+ semiconductor region 73 and in the vicinity of the upper surface of the buried insulating film 232, but also so as to be buried to a predetermined depth from the substrate interface into the buried insulating film 232 in the region between the two N+ semiconductor regions 71 adjacent at the pixel center portion and in the region at the pixel boundary portion.

<12. summary >

The pixel 201 according to the first to ninth embodiments described above has a reflection suppressing structure that suppresses light reflection in a planar region corresponding to the first charge detecting unit (for example, the N + semiconductor region 71-1) and the second charge detecting unit (for example, the N + semiconductor region 71-2) in the multilayer wiring layer 91.

The reflection suppressing structure is, for example, the reflection suppressing film 211 containing polysilicon in the first embodiment of fig. 13 and the reflection suppressing film 212 containing a nitride film in the second embodiment of fig. 15. Further, in the third embodiment of fig. 16, the reflection suppressing structure is realized by the reflecting member 213 that is not provided in the region below the N+ semiconductor region 71, and in the fourth embodiment of fig. 17, the reflection suppressing structure is a structure in which the reflection suppressing film 211 and the reflection suppressing film 212 are stacked in the stacking direction of the multilayer wiring layer 91.

Since the pixel 201 includes the reflection suppressing structure, reflection of light having passed through the semiconductor substrate 61 back toward the semiconductor substrate 61 is suppressed in the planar region corresponding to the charge detection units. Therefore, the charges directly incident on the charge detection units can be reduced. As a result, in the vicinity of the charge detection units, charges detected by the charge detection units without following the voltage switching can be suppressed, and the distance measurement accuracy can be improved.

<13. construction example of ranging module >

Fig. 28 is a block diagram illustrating a configuration example of a ranging module that outputs distance measurement information using the light receiving element 1 including any of the pixel structures of the first to ninth embodiments.

The ranging module 500 includes a light emitting unit 511, a light emission control unit 512, and a light receiving unit 513.

The light emitting unit 511 has a light source that emits light of a predetermined wavelength, and the light emitting unit 511 irradiates the object with irradiation light whose luminance periodically changes. For example, the light emitting unit 511 has a light emitting diode as a light source that emits infrared light having a wavelength in the range of 780nm to 1000nm, and the light emitting unit 511 generates irradiation light in synchronization with the light emission control signal CLKp of a rectangular wave supplied from the light emission control unit 512.

It is to be noted that the light emission control signal CLKp is not limited to a rectangular wave as long as the control signal CLKp is a periodic signal. For example, the light emission control signal CLKp may be a sine wave.

The light emission control unit 512 supplies a light emission control signal CLKp to the light emitting unit 511 and the light receiving unit 513, and the light emission control unit 512 controls the irradiation timing of the irradiation light. The frequency of the light emission control signal CLKp is, for example, 20 megahertz (MHz). Note that the frequency of the light emission control signal CLKp is not limited to 20 megahertz (MHz), and may be 5 megahertz (MHz) or the like.
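
As a general property of indirect ToF measurement (an illustrative calculation, not a value given in this description), the modulation frequency determines the unambiguous distance range d_max = c / (2 × f). At f = 20 MHz, d_max = (3 × 10^8 m/s) / (2 × 20 × 10^6 Hz) = 7.5 m, and at f = 5 MHz, d_max = 30 m. A higher frequency therefore gives finer distance resolution over a shorter unambiguous range, which is one reason the frequency of the light emission control signal CLKp may be chosen differently depending on the application.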

The light receiving unit 513 receives reflected light reflected from the object, calculates distance information of each pixel from the light reception result, generates a depth image in which the distance to the object is represented by a gray value of each pixel, and outputs the depth image.

A light receiving element 1 including any of the pixel structures in the first to ninth embodiments is used for the light receiving unit 513. For example, the light receiving element 1 serving as the light receiving unit 513 calculates the distance information of each pixel from the signal intensity detected by each charge detecting unit (N + semiconductor region 71) of each of the signal extracting units 65-1 and 65-2 of each pixel 201 of the pixel array unit 21 based on the light emission control signal CLKp.
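
The following is a minimal sketch, in Python, of how distance information could be derived per pixel from the tap signals using the common four-phase indirect ToF scheme. It is an illustrative assumption about the processing rather than the exact calculation performed inside the light receiving unit 513, and the variable names (a0, a90, a180, a270 for the signals detected at the four modulation phases) are hypothetical.

    import math

    C = 3.0e8        # speed of light [m/s]
    F_MOD = 20.0e6   # modulation frequency of CLKp [Hz], example value from the text

    def depth_from_four_phases(a0, a90, a180, a270):
        """Estimate the distance [m] from the signal detected at four modulation
        phases (0, 90, 180, 270 degrees). In a two-tap CAPD pixel the complementary
        tap carries the opposite phase, so differential values such as
        (tap A - tap B) per phase can be used in the same way."""
        # Phase delay of the reflected light relative to the emitted light.
        phase = math.atan2(a90 - a270, a0 - a180)
        if phase < 0.0:
            phase += 2.0 * math.pi
        # Convert the phase delay into a round-trip time and then into a distance.
        return (C / (2.0 * F_MOD)) * (phase / (2.0 * math.pi))

    # Example usage with made-up signal intensities.
    print(depth_from_four_phases(120.0, 180.0, 80.0, 20.0))

A depth image as described above can then be formed by mapping the per-pixel distance to a gray value.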

As described above, the light receiving element 1 including any of the pixel structures of the first to ninth embodiments can be incorporated as the light receiving unit 513 of the ranging module 500, which obtains the distance information to the object by the indirect ToF method and outputs it. Therefore, the distance measurement characteristics of the ranging module 500 can be improved.

As described above, according to the embodiments of the present technology, by configuring the CAPD sensor as the back surface illumination type light receiving element, the distance measurement characteristic can be improved.

It is to be noted that the light receiving element 1 can be applied to various electronic apparatuses, for example, an image pickup apparatus such as a digital still camera or a digital video camera having a distance measurement function, and a mobile phone having a distance measurement function, in addition to the above-described distance measurement module.

Of course, in the present technology, a combination of the above embodiments is also possible depending on the situation. That is, for example, the number or position of the signal extraction units provided in the pixel, the shape of the signal extraction units, or whether the signal extraction units are caused to have a common structure, whether or not there is an on-chip lens, whether or not there is an interpixel light-shielding portion, whether or not there is a separation region, the thickness of the on-chip lens or substrate, the type or film design of the substrate, whether or not there is an offset on the incident surface, whether or not there is a reflection member, and the like may be selected, as appropriate, in accordance with the characteristics such as pixel sensitivity and the like that are prioritized.

Further, in the above description, an example of using electrons as signal carriers has been described. However, holes generated by photoelectric conversion may also be used as signal carriers. In this case, it is sufficient that the charge detection unit for detecting signal carriers includes a P + semiconductor region, the voltage application unit for generating an electric field in the substrate includes an N + semiconductor region, and holes as signal carriers are detected in the charge detection unit provided in the signal extraction unit.

<14. application example of endoscopic surgery System >

The technique of the present invention (present technique) can be applied to various products. For example, the techniques of the present invention may be applied to an endoscopic surgical system.

Fig. 29 is a view depicting a schematic configuration example of an endoscopic surgery system to which the technique according to the embodiment of the present invention (present technique) can be applied.

In fig. 29, a state in which a surgeon (doctor) 11131 is performing an operation on a patient 11132 on a bed 11133 using an endoscopic surgery system 11000 is illustrated. As shown, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy device 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.

The endoscope 11100 includes a lens barrel 11101, a region of a predetermined length from the distal end of which is inserted into a body cavity of the patient 11132, and a camera 11102 connected to the proximal end of the lens barrel 11101. In the example shown, the endoscope 11100 is depicted as a rigid endoscope having a rigid lens barrel 11101. However, the endoscope 11100 may otherwise be configured as a flexible endoscope having a flexible lens barrel 11101.

The lens barrel 11101 has an opening at its distal end in which an objective lens is mounted. The light source device 11203 is connected to the endoscope 11100 so that light generated by the light source device 11203 is guided to the distal end of the lens barrel 11101 through a light guide extending inside the lens barrel 11101 and is irradiated toward an observation target in the body cavity of the patient 11132 through the objective lens. It is noted that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.

An optical system and an image pickup element are provided inside the camera 11102 so that reflected light (observation light) from an observation target is condensed on the image pickup element through the optical system. The image pickup element photoelectrically converts observation light to generate an electric signal corresponding to the observation light, that is, an image signal corresponding to an observation image. The image signal is transmitted to the CCU11201 as RAW data.

The CCU11201 includes a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or the like, and the CCU11201 integrally controls the operation of the endoscope 11100 and the display device 11202. Further, the CCU11201 receives an image signal from the camera 11102, and the CCU11201 performs various image processing for displaying an image based on the image signal, such as development processing (demosaicing processing), on the image signal.

Under the control of the CCU11201, the display device 11202 displays an image based on the image signal on which image processing has been performed by the CCU11201.

The light source device 11203 includes a light source such as a Light Emitting Diode (LED), and supplies irradiation light when imaging the operation region to the endoscope 11100.

The input device 11204 is an input interface of the endoscopic surgical system 11000. Through the input device 11204, the user can input various kinds of information and instructions to the endoscopic surgical system 11000. For example, the user inputs an instruction or the like to change the imaging conditions of the endoscope 11100 (type of irradiation light, magnification, focal length, or the like).

The treatment tool control device 11205 controls driving of the energy device 11112 for cauterization or incision of tissue, or closure of blood vessels, or the like. The pneumoperitoneum device 11206 supplies gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate the body cavity, so as to ensure the field of view of the endoscope 11100 and ensure the working space of the surgeon. The recorder 11207 is a device capable of recording various information relating to the operation. The printer 11208 is a device capable of printing various information related to the operation in various forms such as text, images, or graphics.

It is to be noted that the light source device 11203 that supplies illumination light when imaging the surgical field to the endoscope 11100 may include a white light source including, for example, an LED, a laser light source, or a combination thereof. In the case where the white light source includes a combination of red, green, and blue (RGB) laser light sources, since the output intensity and the output timing can be controlled with high accuracy for each color (each wavelength), the light source device 11203 can adjust the white balance of the captured image. Further, in this case, if the observation target is irradiated with the laser beams from the respective RGB laser light sources in a time-division manner, and the driving of the image pickup element of the camera 11102 is controlled in synchronization with the irradiation timing, images corresponding to R, G and B colors, respectively, may also be picked up in a time-division manner. According to this method, a color image can be obtained even if a color filter is not provided for the image pickup element.

Further, the light source device 11203 may be controlled such that the intensity of light to be output is changed for each predetermined time. By controlling the driving of the image pickup element of the camera 11102 in synchronization with the timing of the light intensity change so as to acquire images in a time-division manner and synthesize the images, a high dynamic range image free of underexposed blocking shadows (blocked up shadows) and overexposed bright spots can be produced.

Further, the light source device 11203 may be configured to supply light of a predetermined wavelength band for special light observation. In the special light observation, for example, by utilizing the wavelength dependence of light absorption in human tissue and irradiating light of a narrower band than the irradiation light used in ordinary observation (i.e., white light), narrow-band observation (narrow-band imaging) in which predetermined tissue such as blood vessels in the surface portion of a mucous membrane is imaged with high contrast can be performed. Alternatively, in the special light observation, fluorescence observation for obtaining an image from fluorescence generated by irradiation of excitation light may be performed. In the fluorescence observation, fluorescence from the human tissue may be observed by irradiating excitation light onto the human tissue (autofluorescence observation), or a fluorescence image may be obtained by locally injecting an agent such as indocyanine green (ICG) into the human tissue and irradiating excitation light corresponding to the fluorescence wavelength of the agent onto the human tissue. The light source device 11203 may be configured to supply such narrow-band light and/or excitation light suitable for special light observation as described above.

Fig. 30 is a block diagram depicting a functional configuration example of the camera 11102 and the CCU11201 shown in fig. 29.

The camera 11102 includes a lens unit 11401, an image pickup unit 11402, a drive unit 11403, a communication unit 11404, and a camera control unit 11405. The CCU11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera 11102 and the CCU11201 are connected to communicate with each other by a transmission cable 11400.

The lens unit 11401 is an optical system provided at a connection position with the lens barrel 11101. Observation light entering from the distal end of the lens barrel 11101 is guided to the camera 11102 and introduced into the lens unit 11401. The lens unit 11401 includes a combination of a plurality of lenses including a zoom lens and a focus lens.

The number of image pickup elements included in the image pickup unit 11402 may be one (single plate type) or plural (multiple plate type). In the case where the image pickup unit 11402 is configured as a multi-plate type image pickup unit, for example, the image pickup element may generate image signals corresponding to the respective R, G and B and may synthesize the image signals to obtain a color image. The image pickup unit 11402 may also be configured to have a pair of image pickup elements to acquire respective image signals for the left eye and image signals for the right eye for three-dimensional (3D) display. If a 3D display is performed, the surgeon 11131 can more accurately understand the depth of the living tissue in the surgical field. Note that in the case where the image pickup unit 11402 is configured as a stereoscopic image pickup unit, a plurality of systems of the lens unit 11401 are provided corresponding to the respective image pickup elements.

Further, the image pickup unit 11402 may not necessarily be provided on the camera 11102. For example, the image pickup unit 11402 may be disposed right behind the objective lens inside the lens barrel 11101.

The driving unit 11403 includes an actuator, and the driving unit 11403 moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera control unit 11405. Therefore, the magnification and focus of the image captured by the image capturing unit 11402 can be appropriately adjusted.

The communication unit 11404 includes a communication device for transmitting and receiving various kinds of information to and from the CCU11201. The communication unit 11404 transmits the image signal acquired from the image pickup unit 11402 to the CCU11201 as RAW data via the transmission cable 11400.

Further, the communication unit 11404 receives a control signal for controlling driving of the camera 11102 from the CCU11201, and supplies the control signal to the camera control unit 11405. The control signal includes information related to the image capturing conditions, such as information specifying a frame rate of a captured image, information specifying an exposure value at the time of capturing an image, and/or information specifying a magnification and a focus of a captured image.

Note that image capturing conditions such as a frame rate, an exposure value, magnification, or focus may be designated by a user, or may be automatically set by the control unit 11413 of the CCU11201 based on the acquired image signal. In the latter case, an Auto Exposure (AE) function, an Auto Focus (AF) function, and an Auto White Balance (AWB) function are included in the endoscope 11100.

The camera control unit 11405 controls driving of the camera 11102 based on a control signal from the CCU11201 received through the communication unit 11404.

The communication unit 11411 includes a communication device for transmitting and receiving various kinds of information to and from the camera 11102. The communication unit 11411 receives an image signal transmitted from the camera 11102 through the transmission cable 11400.

Further, the communication unit 11411 transmits a control signal for controlling driving of the camera 11102 to the camera 11102. The image signal and the control signal may be transmitted by electrical communication, optical communication, or the like.

The image processing unit 11412 performs various image processes on the image signal in the form of RAW data transmitted thereto from the camera 11102.

The control unit 11413 executes various controls related to image capturing of an operation region or the like by the endoscope 11100 and display of a captured image obtained by image capturing of the operation region or the like. For example, the control unit 11413 generates a control signal for controlling driving of the camera 11102.

Further, based on the image signal on which the image processing has been performed by the image processing unit 11412, the control unit 11413 controls the display device 11202 to display a captured image that images the surgical region or the like. Thereupon, the control unit 11413 may identify various objects in the captured image using various image recognition techniques. For example, the control unit 11413 may recognize a surgical tool such as forceps, a specific living body region, bleeding, mist when the energy device 11112 is used, and the like by detecting the shape, color, and the like of the edge of an object included in a captured image. When controlling the display device 11202 to display the photographed image, the control unit 11413 may cause various kinds of operation support information to be displayed in a manner of being overlapped with the image of the operation region using the recognition result. In the case where the operation support information is displayed and presented to the surgeon 11131 in an overlapping manner, the burden on the surgeon 11131 can be reduced, and the surgeon 11131 can surely perform the operation.

The transmission cable 11400 connecting the camera 11102 and the CCU11201 to each other is an electrical signal cable for electrical signal communication, an optical fiber for optical communication, or a composite cable for both electrical communication and optical communication.

Here, although communication is performed by wired communication using the transmission cable 11400 in the illustrated example, communication between the camera 11102 and the CCU11201 may also be performed by wireless communication.

An example of an endoscopic surgical system to which the technique according to the present invention can be applied has been described above. The technique according to the present invention can be applied to the image pickup unit 11402 in the above-described configuration. Specifically, the light receiving element 1 having the pixel 201 can be used as part of the configuration of the image pickup unit 11402. By applying the technique according to the embodiment of the present invention as part of the configuration of the image pickup unit 11402, the distance to the surgical site can be measured with high accuracy, and a clearer image of the surgical site can be obtained.

It is noted that although an endoscopic surgical system has been described herein as an example, the technique according to the present invention may also be applied to other aspects, such as a microscopic surgical system or the like.

<15. application example of moving body >

The technique according to the embodiment of the present invention (present technique) is applicable to various products. For example, the technology according to the embodiment of the present invention may be implemented as a device mounted on any type of moving body such as an automobile, an electric automobile, a hybrid electric automobile, a motorcycle, a bicycle, a personal mobile device (personal mobility), an airplane, an unmanned aerial vehicle, a boat, and a robot.

Fig. 31 is a block diagram depicting a schematic configuration example of a vehicle control system as an example of a mobile body control system to which the technique according to the embodiment of the invention can be applied.

The vehicle control system 12000 includes a plurality of electronic control units connected to each other via a communication network 12001. In the example shown in fig. 31, the vehicle control system 12000 includes a drive system control unit 12010, a vehicle body system control unit 12020, an outside-vehicle information detection unit 12030, an inside-vehicle information detection unit 12040, and an integrated control unit 12050. Further, a microcomputer 12051, a sound/image output section 12052, and an in-vehicle network interface (I/F)12053 are illustrated as a functional configuration of the integrated control unit 12050.

The drive system control unit 12010 controls the operations of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device of: a driving force generating device that generates a driving force of the vehicle, such as an internal combustion engine or a driving motor; a driving force transmission mechanism that transmits a driving force to a wheel; a steering mechanism that adjusts a steering angle of the vehicle; and a brake device that generates braking force of the vehicle.

The vehicle body system control unit 12020 controls the operations of various devices provided on the vehicle body according to various programs. For example, the vehicle body system control unit 12020 functions as a control device of a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, tail lamps, brake lamps, turn signal lamps, or fog lamps. In this case, radio waves transmitted from a mobile device that substitutes for a key, or signals of various switches, can be input to the vehicle body system control unit 12020. The vehicle body system control unit 12020 receives these radio waves or signals, and controls the door lock device, the power window device, the lamps, and the like of the vehicle.

The vehicle exterior information detection unit 12030 detects information on the outside of the vehicle on which the vehicle control system 12000 is mounted. For example, the vehicle exterior information detection unit 12030 is connected to the imaging unit 12031. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform processing of detecting an object such as a person, a vehicle, an obstacle, a sign, or a character on a road surface, or processing of detecting the distance to such an object.

The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.

The in-vehicle information detection unit 12040 detects information about the interior of the vehicle. For example, the in-vehicle information detection unit 12040 is connected to a driver state detection unit 12041 for detecting the state of the driver. For example, the driver state detection portion 12041 includes a camera that photographs the driver. Based on the detection information input from the driver state detection section 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue of the driver or the degree of concentration of the driver, or may determine whether the driver is dozing.

The microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the brake device based on the information about the inside or outside of the vehicle obtained by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an Advanced Driver Assistance System (ADAS), including collision avoidance or impact mitigation for the vehicle, following travel based on the inter-vehicle distance, travel while maintaining vehicle speed, collision warning for the vehicle, lane departure warning for the vehicle, and the like.

Further, based on the information about the inside or outside of the vehicle obtained by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, the microcomputer 12051 can perform cooperative control for automatic driving, which is intended to cause the vehicle to travel autonomously without depending on the operation of the driver, by controlling the driving force generation device, the steering mechanism, the brake device, and the like.

Further, the microcomputer 12051 can output a control command to the vehicle body system control unit 12020 based on the information on the outside of the vehicle obtained by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at preventing glare by controlling the headlamps so as to switch from high beam to low beam according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.

The sound/image output portion 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying an occupant of the vehicle or a person outside the vehicle of information. In the example of fig. 31, an audio speaker 12061, a display portion 12062, and a dashboard 12063 are illustrated as output devices. For example, the display portion 12062 may include at least one of an on-board display and a head-up display.

Fig. 32 is a diagram depicting an example of the mounting position of the imaging section 12031.

In fig. 32, a vehicle 12100 includes image pickup portions 12101, 12102, 12103, 12104, and 12105 as the image pickup portion 12031.

For example, the image pickup portions 12101, 12102, 12103, 12104, and 12105 are provided at positions on the front nose, the side mirrors, the rear bumper, and the trunk door of the vehicle 12100 and at a position on the upper portion of the windshield inside the vehicle. The image pickup portion 12101 provided on the front nose and the image pickup portion 12105 provided on the upper portion of the windshield inside the vehicle mainly obtain images in front of the vehicle 12100. The image pickup portions 12102 and 12103 provided on the side mirrors mainly obtain images of the sides of the vehicle 12100. The image pickup portion 12104 provided on the rear bumper or the trunk door mainly obtains images behind the vehicle 12100. The front images obtained by the image pickup portions 12101 and 12105 are mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, or the like.

Incidentally, fig. 32 depicts an example of the imaging ranges of the image pickup portions 12101 to 12104. The imaging range 12111 indicates the imaging range of the image pickup portion 12101 provided on the front nose. The imaging ranges 12112 and 12113 indicate the imaging ranges of the image pickup portions 12102 and 12103 provided on the side mirrors, respectively. The imaging range 12114 indicates the imaging range of the image pickup portion 12104 provided on the rear bumper or the trunk door. For example, by superimposing the image data captured by the image pickup portions 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
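A bird's-eye composite of this kind is commonly produced by projecting each camera image onto a common ground plane. The short Python/OpenCV sketch below is only an illustration under assumptions: the per-camera homography matrices and the canvas size stand in for a calibration of the image pickup portions 12101 to 12104 that is not described in this document.

import cv2
import numpy as np

def birds_eye_view(images, homographies, canvas_size=(800, 800)):
    # images: list of BGR frames; homographies: 3x3 ground-plane mappings
    # obtained from an (assumed) extrinsic calibration of each camera.
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, canvas_size)
        mask = warped.any(axis=2)       # pixels covered by this camera
        canvas[mask] = warped[mask]     # later cameras overwrite overlaps
    return canvas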

At least one of the image pickup portions 12101 to 12104 may have a function of obtaining distance information. For example, at least one of the image pickup portions 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.

For example, based on the distance information obtained from the image pickup portions 12101 to 12104, the microcomputer 12051 can determine the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of that distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, the nearest three-dimensional object that is present on the traveling path of the vehicle 12100 and travels at a predetermined speed or higher (for example, 0 km/h or higher) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set in advance a following distance to be maintained to the preceding vehicle, and can execute automatic braking control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for automatic driving intended to cause the vehicle to travel autonomously without depending on the operation of the driver or the like.
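The selection and following behaviour described above can be summarised by the following Python sketch, which is only an assumption-laden illustration; the DetectedObject fields, the heading tolerance, the target gap, and the proportional gain are illustrative values, not parameters of the vehicle control system 12000.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectedObject:
    distance_m: float        # from the distance information of the image pickup portions
    speed_mps: float         # estimated absolute speed of the object
    heading_diff_deg: float  # travel direction relative to the own vehicle
    on_travel_path: bool     # lies on the traveling path of the own vehicle

def select_preceding_vehicle(objs: List[DetectedObject],
                             min_speed_mps: float = 0.0) -> Optional[DetectedObject]:
    # The nearest object on the path, moving in substantially the same direction
    # at or above the predetermined speed, is treated as the preceding vehicle.
    candidates = [o for o in objs
                  if o.on_travel_path
                  and abs(o.heading_diff_deg) < 15.0
                  and o.speed_mps >= min_speed_mps]
    return min(candidates, key=lambda o: o.distance_m, default=None)

def follow_accel_command(preceding: Optional[DetectedObject],
                         target_gap_m: float = 30.0, gain: float = 0.2) -> float:
    # Tiny proportional rule standing in for follow-up start/stop control:
    # positive output accelerates, negative output brakes.
    if preceding is None:
        return 0.0
    return gain * (preceding.distance_m - target_gap_m)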

For example, based on the distance information obtained from the image pickup portions 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, standard-sized vehicles, large-sized vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the classified data, and use it to automatically avoid obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can visually recognize and obstacles that the driver of the vehicle 12100 cannot visually recognize. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle. In a situation where the collision risk is equal to or higher than a set value and there is thus a possibility of a collision, the microcomputer 12051 outputs a warning to the driver via the audio speaker 12061 or the display portion 12062, or performs forced deceleration or avoidance steering via the drive system control unit 12010. In this way, the microcomputer 12051 can assist driving to avoid a collision.
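One common way to express such a collision risk is a time-to-collision (TTC) threshold test. The Python sketch below is only an assumed illustration of that idea, and the thresholds are arbitrary values, not values defined by this disclosure.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    # closing_speed_mps > 0 means the obstacle is getting closer.
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps

def decide_action(distance_m: float, closing_speed_mps: float) -> str:
    ttc = time_to_collision(distance_m, closing_speed_mps)
    if ttc < 1.5:
        return "forced_deceleration"     # via the drive system control unit 12010
    if ttc < 3.0:
        return "warn_driver"             # via the audio speaker 12061 or display portion 12062
    return "no_action"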

At least one of the image pickup portions 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the image pickup portions 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the image pickup portions 12101 to 12104 serving as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points representing the outline of an object to determine whether the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the image pickup portions 12101 to 12104 and thereby recognizes the pedestrian, the sound/image output portion 12052 controls the display portion 12062 so that a square contour line for emphasis is displayed superimposed on the recognized pedestrian. The sound/image output portion 12052 may also control the display portion 12062 so that an icon or the like representing the pedestrian is displayed at a desired position.
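The feature-point extraction and pattern-matching steps themselves are not detailed here. As a stand-in, the sketch below uses OpenCV's stock HOG plus linear-SVM people detector, which is a different but readily available technique, to mark pedestrian candidates with an emphasising contour line; parameters such as the window stride are arbitrary assumptions.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def emphasise_pedestrians(frame):
    # Detect pedestrian candidates and superimpose a square contour line for
    # emphasis, as in the display control by the sound/image output portion 12052.
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    out = frame.copy()
    for (x, y, w, h) in rects:
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return out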

An example of the vehicle control system to which the technique according to the embodiment of the present invention can be applied has been described above. The technique according to the embodiment of the present invention can be applied to the vehicle exterior information detection unit 12030 or the imaging unit 12031 among the above-described configurations. Specifically, the light receiving element 1 having the pixels 201 can be applied to the distance detection processing block of the vehicle exterior information detection unit 12030 or the imaging unit 12031. By applying the technique according to the embodiment of the present invention to the vehicle exterior information detection unit 12030 or the imaging unit 12031, the distance to an object such as a person, a vehicle, an obstacle, a sign, or a character on a road surface can be measured with high accuracy, the obtained distance information can be used to reduce driver fatigue, and the safety of the driver and the vehicle can be improved.

The embodiments of the present technology are not limited to the above-described embodiments, and various changes may be made within a scope not departing from the gist of the present technology.

It is to be noted that the effects described in this specification are merely examples, and the effects are not limited to the effects described in this specification. There may be effects other than those described in the present specification.

Further, the present technology may also be configured as follows.

(1) A light receiving element comprising:

an on-chip lens;

a wiring layer; and

a semiconductor layer disposed between the on-chip lens and the wiring layer,

wherein the semiconductor layer includes:

a first voltage applying unit to which a first voltage is applied,

a second voltage applying unit to which a second voltage different from the first voltage is applied,

a first charge detection unit provided in the vicinity of the first voltage application unit, and

a second charge detection unit provided in the vicinity of the second voltage application unit, and

the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detection unit and the second charge detection unit.

(2) The light receiving element according to (1), wherein the reflection suppressing structure is a film containing polycrystalline silicon.

(3) The light receiving element according to (1), wherein the reflection suppressing structure is a film including a nitride film.

(4) The light receiving element according to (1), wherein the reflection suppressing structure is a structure of: in this structure, a first reflection suppressing film containing polysilicon and a second reflection suppressing film containing a nitride film are stacked in the stacking direction of the wiring layers.

(5) The light-receiving element according to any one of (1) to (4),

wherein the wiring layer includes at least one layer of wiring including a first voltage-applying wiring for supplying the first voltage, a second voltage-applying wiring for supplying the second voltage, and a reflective member, and

the reflection member is not formed in the planar area corresponding to the first charge detection unit and the second charge detection unit.

(6) The light-receiving element according to (5), wherein the one-layer wiring including the first voltage-application wiring, the second voltage-application wiring, and the reflective member is a wiring closest to the semiconductor layer among a plurality of layers of wirings.

(7) The light receiving element according to (5) or (6), wherein the reflecting member is a metal film.

(8) The light-receiving element according to any one of (1) to (7), wherein the semiconductor layer further includes a first buried insulating film that is located between the first voltage application unit and the first charge detection unit and between the second voltage application unit and the second charge detection unit.

(9) The light receiving element according to (8), further comprising:

a light-shielding film within the first buried insulating film.

(10) The light receiving element according to (8) or (9), wherein the semiconductor layer further includes a second buried insulating film, the second buried insulating film being located between the first charge detection unit and the second charge detection unit.

(11) The light receiving element according to (10), further comprising:

a light-shielding film within the second buried insulating film between the first charge detecting unit and the second charge detecting unit.

(12) An electronic device comprising a light receiving element, the light receiving element comprising:

an on-chip lens;

a wiring layer; and

a semiconductor layer disposed between the on-chip lens and the wiring layer,

wherein the semiconductor layer includes:

a first voltage applying unit to which a first voltage is applied,

a second voltage applying unit to which a second voltage different from the first voltage is applied,

a first charge detection unit provided in the vicinity of the first voltage application unit, and

a second charge detection unit provided in the vicinity of the second voltage application unit, and

the wiring layer includes a reflection suppressing structure that suppresses light reflection in a planar area corresponding to the first charge detection unit and the second charge detection unit.

List of reference numerals

1 light receiving element

21 pixel array unit

51 pixel

61 semiconductor substrate

62 on-chip lens

65-1, 65-2, 65 signal extraction unit

71-1, 71-2, 71 N+ semiconductor region

73-1, 73-2, 73 P+ semiconductor region

91 multilayer wiring layer

92 interlayer insulating film

93 voltage applying wiring

94 reflecting member

95 Signal extraction Wiring

96 line

101 pass transistor

102 FD (Floating diffusion)

103 additional capacitor

104 switching transistor

105 reset transistor

106 amplifying transistor

107 select transistor

M1-M5 Metal films

201 pixel

211, 212 reflection suppressing film

213 reflecting member

231, 232 buried insulating film

241 light shielding film

500 ranging module

513 light receiving unit
