Light receiving element, imaging element, and imaging device
Abstract
This technology (light receiving element, imaging element, and imaging device), created by 佐野拓也 on 2018-01-05, relates to a light receiving element, an imaging element, and an imaging device with improved characteristics. A light receiving element according to the present technology includes an on-chip lens; a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer. The semiconductor layer includes a first voltage application portion to which a first voltage is applied, a second voltage application portion to which a second voltage different from the first voltage is applied, a first charge detection portion arranged around the first voltage application portion, and a second charge detection portion arranged around the second voltage application portion. The wiring layer includes at least one layer having a first voltage application wiring configured to supply the first voltage, a second voltage application wiring configured to supply the second voltage, and a reflective member. In a plan view, the reflective member is provided so as to overlap the first charge detection portion or the second charge detection portion. For example, the present technology can be applied to a light receiving element configured to measure distance.
Claims
1. A light receiving element comprising:
an on-chip lens;
a multilayer wiring layer;
a semiconductor layer disposed between the on-chip lens and the multilayer wiring layer;
a photoelectric conversion region provided in the semiconductor layer;
a first voltage applying portion configured to apply a first voltage to the semiconductor layer, the first voltage applying portion being provided in the semiconductor layer;
a second voltage applying portion configured to apply a second voltage to the semiconductor layer, the second voltage applying portion being provided in the semiconductor layer;
a first charge detection portion configured to detect carriers generated in the photoelectric conversion region, the first charge detection portion being provided in the semiconductor layer adjacent to the first voltage applying portion;
a second charge detection portion configured to detect carriers generated in the photoelectric conversion region, the second charge detection portion being provided in the semiconductor layer adjacent to the second voltage applying portion;
a first wiring layer provided in the multilayer wiring layer;
a first wiring connected to the first voltage applying portion and provided in the first wiring layer;
a second wiring connected to the second voltage applying portion and provided in the first wiring layer; and
a reflective member provided in the first wiring layer.
2. The light receiving element according to claim 1, wherein the reflective member at least partially overlaps at least one of the first charge detection portion and the second charge detection portion.
3. The light receiving element according to claim 1, further comprising a plurality of reflective members provided in the first wiring layer, wherein a first reflective member included in the plurality of reflective members at least partially overlaps the first charge detection portion, and wherein a second reflective member included in the plurality of reflective members at least partially overlaps the second charge detection portion.
4. The light receiving element according to claim 1, wherein the reflective member comprises a metal film.
5. The light receiving element according to claim 1, wherein the reflective member is symmetrically arranged in a region on a side of the first charge detection portion and a region on a side of the second charge detection portion.
6. The light receiving element according to claim 1, wherein the reflective member is arranged only in a pixel center region.
7. The light receiving element according to claim 1, wherein the first voltage applying portion, the second voltage applying portion, the first charge detection portion, and the second charge detection portion are in contact with the multilayer wiring layer.
8. The light receiving element according to claim 1, wherein the first wiring layer having the first wiring, the second wiring, and the reflective member is the layer of the multilayer wiring layer closest to the semiconductor layer.
9. The light receiving element according to claim 1, wherein the first voltage applying portion or the second voltage applying portion includes:
a first region containing an acceptor element at a first impurity concentration on the wiring layer side, and
a second region containing an acceptor element at a second impurity concentration lower than the first impurity concentration on the on-chip lens side.
10. The light receiving element according to claim 9, wherein the first charge detection portion or the second charge detection portion includes:
a third region containing a donor element at a third impurity concentration on the wiring layer side, and
a fourth region containing a donor element at a fourth impurity concentration lower than the third impurity concentration on the on-chip lens side.
11. A light receiving element comprising:
an on-chip lens;
a multilayer wiring layer;
a semiconductor layer disposed between the on-chip lens and the multilayer wiring layer;
a first voltage applying section configured to apply a first voltage to the semiconductor layer;
a second voltage applying section configured to apply a second voltage to the semiconductor layer;
a first charge detecting section provided in the semiconductor layer adjacent to the first voltage applying section;
a second charge detecting section provided in the semiconductor layer adjacent to the second voltage applying section;
a first wiring layer provided in the multilayer wiring layer;
a first wiring connected to the first voltage applying section and provided in the first wiring layer;
a second wiring connected to the second voltage applying section and provided in the first wiring layer; and
a reflective member provided in the first wiring layer.
12. The light receiving element according to claim 11, wherein the reflective member at least partially overlaps at least one of the first charge detecting section and the second charge detecting section.
13. The light receiving element according to claim 11, further comprising a plurality of reflective members provided in the first wiring layer, wherein a first reflective member included in the plurality of reflective members at least partially overlaps the first charge detecting section, and wherein a second reflective member included in the plurality of reflective members at least partially overlaps the second charge detecting section.
14. The light receiving element according to claim 11, wherein the reflective member comprises a metal film.
15. The light receiving element according to claim 11, wherein the reflective member is symmetrically arranged in a region on a side of the first charge detecting section and a region on a side of the second charge detecting section.
16. The light receiving element according to claim 11, wherein the reflective member is arranged only in a pixel center region.
17. The light receiving element according to claim 11, wherein the first voltage applying section, the second voltage applying section, the first charge detecting section, and the second charge detecting section are in contact with the multilayer wiring layer.
18. The light receiving element according to claim 11, wherein the first wiring layer having the first wiring, the second wiring, and the reflective member is the layer of the multilayer wiring layer closest to the semiconductor layer.
19. The light receiving element according to claim 11, wherein the first voltage applying section or the second voltage applying section includes:
a first region containing an acceptor element at a first impurity concentration on the wiring layer side, and
a second region containing an acceptor element at a second impurity concentration lower than the first impurity concentration on the on-chip lens side.
20. The light receiving element according to claim 19, wherein the first charge detecting section or the second charge detecting section includes:
a third region containing a donor element at a third impurity concentration on the wiring layer side, and
a fourth region containing a donor element at a fourth impurity concentration lower than the third impurity concentration on the on-chip lens side.
21. A light receiving element comprising:
a semiconductor layer having a light receiving face;
a multilayer wiring layer provided on a face of the semiconductor layer opposite to the light receiving face;
a first voltage applying section configured to apply a first voltage to the semiconductor layer;
a second voltage applying section configured to apply a second voltage to the semiconductor layer;
a first charge detecting section provided in the semiconductor layer adjacent to the first voltage applying section;
a second charge detecting section provided in the semiconductor layer adjacent to the second voltage applying section;
a first wiring layer provided in the multilayer wiring layer;
a first wiring connected to the first voltage applying section and provided in the first wiring layer;
a second wiring connected to the second voltage applying section and provided in the first wiring layer; and
a reflective member provided in the first wiring layer.
22. The light receiving element according to claim 21, wherein the reflective member at least partially overlaps at least one of the first charge detecting section and the second charge detecting section.
23. The light receiving element according to claim 21, further comprising a plurality of reflective members provided in the first wiring layer, wherein a first reflective member included in the plurality of reflective members at least partially overlaps the first charge detecting section, and wherein a second reflective member included in the plurality of reflective members at least partially overlaps the second charge detecting section.
24. The light receiving element according to claim 21, wherein the reflective member comprises a metal film.
25. The light receiving element according to claim 21, wherein the reflective member is symmetrically arranged in a region on a side of the first charge detecting section and a region on a side of the second charge detecting section.
26. The light receiving element according to claim 21, wherein the reflective member is arranged only in a pixel center region.
27. The light receiving element according to claim 21, wherein the first voltage applying section, the second voltage applying section, the first charge detecting section, and the second charge detecting section are in contact with the multilayer wiring layer.
28. The light receiving element according to claim 21, wherein the first wiring layer having the first wiring, the second wiring, and the reflective member is the layer of the multilayer wiring layer closest to the semiconductor layer.
29. The light receiving element according to claim 21, wherein the first voltage applying section or the second voltage applying section includes:
a first region containing an acceptor element at a first impurity concentration on the wiring layer side, and
a second region containing an acceptor element at a second impurity concentration lower than the first impurity concentration on the on-chip lens side.
30. The light receiving element according to claim 29, wherein the first charge detecting section or the second charge detecting section includes:
a third region containing a donor element at a third impurity concentration on the wiring layer side, and
a fourth region containing a donor element at a fourth impurity concentration lower than the third impurity concentration on the on-chip lens side.
Technical Field
The present technology relates to a light receiving element, an imaging element, and an imaging device, and particularly to a light receiving element, an imaging element, and an imaging device whose characteristics can be improved.
Background
Distance measurement systems using an indirect time-of-flight (ToF) method are known. Such a system requires a sensor that can sort, into different regions at high speed, the signal charges obtained by receiving active light that is emitted with a specific phase from a light emitting diode (LED) or a laser and reflected by an object.
Therefore, for example, a technique has been proposed in which a voltage is applied directly to the substrate of the sensor to generate a current in the substrate, so that a wide region within the substrate can be modulated at high speed (see, for example, Patent Document 1). Such a sensor is also called a current assisted photonic demodulator (CAPD) sensor.
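The phase-sorting measurement described above can be illustrated with a short sketch. This is not taken from the patent itself: the two-tap charge-ratio formula is one common indirect-ToF formulation, and all numeric values (pulse width, electron counts) are hypothetical.

```python
# Illustrative sketch of indirect-ToF distance estimation from charges
# sorted into two taps by a CAPD-type sensor. Values are hypothetical.
C = 299_792_458.0  # speed of light [m/s]

def itof_distance(q_a, q_b, pulse_width_s):
    """Distance from charge sorted into taps A and B (one common two-tap
    formulation): round-trip delay = T * qB / (qA + qB), distance = c*delay/2."""
    total = q_a + q_b
    if total == 0:
        raise ValueError("no signal charge collected")
    return C * pulse_width_s * (q_b / total) / 2.0

# Example: 30 ns pulse, charges split 3:1 between the taps.
d = itof_distance(q_a=3000, q_b=1000, pulse_width_s=30e-9)  # ~1.12 m
```

In this formulation, the faster and more cleanly the sensor can sort charge between the two taps, the less the ratio qB/(qA + qB) is blurred, which is why high-speed modulation of the substrate is emphasized.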
List of cited documents
Patent document
Patent Document 1: Japanese Patent Application Laid-Open No. 2011-
Disclosure of Invention
Problems to be solved by the invention
However, in the above-described technique, it is difficult to obtain a CAPD sensor having sufficient characteristics.
For example, the above-described CAPD sensor is a front-illuminated sensor in which wiring and the like are arranged on the surface of the substrate that receives light from the outside.
In order to secure the photoelectric conversion region, it is desirable that no member that blocks the optical path of incident light, such as wiring, be provided on the light receiving surface side of the photodiode (PD), i.e., the photoelectric conversion unit. However, in the front-illuminated CAPD sensor, wiring for charge extraction, various control lines, and signal lines must, depending on the configuration, be arranged on the light receiving surface side of the PD, which limits the photoelectric conversion region. That is, a sufficient photoelectric conversion region cannot be secured, and characteristics such as pixel sensitivity may be degraded.
In addition, considering that the CAPD sensor is used in the presence of external light, the external light component becomes a noise component for the indirect ToF method, which measures distance using active light. Therefore, in order to obtain distance information while ensuring a sufficient signal-to-noise ratio (SN ratio), a sufficient saturation signal amount (Qs) must be secured. However, the front-illuminated CAPD sensor is limited in wiring layout, so a means other than wiring capacitance must be devised, for example, providing an additional transistor for securing capacitance.
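The role of the saturation signal amount under external light can be sketched with a shot-noise-only toy model. This is an illustrative assumption, not an analysis from the patent, and every electron count below is made up.

```python
import math

# Rough sketch of why a large saturation signal amount (Qs) matters under
# external light. Shot-noise-only model; all electron counts hypothetical.
def sn_ratio(signal_e, ambient_e, q_sat_e):
    """SN ratio when the pixel clips at q_sat_e electrons.

    Everything collected (signal + ambient) contributes shot noise; a small
    Qs clips away signal while the ambient floor remains, hurting the ratio.
    """
    total = min(signal_e + ambient_e, q_sat_e)
    kept_signal = max(total - ambient_e, 0)
    return kept_signal / math.sqrt(total) if total > 0 else 0.0

high_qs = sn_ratio(signal_e=5000, ambient_e=20000, q_sat_e=30000)
low_qs = sn_ratio(signal_e=5000, ambient_e=20000, q_sat_e=22000)
# high_qs exceeds low_qs: securing a larger Qs preserves more signal.
```

Under this model, enlarging Qs lets more of the active-light signal survive above the ambient floor, which is the motivation for securing extra capacitance in the text above.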
In the front-illuminated CAPD sensor, a signal extraction portion called a tap is arranged on the side of the substrate on which light is incident. On the other hand, in photoelectric conversion within a Si substrate, although the attenuation rate differs depending on the wavelength of the light, a larger proportion of the photoelectric conversion occurs on the light incident surface side. For this reason, in the front-illuminated CAPD sensor, there is a higher probability that photoelectric conversion occurs in the inactive tap region, i.e., the tap region into which signal charges are not being sorted. In an indirect ToF sensor, distance measurement information is obtained from the signals sorted into the respective charge accumulation regions according to the phase of the active light, so a component photoelectrically converted directly in the inactive tap region becomes noise, and as a result the distance measurement accuracy may deteriorate. That is, the characteristics of the CAPD sensor may be degraded.
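The effect of inactive-tap conversion on ranging accuracy can be shown with a small self-contained toy model. The assumption here (stray charge landing roughly equally in both taps because it carries no phase information) and all numbers are illustrative, not from the patent.

```python
# Sketch of how charge photo-converted in the inactive tap degrades ranging
# accuracy. Self-contained toy model; every value is hypothetical.
C = 299_792_458.0  # speed of light [m/s]
T = 30e-9          # emitted pulse width [s], illustrative

def estimate_m(q_a, q_b):
    # two-tap estimate: delay = T * qB / (qA + qB), distance = c * delay / 2
    return C * T * q_b / (q_a + q_b) / 2.0

clean = estimate_m(3000, 1000)
# Stray charge from the inactive tap carries no phase information, so model
# it as landing in both taps equally; it drags the estimate toward mid-range
# instead of averaging out.
biased = estimate_m(3000 + 500, 1000 + 500)
```

Because the stray component shifts the estimate rather than cancelling, reducing conversion in the inactive tap region directly improves distance accuracy, which motivates the back-illuminated configuration introduced later.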
The present technology has been made in consideration of such a situation, and aims to improve characteristics.
Means for solving the problems
A light receiving element of a first aspect of the present technology includes:
an on-chip lens;
a wiring layer; and
a semiconductor layer disposed between the on-chip lens and the wiring layer,
wherein the semiconductor layer comprises
a first voltage application section to which a first voltage is applied,
a second voltage application section to which a second voltage different from the first voltage is applied,
a first charge detection section arranged around the first voltage application section, and
a second charge detection section arranged around the second voltage application section,
the wiring layer includes
at least one layer having a first voltage application wiring configured to supply the first voltage, a second voltage application wiring configured to supply the second voltage, and a reflective member, and
the reflective member is provided so as to overlap the first charge detection section or the second charge detection section in a plan view.
In the first aspect of the present technology, an on-chip lens, a wiring layer, and a semiconductor layer arranged between the on-chip lens and the wiring layer are provided, and a first voltage application section to which a first voltage is applied, a second voltage application section to which a second voltage different from the first voltage is applied, a first charge detection section arranged around the first voltage application section, and a second charge detection section arranged around the second voltage application section are provided in the semiconductor layer. At least one layer having a first voltage application wiring configured to supply a first voltage, a second voltage application wiring configured to supply a second voltage, and a reflective member is provided in the wiring layer, and the reflective member is provided so as to overlap with the first charge detection section or the second charge detection section in a plan view.
An imaging element of a second aspect of the present technology includes:
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light,
wherein the pixel comprises
a substrate configured to perform photoelectric conversion on incident light, and
a signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting a signal carrier generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident, within the substrate.
Two signal extraction sections may be formed in the pixel.
A signal extraction section may be formed in the pixel.
Three or more signal extraction portions may be formed in the pixel.
The signal extraction section may be shared between the pixel and another pixel adjacent to the pixel.
The voltage applying part may be shared between the pixel and another pixel adjacent to the pixel.
In the signal extraction section, a P-type semiconductor region as the voltage application section and an N-type semiconductor region as the charge detection section may be provided, the N-type semiconductor region being formed so as to surround the P-type semiconductor region.
In the signal extraction section, an N-type semiconductor region as the charge detection section and a P-type semiconductor region as the voltage application section may be provided, the P-type semiconductor region being formed so as to surround the N-type semiconductor region.
In the signal extraction section, a first N-type semiconductor region and a second N-type semiconductor region as the charge detection section, and a P-type semiconductor region as the voltage application section may be provided, the P-type semiconductor region being formed at a position sandwiched between the first N-type semiconductor region and the second N-type semiconductor region.
In the signal extraction section, a first P-type semiconductor region and a second P-type semiconductor region as the voltage application section, and an N-type semiconductor region as the charge detection section may be provided, the N-type semiconductor region being formed at a position sandwiched between the first P-type semiconductor region and the second P-type semiconductor region.
A voltage may be applied to the incident surface side in the substrate.
In the pixel, a reflecting member configured to reflect light incident on the substrate from the incident surface may be further provided, the reflecting member being formed on a surface of the substrate on a side opposite to the incident surface.
The signal carriers may comprise electrons.
The signal carriers may include holes.
In the pixel, a lens configured to condense light and make the light incident on the substrate may be further provided.
In the pixel, there may be further provided an inter-pixel light-shielding portion configured to shield incident light, the inter-pixel light-shielding portion being formed at a pixel end portion on the incident surface of the substrate.
In the pixel, a pixel separating region configured to penetrate at least a portion of the substrate and block incident light may be further provided, the pixel separating region being formed at a pixel end portion within the substrate.
The substrate may include a P-type semiconductor substrate having a resistivity of 500 Ω·cm or more.
The substrate may include an N-type semiconductor substrate having a resistivity of 500 Ω·cm or more.
In a second aspect of the present technique,
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light is provided in the imaging element, and
a substrate configured to perform photoelectric conversion on incident light and a signal extraction portion are provided in the pixel,
the signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting signal carriers generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident.
An imaging apparatus of a third aspect of the present technology includes:
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light; and
a signal processing section configured to calculate distance information to an object based on a signal output from the pixel,
wherein the pixel comprises
a substrate configured to perform photoelectric conversion on incident light, and
a signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting a signal carrier generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident, within the substrate.
In a third aspect of the present technique,
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light and a signal processing section configured to calculate distance information to a subject based on signals output from the pixels are provided in the imaging device, and
a substrate configured to perform photoelectric conversion on incident light and a signal extraction portion are provided in the pixel,
the signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting signal carriers generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident.
Advantageous Effects of Invention
According to the first to third aspects of the present technology, characteristics can be improved.
Note that the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may apply.
Drawings
Fig. 1 is a diagram illustrating a configuration example of a solid-state imaging element.
Fig. 2 is a diagram showing an example of the pixel configuration.
Fig. 3 is a diagram showing an example of a configuration of a part of the signal extraction section of a pixel.
Fig. 4 is a graph showing the improvement in sensitivity.
Fig. 5 is a diagram illustrating an improvement in charge separation efficiency.
Fig. 6 is a diagram illustrating improvement in electron extraction efficiency.
Fig. 7 is a diagram illustrating the moving speed of signal carriers in the front-illuminated type.
Fig. 8 is a diagram illustrating the moving speed of signal carriers in the back-illuminated type.
Fig. 9 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 10 is a diagram illustrating a relationship between pixels and on-chip lenses.
Fig. 11 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 12 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 13 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 14 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 15 is a diagram showing another configuration example of a part of the signal extraction section of the pixel.
Fig. 16 is a diagram showing another configuration example of the pixel.
Fig. 17 is a diagram showing another configuration example of the pixel.
Fig. 18 is a diagram showing another configuration example of the pixel.
Fig. 19 is a diagram showing another configuration example of the pixel.
Fig. 20 is a diagram showing another configuration example of the pixel.
Fig. 21 is a diagram showing another configuration example of the pixel.
Fig. 22 is a diagram showing another configuration example of the pixel.
Fig. 23 is a diagram showing another configuration example of the pixel.
Fig. 24 is a diagram showing another configuration example of the pixel.
Fig. 25 is a diagram showing another configuration example of the pixel.
Fig. 26 is a diagram showing another configuration example of the pixel.
Fig. 27 is a diagram showing another configuration example of the pixel.
Fig. 28 is a diagram showing another configuration example of the pixel.
Fig. 29 is a diagram showing another configuration example of the pixel.
Fig. 30 is a diagram showing another configuration example of the pixel.
Fig. 31 is a diagram showing an equivalent circuit of a pixel.
Fig. 32 is a diagram showing another equivalent circuit of the pixel.
Fig. 33 is a diagram showing an example of the arrangement of voltage supply lines used in the periodic arrangement.
Fig. 34 is a diagram showing an example of the arrangement of voltage supply lines used in the mirror image arrangement.
Fig. 35 is a diagram illustrating characteristics of the periodic configuration and the mirror configuration.
Fig. 36 is a cross-sectional view of a plurality of pixels in a fourteenth embodiment.
Fig. 37 is a cross-sectional view of a plurality of pixels in a fourteenth embodiment.
Fig. 38 is a cross-sectional view of a plurality of pixels in the ninth embodiment.
Fig. 39 is a cross-sectional view of a plurality of pixels in modification 1 of the ninth embodiment.
Fig. 40 is a cross-sectional view of a plurality of pixels in a fifteenth embodiment.
Fig. 41 is a cross-sectional view of a plurality of pixels in the tenth embodiment.
Fig. 42 is a diagram illustrating five metal films of a multilayer wiring layer.
Fig. 43 is a diagram illustrating five metal films of a multilayer wiring layer.
Fig. 44 is a diagram illustrating a polysilicon layer.
Fig. 45 is a diagram showing a modification of the reflective member formed on the metal film.
Fig. 46 is a diagram showing a modification of the reflective member formed on the metal film.
Fig. 47 is a diagram illustrating a substrate configuration of the solid-state imaging element.
Fig. 48 is a block diagram showing an example of the configuration of the distance measuring module.
Fig. 49 is a block diagram showing an example of a schematic configuration of the vehicle control system.
Fig. 50 is a diagram illustrating an example of mounting positions of the vehicle exterior information detecting unit and the imaging portion.
Detailed Description
Hereinafter, embodiments to which the present technology is applied will be described with reference to the drawings.
< first embodiment >
< example of construction of solid-state imaging element >
The present technology aims to improve characteristics such as pixel sensitivity by using a CAPD sensor having a back-illuminated configuration.
For example, the present technology can be applied to a solid-state imaging element constituting a distance measurement system that measures a distance by an indirect ToF method, an imaging apparatus including such a solid-state imaging element, and the like.
For example, the distance measurement system may be mounted on a vehicle and applied to an in-vehicle system that measures the distance to an object outside the vehicle, to a gesture recognition system that measures the distance to an object (e.g., a user's hand) and recognizes the user's gesture from the measurement result, and the like. In this case, the result of the gesture recognition may be used, for example, to operate a car navigation system.
Fig. 1 is a diagram showing an example of the configuration of one embodiment of a solid-state imaging element (light receiving element) to which the present technology is applied.
The solid-
The solid-
A
In the
Here, the row direction means a pixel arrangement direction of a pixel row (i.e., a horizontal direction), and the column direction means a pixel arrangement direction of a pixel column (i.e., a vertical direction). The row direction is the lateral direction in the figure, and the column direction is the longitudinal direction in the figure.
In the
The
Further, in the distance measurement of the indirect ToF method, the number of elements (CAPD elements) of high-speed driving connected to one control line affects controllability of the high-speed driving or driving accuracy. There are many cases where the solid-state imaging element used in the distance measurement of the indirect ToF method is formed as a long pixel array in the horizontal direction. Therefore, in this case, the
Signals output from the respective pixels of the pixel row in response to drive control by the
Specifically, the
The
The
The
< example of Pixel construction >
Next, an example of the configuration of the pixels provided in the
Fig. 2 shows a cross section of one
The
For example, in the drawing, the thickness of the
In addition, the
Here, in the relationship between the substrate concentration and the resistance of the
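The truncated sentence above concerns how the substrate concentration relates to its resistance. The standard textbook relation between doping concentration and resistivity, ρ = 1 / (q·μ·N), can be sketched as follows; the hole mobility used is an assumed typical value for lightly doped silicon, not a figure taken from this document.

```python
# Standard relationship between doping concentration and resistivity,
# rho = 1 / (q * mu * N). The mobility here is a typical textbook value
# for lightly doped p-type Si; it is an assumption, not from the patent.
Q_E = 1.602e-19  # elementary charge [C]
MU_P = 480.0     # approximate hole mobility in Si [cm^2/(V*s)]

def p_type_resistivity_ohm_cm(acceptors_per_cm3):
    """Resistivity [ohm*cm] of p-type Si at a given acceptor concentration."""
    return 1.0 / (Q_E * MU_P * acceptors_per_cm3)

# A substrate of 500 ohm*cm or more implies a very low doping level:
rho = p_type_resistivity_ohm_cm(2.5e13)  # acceptors per cm^3
```

This illustrates why a high-resistivity substrate corresponds to a very low impurity concentration, on the order of 1e13 atoms/cm³ in this sketch.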
In the figure, an upper front surface of a
Further, in the
In this example, light from the outside is incident on the
The solid-
On the surface of the
In this example, the
Here, the signal extraction section 65-1 includes an N+ semiconductor region 71-1 as an N-type semiconductor region, an N− semiconductor region 72-1 having a donor impurity concentration lower than that of the N+ semiconductor region 71-1, a P+ semiconductor region 73-1 as a P-type semiconductor region, and a P− semiconductor region 74-1 having an acceptor impurity concentration lower than that of the P+ semiconductor region 73-1. Here, examples of the donor impurity include elements belonging to group 5 of the periodic table with respect to Si, such as phosphorus (P) or arsenic (As), and examples of the acceptor impurity include an element belonging to
In other words, the N+ semiconductor region 71-1 is formed in a position adjacent to the right side of the
Further, a P+ semiconductor region 73-1 is formed in a front inner side portion of the surface of the
Further, although not shown here, in more detail, when the
Similarly, the signal extraction section 65-2 includes an N + semiconductor region 71-2 as an N-type semiconductor region, an N-semiconductor region 72-2 having a donor impurity concentration lower than the N + semiconductor region 71-2, a P + semiconductor region 73-2 as a P-type semiconductor region, and a P-semiconductor region 74-2 having an acceptor impurity concentration lower than the P + semiconductor region 73-2.
In other words, the N + semiconductor region 71-2 is formed in a position adjacent to the left side of the
Further, a P + semiconductor region 73-2 is formed in a front inner side portion of the surface of the
Further, although not shown here, in more detail, when the
Hereinafter, the signal extraction section 65-1 and the signal extraction section 65-2 are also simply referred to as the signal extraction section 65 without particularly distinguishing the signal extraction section 65-1 and the signal extraction section 65-2.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the N + semiconductor region 71-1 from the N + semiconductor region 71-2, the N + semiconductor region 71-1 and the N + semiconductor region 71-2 are also simply referred to as the N + semiconductor region 71, and in the case where it is not necessary to particularly distinguish the N-semiconductor region 72-1 from the N-semiconductor region 72-2, the N-semiconductor region 72-1 and the N-semiconductor region 72-2 are also simply referred to as the N-semiconductor region 72.
Further, hereinafter, in the case where it is not necessary to particularly distinguish between the P + semiconductor region 73-1 and the P + semiconductor region 73-2, the P + semiconductor region 73-1 and the P + semiconductor region 73-2 are also simply referred to as the P + semiconductor region 73, and in the case where it is not necessary to particularly distinguish between the P-semiconductor region 74-1 and the P-semiconductor region 74-2, the P-semiconductor region 74-1 and the P-semiconductor region 74-2 are also simply referred to as the P-semiconductor region 74.
Further, in the
The N + semiconductor region 71 provided on the
In the
Similarly, another FD section (hereinafter, particularly referred to as FD section B) different from FD section A is directly connected to the N + semiconductor region 71-2, and FD section B is connected to
For example, in the case of measuring the distance to the subject by the indirect ToF method, infrared light is emitted toward the subject from the imaging device provided with the solid-
At this time, the
For example, at a certain timing, the
Then, an electric field is generated between the two P + semiconductor regions 73 in the
Therefore, in this state, in the case where infrared light (reflected light) from the outside is incident on the
In this case, electrons generated by photoelectric conversion serve as signal carriers for detecting a signal corresponding to the amount of infrared light incident on the
With this configuration, electric charges corresponding to electrons moved into the N + semiconductor region 71-1 are accumulated in the N + semiconductor region 71-1, and the electric charges are detected by the
In other words, the accumulated charges of the N + semiconductor region 71-1 are transferred to FD section A, which is directly connected to the N + semiconductor region 71-1, and signals corresponding to the charges transferred to FD section A are read out by the
The pixel signal is a signal representing the amount of charge corresponding to the electrons detected by the N + semiconductor region 71-1, that is, the amount of charge accumulated in FD section A. In other words, the pixel signal may be a signal representing the amount of infrared light received by the
Further, at this time, as with the N + semiconductor region 71-1, a pixel signal corresponding to an electron detected by the N + semiconductor region 71-2 can be appropriately used for measuring the distance.
Further, at the next timing, a voltage is applied to the two P + semiconductor regions 73 via the contact points or the like by the
With this configuration, an electric field is generated between the two P + semiconductor regions 73 of the
In this state, in a case where infrared light (reflected light) from the outside is incident on the
With this configuration, electric charges corresponding to electrons moved into the N + semiconductor region 71-2 are accumulated in the N + semiconductor region 71-2, and the electric charges are detected by the
In other words, the accumulated charges of the N + semiconductor region 71-2 are transferred to the FD section B directly connected to the N + semiconductor region 71-2, and signals corresponding to the charges transferred to the FD section B are read out by the
Further, at this time, as with the N + semiconductor region 71-2, a pixel signal corresponding to an electron detected by the N + semiconductor region 71-1 can be appropriately used for measuring the distance.
Therefore, when pixel signals obtained by photoelectric conversion of different time periods are obtained in the
In this manner, the method of sorting signal carriers into different N + semiconductor regions 71 in accordance with the timing of photoelectric conversion and calculating distance information on the basis of signals corresponding to those signal carriers is referred to as the indirect ToF method.
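The readout sequence described above can be summarized numerically. The following is a hypothetical sketch (the function name, the pulsed-light model, and all numeric values are assumptions for illustration, not taken from this description) of how the two charge packets accumulated through the two FD sections could be converted into a distance:

```python
# Hypothetical two-tap pulsed indirect ToF model (assumed, not from the text):
# charge collected while the emitted pulse window is open goes to one tap,
# charge collected in the immediately following window goes to the other tap.

C_LIGHT = 299_792_458.0  # speed of light in m/s


def indirect_tof_distance(q_a: float, q_b: float, pulse_width_s: float) -> float:
    """Estimate distance from the charges detected in the two taps.

    q_a: charge accumulated during the first drive timing (first tap)
    q_b: charge accumulated during the second drive timing (second tap)
    pulse_width_s: width of the emitted infrared pulse in seconds

    With a pulsed source, the fraction of reflected light spilling into the
    second window grows with the round-trip delay, so
        distance = (c * pulse_width / 2) * q_b / (q_a + q_b)
    """
    total = q_a + q_b
    if total <= 0:
        raise ValueError("no detected charge")
    return (C_LIGHT * pulse_width_s / 2.0) * (q_b / total)


# Example: equal charge in both windows places the object at half the
# unambiguous range (c * 30 ns / 2 ~= 4.5 m, so d ~= 2.25 m).
d = indirect_tof_distance(q_a=500.0, q_b=500.0, pulse_width_s=30e-9)
```

Real sensors typically use more phase samples and calibration; this sketch only shows why two charge packets from different time windows suffice in principle.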
Further, here, the following examples have been explained: the application of the voltage with respect to the P + semiconductor region 73 is controlled by the
Further, when a part of the signal extraction section 65 of the
In the example shown in fig. 3, the oxide film 64 (not shown) is formed in the central portion of the
Then, in each signal extraction section 65, the P + semiconductor region 73 is formed in a rectangular shape at the central position, and is surrounded by the N + semiconductor region 71 formed in a rectangular frame shape. In other words, the N + semiconductor region 71 is formed so as to surround the P + semiconductor region 73.
Further, in the
Therefore, the infrared light is condensed to a position between the signal extraction section 65-1 and the signal extraction section 65-2. With this configuration, it is possible to prevent the infrared light from being incident on the pixels adjacent to the
For example, in the case where infrared light is directly incident on the signal extraction section 65, the charge separation efficiency, that is, Cmod (the contrast between the active well and the inactive well, also referred to as the modulation contrast), decreases.
Here, the signal extraction section 65 (well) that reads out a signal corresponding to the electric charges (electrons) obtained by photoelectric conversion, that is, the signal extraction section 65 that detects the electric charges obtained by photoelectric conversion is also referred to as an active well.
In contrast, the signal extraction section 65 (well) that does not substantially read out a signal corresponding to the electric charge obtained by photoelectric conversion, that is, the signal extraction section 65 that is not an active well, is also referred to as an inactive well.
In the above example, the signal extraction section 65 that applies a voltage of 1.5V to the P + semiconductor region 73 is an active well, and the signal extraction section 65 that applies a voltage of 0V to the P + semiconductor region 73 is an inactive well.
Cmod is an index indicating what percentage of the charges generated by photoelectric conversion of the incident infrared light can be detected by the N + semiconductor region 71 of the signal extraction section 65 serving as the active well, that is, whether a signal corresponding to those charges can be extracted. Cmod therefore represents the charge separation efficiency.
Therefore, for example, in the case where infrared light incident from the outside enters the region of the inactive well and photoelectric conversion is performed in the inactive well, there is a high possibility that the electrons serving as signal carriers generated by the photoelectric conversion move to the N + semiconductor region 71 within the inactive well. As a result, the charges of some of the electrons obtained by photoelectric conversion are not detected by the N + semiconductor region 71 in the active well, and Cmod (i.e., the charge separation efficiency) decreases.
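As a rough numeric illustration (the function name and the particular contrast formula are assumptions for this sketch, not definitions from this description), the charge separation efficiency can be expressed as the contrast between the charge collected by the active well and the charge that leaks to the inactive well:

```python
# Hypothetical sketch of a modulation-contrast metric. One common way to
# express charge separation efficiency is the normalized difference between
# the charge detected by the active well and by the inactive well.

def cmod(q_active: float, q_inactive: float) -> float:
    """Return a modulation contrast in the range 0..1.

    q_active:   charge detected by the N+ region of the active well
    q_inactive: charge that leaked to the N+ region of the inactive well
    """
    total = q_active + q_inactive
    if total <= 0:
        raise ValueError("no detected charge")
    return (q_active - q_inactive) / total


# Perfect separation: all charge reaches the active well.
perfect = cmod(100.0, 0.0)        # 1.0
# Photoelectric conversion near the inactive well leaks charge into it
# and lowers the contrast, e.g. a 90/10 split.
leaky = cmod(90.0, 10.0)          # 0.8
```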
Therefore, in the
According to the solid-
In other words, first, the solid-
For example, as shown by an arrow W11 in fig. 4, a general front-illuminated image sensor has a structure in which a wiring 102 or a wiring 103 is formed on an incident surface side of a PD101 as a photoelectric conversion unit on which light from the outside is incident.
For this reason, for example, as shown by an arrow a21 or an arrow a22, a part of light incident on the PD101 obliquely at a certain angle from the outside may not be incident on the PD101 due to shielding of the wiring 102 or the wiring 103.
In contrast, the back-illuminated image sensor, for example, as shown by an arrow W12, has a structure in which a wiring 105 or a wiring 106 is formed on a surface of the PD104 as a photoelectric conversion unit on the side opposite to the incident face on which light is incident from the outside.
Therefore, a sufficient aperture ratio can be ensured compared to the front-illuminated image sensor. That is, for example, as shown by an arrow a23 or an arrow a24, light incident on the PD104 obliquely at a certain angle from the outside is incident on the PD104 without being blocked by the wiring. With this configuration, the pixel sensitivity can be improved by receiving more light.
Such an improvement effect of the pixel sensitivity obtained by the back-illuminated image sensor can also be obtained in the solid-
In addition, for example, in the front-illuminated CAPD sensor, as indicated by an arrow W13, in the PD 111 as the photoelectric conversion unit, a signal extraction portion 112 called a well, more specifically, a P + semiconductor region or an N + semiconductor region of the well is formed on the incident surface side on which light from the outside is incident. In addition, the front-illuminated CAPD sensor has a structure in which a wiring 113 or a wiring 114 (e.g., a contact point or a metal) connected to the signal extraction section 112 is formed on the incident surface side.
For this reason, for example, as shown by an arrow a25 or an arrow a26, a part of the light obliquely incident on the PD 111 from the outside is not incident on the PD 111 because it is shielded by the wiring 113 or the like, and, as shown by an arrow a27, even light perpendicularly incident on the PD 111 does not reach the PD 111 because it is shielded by the wiring 114.
In contrast, the back-illuminated CAPD sensor, for example, as shown by an arrow W14, has a structure in which a signal extraction section 116 is formed in a part of the surface of the PD115, which is a photoelectric conversion unit, on the side opposite to the incident face on which light from the outside is incident. In addition, a wiring 117 or a wiring 118 (e.g., a contact point or a metal) connected to the signal extraction section 116 is formed on the surface of the PD115 on the side opposite to the incident surface.
Here, the PD115 corresponds to the
In the back-illuminated CAPD sensor having such a configuration, a sufficient aperture ratio can be secured as compared with the front-illuminated sensor. Accordingly, it is possible to maximize Quantum Efficiency (QE) × aperture ratio (FF) and improve distance measurement characteristics.
In other words, for example, as shown by an arrow a28 or an arrow a29, light obliquely incident on the PD115 at an angle from the outside is incident on the PD115 without being shielded by the wiring. Similarly, as shown by an arrow a30, light perpendicularly incident on the PD115 is also incident on the PD115 without being shielded by wiring or the like.
Therefore, in the back-illuminated CAPD sensor, it is possible to receive not only light incident at a certain angle but also light perpendicularly incident on the PD115, which in the front-illuminated sensor would be reflected by the wiring or the like connected to the signal extraction section (well). With this configuration, the pixel sensitivity can be improved by receiving more light. In other words, Quantum Efficiency (QE) × aperture ratio (FF) can be maximized, thereby improving the distance measurement characteristics.
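The figure of merit referred to here is simply the product QE × FF. A minimal illustration follows; all numeric values are invented for the example and are not taken from this description:

```python
# Illustrative only: sensitivity is proportional to the product of quantum
# efficiency (QE) and aperture ratio / fill factor (FF). Back illumination
# raises FF because no wiring shades the photodiode. The numbers below are
# assumed values, not measurements from the text.

def sensitivity_figure(qe: float, fill_factor: float) -> float:
    """Return QE x FF, proportional to the fraction of incident light
    that contributes to the signal."""
    return qe * fill_factor


front_illuminated = sensitivity_figure(qe=0.6, fill_factor=0.4)  # wiring shades the PD
back_illuminated = sensitivity_figure(qe=0.6, fill_factor=0.9)   # full surface exposed
# With the same QE, the higher aperture ratio yields more signal.
assert back_illuminated > front_illuminated
```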
In particular, in the case where the wells are arranged near the center of the pixel rather than at its outer edge, the front-illuminated CAPD sensor cannot secure a sufficient aperture ratio and the pixel sensitivity decreases, but in the solid-
In addition, the signal extraction portion 65 is formed in the vicinity of the surface of the
Fig. 5 shows a cross-sectional view of a pixel of a front-illuminated CAPD sensor and a back-illuminated CAPD sensor.
In the front-illuminated CAPD sensor on the left side in fig. 5, the upper side of the substrate 141 is a light incident surface in the drawing, and a
In the back-illuminated CAPD sensor on the right side of fig. 5, in the drawing, a
Further, in fig. 5, a gray trapezoid shows a region in which infrared light is condensed by the on-
For example, in the front-illuminated CAPD sensor, the region R11 in which the inactive well and the active well are present is provided on the incident surface side of the substrate 141. For this reason, in the case where a large portion of the light is directly incident on the inactive well and photoelectric conversion is performed in the region of the inactive well, the signal carriers obtained by the photoelectric conversion are not detected by the N + semiconductor region of the active well.
In the front-illuminated CAPD sensor, the intensity of infrared light is strong in the region R11 near the incident surface of the substrate 141, and therefore, the probability that the infrared light is photoelectrically converted in the region R11 increases. That is, the amount of infrared light incident in the vicinity of the inactive well is large, so the number of signal carriers that cannot be detected by the active well increases, and the charge separation efficiency decreases.
In contrast, in the back-illuminated CAPD sensor, the region R12 where the inactive well and the active well are present is provided at a position distant from the incident surface of the
In this example, the region R12 is provided in a part of the surface of the
In a region where the intensity of infrared light is strong, such as near the center or near the incident surface of the
On the other hand, in the vicinity of the region R12 including the inactive well, the intensity of the incident infrared light is relatively weak, and therefore, the probability that the infrared light is photoelectrically converted in the region R12 decreases. That is, the amount of infrared light incident in the vicinity of the inactive well is small, so the number of signal carriers (electrons) generated by photoelectric conversion in the vicinity of the inactive well and moving to the N + semiconductor region of the inactive well decreases, and the charge separation efficiency can be improved. As a result, the distance measurement characteristics can be improved.
Further, in the back-illuminated solid-
For example, in the front-side illumination type CAPD sensor, the aperture ratio cannot be sufficiently secured, and therefore, as shown by an arrow W31 in fig. 6, in order to secure a higher quantum efficiency and suppress a decrease in the quantum efficiency × aperture ratio, the thickness of the
Then, in a region near the surface of the
In addition, when the
Fig. 7 shows the relationship between the position in the thickness direction of the
Therefore, in the case where the
In contrast, in the back-illuminated CAPD sensor, a sufficient aperture ratio can be ensured, and therefore, for example, as shown by an arrow W32 in fig. 6, even in the case where the substrate 172 is thin, a sufficient quantum efficiency × aperture ratio can be ensured. Here, the substrate 172 corresponds to the
Fig. 8 shows the relationship between the position in the thickness direction of the substrate 172 and the moving speed of the signal carriers.
Therefore, in the case where the substrate 172 is thin in the direction perpendicular to the substrate 172, the electric field in the direction substantially perpendicular to the substrate 172 becomes strong, so that only electrons (charges) in the drift current region, where the moving speed of the signal carriers is fast, are used, and electrons in the diffusion current region, where the moving speed is slow, are not used. Because only electrons (charges) in the drift current region are used, the time required for the signal carriers to be detected in the N + semiconductor region of the active well after photoelectric conversion is shortened. In addition, when the substrate 172 is thinned, the movement distance of the signal carriers to the N + semiconductor region of the active well is also shortened.
Therefore, in the back-illuminated CAPD sensor, even when the driving frequency is high, the signal carriers (electrons) generated in the respective regions within the substrate 172 can be sufficiently attracted to the N + semiconductor region of the active well, and the extraction efficiency of electrons can be improved.
Further, by thinning the substrate 172, a sufficient electron extraction efficiency can be ensured even at a higher driving frequency, and the tolerance for high-speed driving can be improved.
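The scaling argument above can be made concrete. In the following sketch, the mobility value and the dimensions are assumed illustrative numbers (low-field drift only, ignoring velocity saturation), not values from this description:

```python
# Hypothetical numeric sketch: with the applied voltage fixed, thinning the
# substrate both strengthens the field (E = V/d) and shortens the travel
# path, so the drift transit time of signal carriers scales with the square
# of the substrate thickness.

ELECTRON_MOBILITY_SI = 0.135  # m^2/(V*s), assumed low-field electron mobility in Si


def drift_transit_time(thickness_m: float, voltage_v: float,
                       mobility: float = ELECTRON_MOBILITY_SI) -> float:
    """t = d / v with drift velocity v = mu * E and E = V / d,
    hence t = d^2 / (mu * V)."""
    return thickness_m ** 2 / (mobility * voltage_v)


t_thick = drift_transit_time(6e-6, 1.5)  # 6 um substrate
t_thin = drift_transit_time(3e-6, 1.5)   # 3 um substrate
# Halving the thickness cuts the drift transit time to one quarter,
# which is what allows a higher driving frequency.
assert abs(t_thick / t_thin - 4.0) < 1e-9
```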
In particular, in the back-illuminated CAPD sensor, a voltage can be directly applied to the substrate 172 (i.e., the substrate 61), and therefore, the response speed of switching between the active state and the inactive state of the well is fast, and driving can be performed at a high driving frequency. In addition, the voltage may be directly applied to the
Further, in the back-illuminated solid-state imaging element 11 (CAPD sensor), a sufficient aperture ratio can be obtained, and therefore, the pixel can be miniaturized, and the miniaturization tolerance of the pixel can be improved.
In addition, the solid-
< modification 1 of the first embodiment >
< example of Pixel construction >
Further, in the above description, as shown in fig. 3, the case where a part of the signal extraction section 65 in the
Specifically, for example, as shown in fig. 9, the N + semiconductor region 71 and the P + semiconductor region 73 may be circular in shape. Further, in fig. 9, the same reference numerals are applied to portions corresponding to those in fig. 3, and the description thereof will be appropriately omitted.
Fig. 9 shows the N + semiconductor region 71 and the P + semiconductor region 73 when a part of the signal extraction section 65 in the
In this example, the oxide film 64 (not shown) is formed in a central portion of the
Then, in each signal extraction section 65, a circular P + semiconductor region 73 is formed at the center position, and the P + semiconductor region 73 is surrounded by a circular N + semiconductor region 71, more specifically, a ring-shaped N + semiconductor region 71, centering on the P + semiconductor region 73.
Fig. 10 is a plan view of the on-
As shown in fig. 10, an on-
Further, in fig. 2, the separation portion 75 including an oxide film or the like is disposed between the N + semiconductor region 71 and the P + semiconductor region 73, but the separation portion 75 is not limited thereto.
<
< example of Pixel construction >
Fig. 11 is a plan view showing a modification of the planar shape of the signal extraction section 65 in the
The planar shape of the signal extraction section 65 may be, for example, the octagonal shape shown in fig. 11, in addition to the rectangular shape shown in fig. 3 and the circular shape shown in fig. 9.
In addition, fig. 11 shows a plan view in the case where a separation portion 75 including an oxide film or the like is formed between the N + semiconductor region 71 and the P + semiconductor region 73.
In fig. 11, the line A-A' represents a cross-sectional line in fig. 37 described later, and the line B-B' represents a cross-sectional line in fig. 36 described later.
< second embodiment >
< example of Pixel construction >
Further, in the above description, in the signal extraction section 65, the configuration in which the P + semiconductor region 73 is surrounded by the N + semiconductor region 71 has been described as an example, but the N + semiconductor region may be surrounded by the P + semiconductor region.
In this case, the
Fig. 12 shows the arrangement of the N + semiconductor region and the P + semiconductor region when a part of the signal extraction section 65 in the
In this example, an oxide film 64 (not shown) is formed in a central portion of the
In the signal extraction section 65-1, a rectangular N + semiconductor region 201-1 corresponding to the N + semiconductor region 71-1 shown in fig. 3 is formed in the center of the signal extraction section 65-1. Then, the N + semiconductor region 201-1 is surrounded by the P + semiconductor region 202-1 formed in a rectangular frame shape, corresponding to the P + semiconductor region 73-1 shown in fig. 3. That is, the P + semiconductor region 202-1 is formed so as to surround the N + semiconductor region 201-1.
Similarly, in the signal extraction section 65-2, a rectangular N + semiconductor region 201-2 corresponding to the N + semiconductor region 71-2 shown in fig. 3 is formed in the center of the signal extraction section 65-2. Thus, the N + semiconductor region 201-2 is surrounded by the rectangular P + semiconductor region 202-2, more specifically, the P + semiconductor region 202-2 of the rectangular frame shape corresponding to the P + semiconductor region 73-2 shown in fig. 3.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the N + semiconductor region 201-1 from the N + semiconductor region 201-2, the N + semiconductor region 201-1 and the N + semiconductor region 201-2 are also simply referred to as the N + semiconductor region 201. In addition, hereinafter, in the case where it is not necessary to particularly distinguish between the P + semiconductor region 202-1 and the P + semiconductor region 202-2, the P + semiconductor region 202-1 and the P + semiconductor region 202-2 are also simply referred to as the P + semiconductor region 202.
Even in the case where the signal extraction section 65 has the configuration shown in fig. 12, as in the case of the configuration shown in fig. 3, the N + semiconductor region 201 functions as a charge detection section for detecting the amount of signal carriers, and the P + semiconductor region 202 functions as a voltage application section for generating an electric field by directly applying a voltage to the
< modification 1 of the second embodiment >
< example of Pixel construction >
In addition, as in the example shown in fig. 9, even in the case of the arrangement in which the N + semiconductor region 201 is surrounded by the P + semiconductor region 202, the shapes of the N + semiconductor region 201 and the P + semiconductor region 202 may be any shapes.
In other words, for example, as shown in fig. 13, the N + semiconductor region 201 and the P + semiconductor region 202 may be circular in shape. Further, in fig. 13, the same reference numerals are applied to portions corresponding to those in fig. 12, and the description thereof will be appropriately omitted.
Fig. 13 shows the N + semiconductor region 201 and the P + semiconductor region 202 when a part of the signal extraction section 65 in the
In this example, the oxide film 64 (not shown) is formed in a central portion of the
Then, in each signal extraction section 65, a circular N + semiconductor region 201 is formed at the center position, and the N + semiconductor region 201 is surrounded by a circular P + semiconductor region 202 centering on the N + semiconductor region 201, more specifically, a ring-shaped P + semiconductor region 202.
< third embodiment >
< example of Pixel construction >
Further, the N + semiconductor region and the P + semiconductor region formed in the signal extraction section 65 may be formed in a linear shape (rectangular shape).
In this case, for example, the
Fig. 14 shows the arrangement of the N + semiconductor region and the P + semiconductor when a part of the signal extraction section 65 in the
In this example, the oxide film 64 (not shown) is formed in the central portion of the
In the signal extraction section 65-1, a P + semiconductor region 231 having a line shape corresponding to the P + semiconductor region 73-1 shown in fig. 3 is formed in the center of the signal extraction section 65-1. Accordingly, N + semiconductor regions 232-1 of a line shape and N + semiconductor regions 232-2 of a line shape corresponding to the N + semiconductor regions 71-1 shown in fig. 3 are formed around the P + semiconductor region 231 such that the P + semiconductor region 231 is interposed between the N + semiconductor region 232-1 and the N + semiconductor region 232-2. That is, the P + semiconductor region 231 is formed at a position sandwiched between the N + semiconductor region 232-1 and the N + semiconductor region 232-2.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the N + semiconductor region 232-1 from the N + semiconductor region 232-2, the N + semiconductor region 232-1 and the N + semiconductor region 232-2 are also simply referred to as the N + semiconductor region 232.
In the example shown in fig. 3, the P + semiconductor region 73 is surrounded by the N + semiconductor region 71, but in the example shown in fig. 14, the P + semiconductor region 231 is interposed between two N + semiconductor regions 232 disposed adjacent to each other.
Similarly, in the signal extraction section 65-2, a P + semiconductor region 233 corresponding to the line shape of the P + semiconductor region 73-2 shown in FIG. 3 is formed in the center of the signal extraction section 65-2. Accordingly, the N + semiconductor region 234-1 of a line shape and the N + semiconductor region 234-2 of a line shape corresponding to the N + semiconductor region 71-2 shown in fig. 3 are formed around the P + semiconductor region 233 such that the P + semiconductor region 233 is interposed between the N + semiconductor region 234-1 and the N + semiconductor region 234-2.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the N + semiconductor region 234-1 from the N + semiconductor region 234-2, the N + semiconductor region 234-1 and the N + semiconductor region 234-2 are also simply referred to as the N + semiconductor region 234.
In the signal extraction section 65 of fig. 14, the P + semiconductor region 231 and the P + semiconductor region 233 function as a voltage application section corresponding to the P + semiconductor region 73 shown in fig. 3, and the N + semiconductor region 232 and the N + semiconductor region 234 function as a charge detection section corresponding to the N + semiconductor region 71 shown in fig. 3. In this case, for example, both the N + semiconductor region 232-1 and the N + semiconductor region 232-2 are connected to the FD portion a.
In addition, in the drawing, each of the line-shaped P + semiconductor region 231, the line-shaped N + semiconductor region 232, the line-shaped P + semiconductor region 233, and the line-shaped N + semiconductor region 234 may have any length in the lateral direction, and the regions need not all have the same length.
< fourth embodiment >
< example of Pixel construction >
Further, in the example shown in fig. 14, the structure in which the P + semiconductor region 231 or the P + semiconductor region 233 is interposed between the N + semiconductor regions 232 or the N + semiconductor regions 234 has been explained as an example, but in contrast, the N + semiconductor region may be interposed between the P + semiconductor regions.
In this case, for example, the
Fig. 15 shows the arrangement of the N + semiconductor region and the P + semiconductor region when a part of the signal extraction section 65 in the
In this example, the oxide film 64 (not shown) is formed in a central portion of the
In the signal extraction section 65-1, an N + semiconductor region 261 corresponding to the line shape of the N + semiconductor region 71-1 shown in fig. 3 is formed in the center of the signal extraction section 65-1. Accordingly, a P + semiconductor region 262-1 of a line shape and a P + semiconductor region 262-2 of a line shape corresponding to the P + semiconductor region 73-1 shown in fig. 3 are formed around the N + semiconductor region 261 such that the N + semiconductor region 261 is interposed between the P + semiconductor region 262-1 and the P + semiconductor region 262-2. That is, the N + semiconductor region 261 is formed at a position sandwiched by the P + semiconductor region 262-1 and the P + semiconductor region 262-2.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the P + semiconductor region 262-1 from the P + semiconductor region 262-2, the P + semiconductor region 262-1 and the P + semiconductor region 262-2 are also simply referred to as the P + semiconductor region 262.
Similarly, in the signal extraction section 65-2, an N + semiconductor region 263 corresponding to the line shape of the N + semiconductor region 71-2 shown in fig. 3 is formed in the center of the signal extraction section 65-2. Then, a P + semiconductor region 264-1 of a line shape and a P + semiconductor region 264-2 of a line shape corresponding to the P + semiconductor region 73-2 shown in fig. 3 are formed around the N + semiconductor region 263 such that the N + semiconductor region 263 is interposed between the P + semiconductor region 264-1 and the P + semiconductor region 264-2.
Further, hereinafter, in the case where it is not necessary to particularly distinguish the P + semiconductor region 264-1 from the P + semiconductor region 264-2, the P + semiconductor region 264-1 and the P + semiconductor region 264-2 are also simply referred to as the P + semiconductor region 264.
In the signal extraction section 65 of fig. 15, the P + semiconductor region 262 and the P + semiconductor region 264 function as a voltage application section corresponding to the P + semiconductor region 73 shown in fig. 3, and the N + semiconductor region 261 and the N + semiconductor region 263 function as a charge detection section corresponding to the N + semiconductor region 71 shown in fig. 3. Further, in the figure, each of the line-shaped N + semiconductor region 261, the line-shaped P + semiconductor region 262, the line-shaped N + semiconductor region 263, and the line-shaped P + semiconductor region 264 may have any length in the lateral direction, and the regions need not all have the same length.
< fifth embodiment >
< example of Pixel construction >
Further, in the above description, an example has been described in which each of the two signal extraction sections 65 is provided in each pixel constituting the
For example, in the case where one signal extraction section is formed in a pixel, for example, a pixel portion is configured as shown in fig. 16. Further, in fig. 16, the same reference numerals are applied to portions corresponding to those in fig. 3, and the description thereof will be appropriately omitted.
Fig. 16 shows the arrangement of the N + semiconductor region and the P + semiconductor region when a part of the signal extraction section in a part of the pixels provided in the
In this example, a
In other words, in the
Here, the P + semiconductor region 301 corresponds to the P + semiconductor region 73 shown in fig. 3, and functions as a voltage applying portion. In addition, the N + semiconductor region 302 corresponds to the N + semiconductor region 71 shown in fig. 3, and functions as a charge detection section. Further, the P + semiconductor region 301 or the N + semiconductor region 302 may be any shape.
In addition, the pixels 291-1 to 291-3 around the
In other words, for example, one signal extraction section 303 is formed in the central portion of the pixel 291-1. Then, in the signal extraction section 303, a circular P + semiconductor region 304 is formed at the central position, and the P + semiconductor region 304 is surrounded by a circular, more specifically ring-shaped, N + semiconductor region 305 centered on the P + semiconductor region 304.
The P + semiconductor region 304 and the N + semiconductor region 305 correspond to the P + semiconductor region 301 and the N + semiconductor region 302, respectively.
Further, hereinafter, the pixels 291-1 to 291-3 are also simply referred to as the pixels 291 without particularly distinguishing the pixels 291-1 to 291-3.
Therefore, in the case where one signal extraction section (well) is formed in each pixel, several pixels adjacent to each other are used when measuring the distance to the object by the indirect ToF method, and the distance information is calculated based on the pixel signals obtained from those pixels.
For example, the
As an example, the pixel 291-1, the pixel 291-3, and the like are driven so that the signal extraction portions of the pixels adjacent to the
After that, in the case where the voltage to be applied is switched so that the signal extraction section 65 of the
Then, the distance information is calculated based on the pixel signal read from the signal extraction section 65 in a state where the signal extraction section 65 is the active well and the pixel signal read from the signal extraction section 303 in a state where the signal extraction section 303 is the active well.
Therefore, even in the case where the number of signal extraction sections (wells) provided in a pixel is one, the distance can be measured according to the indirect ToF method by using pixels adjacent to each other.
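The indirect ToF calculation referred to above can be illustrated with a simplified two-tap model. The following Python sketch is illustrative only: it assumes a pulsed-modulation scheme with an ideal rectangular pulse, no ambient light, and an echo delay shorter than the pulse width; the function name and the charge-split relation are assumptions, not details taken from the embodiment.

```python
# Simplified two-tap indirect ToF model (illustrative assumptions:
# ideal rectangular pulse, no ambient light, echo delay < pulse width).
C_LIGHT = 299_792_458.0  # speed of light [m/s]

def distance_from_taps(q_active, q_opposite, pulse_width_s):
    """Estimate distance from the charges collected by the two taps.

    q_active:   charge collected while the tap driven in phase with the
                emitted pulse is the active well.
    q_opposite: charge collected while the opposite tap is active.
    The echo delay is proportional to the fraction of the returned
    pulse that spills into the second tap's accumulation window.
    """
    total = q_active + q_opposite
    if total <= 0:
        raise ValueError("no signal charge detected")
    delay_s = pulse_width_s * (q_opposite / total)
    return C_LIGHT * delay_s / 2.0  # halve the round-trip distance
```

For example, with a 100 ns pulse and a 25/75 charge split between the two taps, this model yields a distance of about 11.24 m.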
< sixth embodiment >
< example of Pixel construction >
As described above, three or more signal extraction sections (wells) may be provided in each pixel.
For example, in the case where four signal extraction sections (wells) are provided in a pixel, each pixel of the
Fig. 17 shows the arrangement of the N + semiconductor region and the P + semiconductor region when a part of the signal extraction sections of a part of the pixels provided in the
A cross-sectional view taken along line C-C' shown in fig. 17 corresponds to fig. 36 described later.
In this example, the
In other words, in the
The signal extraction sections 331-1 to 331-4 correspond to the signal extraction section 65 shown in fig. 16.
For example, in the signal extraction section 331-1, a circular P +
Here, the P +
In addition, the signal extraction sections 331-2 to 331-4 also have a configuration similar to that of the signal extraction section 331-1, and include a P + semiconductor region serving as a voltage application section and an N + semiconductor region serving as a charge detection section, respectively. In addition, the pixel 291 formed around the
Further, hereinafter, the signal extraction sections 331-1 to 331-4 are also simply referred to as signal extraction sections 331 without particularly distinguishing the signal extraction sections 331-1 to 331-4.
Therefore, in the case where four signal extraction sections are provided in each pixel, for example, when the distance is measured according to the indirect ToF method, the distance information is calculated by using the four signal extraction sections in the pixel.
As an example, the
After that, the voltage to be applied to each signal extraction section 331 is switched. That is, the
Then, the distance information is calculated based on the pixel signals read out from the signal extraction sections 331-1 and 331-3 in a state where the signal extraction sections 331-1 and 331-3 are the active wells, and the pixel signals read out from the signal extraction sections 331-2 and 331-4 in a state where the signal extraction sections 331-2 and 331-4 are the active wells.
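The four-tap readout described above can be related to the standard four-phase calculation used in continuous-wave indirect ToF. The sketch below is a generic illustration, not the driving method of the embodiment: the assignment of the four accumulated signals to phase offsets of 0°, 90°, 180°, and 270° and the choice of modulation frequency are assumptions.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def distance_four_phase(a0, a90, a180, a270, f_mod_hz):
    """Distance from four samples of the returned modulation.

    a0..a270 are the signals accumulated while the taps are driven at
    phase offsets of 0, 90, 180, and 270 degrees relative to the
    emitted light. The echo phase is recovered with atan2 and
    converted to distance (one full phase cycle spans c / (2 * f_mod)).
    """
    phase = math.atan2(a90 - a270, a0 - a180) % (2.0 * math.pi)
    return C_LIGHT * phase / (4.0 * math.pi * f_mod_hz)
```

The differential pairs (a0 − a180) and (a90 − a270) also cancel any ambient-light offset common to all four taps, which is one practical motivation for a four-tap pixel.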
< seventh embodiment >
< example of Pixel construction >
Further, the signal extraction section (well) may be shared between pixels adjacent to each other in the
In this case, each pixel of the
Fig. 18 shows the arrangement of the N + semiconductor region and the P + semiconductor region when a part of the signal extraction sections of a part of the pixels provided in the
In this example, the
For example, in the
The
In the
In particular, in this example, the P +
Here, the P +
In addition, the P +
In the
The P +
As described above, even in the case where the signal extraction section (well) is shared between adjacent pixels, the distance can be measured by the indirect ToF method according to an operation similar to the example shown in fig. 3.
As shown in fig. 18, in the case where the signal extraction section is shared between the pixels, for example, the distance between a pair of P + semiconductor regions for generating an electric field (i.e., current) becomes long, as does the distance between the P +
With this configuration, a current is less likely to flow between the P + semiconductor regions, and therefore, the power consumption of the pixel can be reduced, and miniaturization of the pixel is also facilitated.
Further, here, an example has been described in which one signal extraction section is shared between two pixels adjacent to each other, but one signal extraction section may be shared among three or more pixels adjacent to each other. In addition, in the case where the signal extraction portion is shared among two or more pixels adjacent to each other, only the charge detection portion for detecting signal carriers may be shared among the signal extraction portions, or only the voltage application portion for generating an electric field may be shared.
< eighth embodiment >
< example of Pixel construction >
Further, the on-chip lens or the interpixel light-shielding portion provided in each pixel such as the
Specifically, for example, the
The configuration of the
In the
< modification 1 of the eighth embodiment >
< example of Pixel construction >
In addition, the configuration of the
The configuration of the
In the example shown in fig. 20, the interpixel light-shielding
Further, it is apparent that neither the on-
<
< example of Pixel construction >
In addition, for example, as shown in fig. 21, the thickness of the on-chip lens in the optical axis direction can be optimized. Further, in fig. 21, the same reference numerals are applied to portions corresponding to those in fig. 2, and the description thereof will be appropriately omitted.
The configuration of the
In the
Generally, as the on-chip lens disposed on the front surface of the
< ninth embodiment >
< example of Pixel construction >
Further, a separation region for improving separation characteristics between adjacent pixels and for suppressing color mixture may be provided between the pixels formed in the
In this case, the
The configuration of the
In the
For example, when the separation region 441 is formed, a vertical groove (trench) is formed in a downward direction (in a direction perpendicular to the surface of the substrate 61) from the incident surface side of the
The buried separation region 441 is formed as described above, and therefore, the separation characteristic of infrared light between pixels can be improved, and the occurrence of color mixture can be suppressed.
< modification 1 of the ninth embodiment >
< example of Pixel construction >
Further, in the case where the buried separation region is formed in the
The configuration of the
In the
For example, when the
According to the buried
< tenth embodiment >
< example of Pixel construction >
Further, the thickness of the substrate on which the signal extraction section 65 is formed may be set according to various characteristics of the pixels and the like.
Therefore, for example, as shown in fig. 24, the
The configuration of the
That is, in the
The
Further, the film thickness and the like of various layers (films) formed appropriately on the incident surface side and the like of the
< eleventh embodiment >
< example of Pixel construction >
Further, in the above description, an example has been described in which the substrate constituting the
The configuration of the
In the
In addition, the
The thickness of the
In addition, the
Here, in the relationship between the substrate concentration and the resistance of the
Therefore, even in the case where the
< twelfth embodiment >
< example of Pixel construction >
As in the example described with reference to fig. 24, the thickness of the N-type semiconductor substrate may be set according to various characteristics of the pixel.
Therefore, for example, as shown in fig. 26, a substrate 561 constituting the
The configuration of the
In other words, in the
The substrate 561 includes, for example, an N-type semiconductor substrate having a thickness of 20 μm or more, the substrate 561 and the
< thirteenth embodiment >
< example of Pixel construction >
In addition, for example, a bias voltage is applied to the incident surface side of the
In this case, for example, the
In fig. 27, the
In contrast, the configuration of the
In the example shown by the arrow W62, the P +
For example, a film having positive fixed charges is laminated, and the P +
Here, a bias is applied by applying a voltage of 0V or less to the P +
Further, the configuration for applying a voltage to the incident surface side of the
< fourteenth embodiment >
< example of Pixel construction >
Further, in order to improve the sensitivity of the
In this case, the
The configuration of the
In the example shown in fig. 28, a reflecting
The
Accordingly, the reflecting
< fifteenth embodiment >
< example of Pixel construction >
Further, a P-well region including a P-type semiconductor region may be provided in the
In this case, the
The configuration of the
In the example shown in fig. 29, a P-
< sixteenth embodiment >
< example of Pixel construction >
In addition, a P-well region including a P-type semiconductor region may be provided in addition to the
In this case, the
The configuration of the
As described above, according to the present technology, the CAPD sensor is configured as a backside illumination type sensor, and therefore, characteristics such as pixel sensitivity can be improved.
< example of equivalent Circuit construction of Pixel >
Fig. 31 shows an equivalent circuit of the
With respect to the signal extraction section 65-1 including the N + semiconductor region 71-1, the P + semiconductor region 73-1, and the like, the
In addition, with respect to the signal extraction section 65-2 including the N + semiconductor region 71-2, the P + semiconductor region 73-2, and the like, the
The
The N + semiconductor regions 71-1 and 71-2 are charge detection sections that detect charges generated by performing photoelectric conversion on light incident on the
In the case where the drive signal TRG to be supplied to the gate electrode is in an activated state, the
The FD 722A temporarily holds the electric charge supplied from the N + semiconductor region 71-1. The FD 722B temporarily holds the electric charge supplied from the N + semiconductor region 71-2. The FD 722A corresponds to the FD part A described with reference to fig. 2, and the FD 722B corresponds to the FD part B.
In the case where the drive signal RST to be supplied to the gate is in an active state, the
In the
The
The
For example, the
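The readout chain formed by the transfer transistor, the FD, the reset transistor, the amplification transistor, and the selection transistor can be summarized with a toy model. All component values below (FD capacitance, source-follower gain, reset level) and the class name are illustrative assumptions, not values from the embodiment; the model only shows how transferred charge lowers the FD node voltage, which the source follower then drives onto the signal line when the pixel is selected.

```python
# Toy model of the per-tap readout chain: reset -> transfer -> read.
# Component values are illustrative, not taken from the embodiment.
E_CHARGE = 1.602176634e-19  # elementary charge [C]

class TapReadout:
    def __init__(self, c_fd_farad=1.5e-15, sf_gain=0.85, v_reset=2.8):
        self.c_fd = c_fd_farad    # floating-diffusion capacitance
        self.sf_gain = sf_gain    # source-follower (amplifier) gain
        self.v_reset = v_reset    # FD level after reset [V]
        self.q_fd = 0.0           # charge held on the FD [C]

    def reset(self):
        """RST active: the FD is returned to the reset level."""
        self.q_fd = 0.0

    def transfer(self, n_electrons):
        """TRG active: signal charge moves from the tap to the FD."""
        self.q_fd += n_electrons * E_CHARGE

    def read(self, selected=True):
        """SEL active: the source follower drives the signal line."""
        if not selected:
            return None
        return self.sf_gain * (self.v_reset - self.q_fd / self.c_fd)
```

With these assumed values, transferring 1000 electrons drops the FD by about 107 mV, and the output is that drop scaled by the source-follower gain.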
< other equivalent Circuit configuration example of pixels >
Fig. 32 shows another equivalent circuit of the
In fig. 32, the same reference numerals are applied to portions corresponding to those in fig. 31, and the description thereof will be omitted as appropriate.
The equivalent circuit in fig. 32 corresponds to the equivalent circuit in fig. 31, in which an additional capacitance 727 and a switching transistor 728 controlling its connection are added to each of the signal extraction sections 65-1 and 65-2.
Specifically, the additional capacitance 727A is connected between the
In the case where the drive signal FDG to be supplied to the gate is in an activated state, the switching
For example, under high illuminance with a large amount of incident light, the
On the other hand, in low illuminance with a small amount of incident light, the
As in the equivalent circuit in fig. 31, the additional capacitance 727 may be omitted; however, providing the additional capacitance 727 and using it selectively according to the amount of incident light makes it possible to secure a high dynamic range.
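The dynamic-range trade-off described here can be made concrete with a small calculation. The capacitance values, full-well figure, and selection rule below are illustrative assumptions; the point is only that connecting the additional capacitance lowers the conversion efficiency (volts per electron) while raising the amount of charge the node can hold before saturating.

```python
# Dual conversion gain sketch (all component values are assumptions).
E_CHARGE = 1.602176634e-19  # elementary charge [C]

def conversion_gain_uv_per_e(c_fd, c_add, fdg_on):
    """Conversion efficiency in microvolts per electron.

    fdg_on=True models high illuminance: the switching transistor
    connects the additional capacitance, lowering the gain but
    raising the charge the node can hold before saturating.
    """
    c_total = c_fd + (c_add if fdg_on else 0.0)
    return E_CHARGE / c_total * 1e6

def select_fdg(expected_electrons, full_well_fd):
    """Connect the extra capacitance only when FD alone would saturate."""
    return expected_electrons > full_well_fd
```

With an assumed 1.5 fF FD and 3.0 fF additional capacitance, the gain drops from roughly 107 µV/e− to about 36 µV/e− when the switch is closed, trading sensitivity for headroom under strong incident light.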
< example of arrangement of Voltage supply lines >
Next, the arrangement of voltage supply lines for applying a predetermined voltage MIX0 or MIX1 to the P + semiconductor regions 73-1 and 73-2 as the voltage applying section of the signal extracting section 65 of each
In fig. 33 and 34, the circular configuration shown in fig. 9 will be adopted as the configuration of the signal extraction section 65 of each
A of fig. 33 is a plan view showing a first arrangement example of the voltage supply lines.
In the first arrangement example, the voltage supply line 741-1 or 741-2 is wired along the vertical direction between (that is, on the boundary of) two pixels adjacent in the horizontal direction, with respect to the plurality of
The voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction section 65-1 which is one of the two signal extraction sections 65 within the
In the first arrangement example, the two voltage supply lines 741-1 and 741-2 are arranged for two columns of pixels, and therefore, in the
B of fig. 33 is a plan view showing a second arrangement example of the voltage supply lines.
In the second arrangement example, two voltage supply lines 741-1 and 741-2 are wired along the vertical direction with respect to one pixel column of a plurality of
The voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction section 65-1 which is one of the two signal extraction sections 65 within the
In the second arrangement example, two voltage supply lines 741-1 and 741-2 are wired for one pixel column, and therefore, four voltage supply lines 741 are arranged for two columns of pixels. In the
The two configuration examples of A and B of fig. 33 are periodic configurations in which the connection of the voltage supply line 741-1 to the P + semiconductor region 73-1 of the signal extraction section 65-1 and the connection of the voltage supply line 741-2 to the P + semiconductor region 73-2 of the signal extraction section 65-2 are periodically repeated for the pixels arranged in the vertical direction.
In the first configuration example of A of fig. 33, the number of voltage supply lines 741-1 and 741-2 to be wired with respect to the
In the second configuration example of B of fig. 33, the number of voltage supply lines 741-1 and 741-2 to be wired is increased as compared with the first configuration example, but the number of signal extraction sections 65 connected to one voltage supply line 741 is halved. Therefore, the wiring load can be reduced, and the second configuration example is effective for high-speed driving or when the total pixel number of the
A of fig. 34 is a plan view showing a third arrangement example of the voltage supply lines.
The third configuration example is an example in which two voltage supply lines 741-1 and 741-2 are arranged with respect to two columns of pixels, as in the first configuration example of A of fig. 33.
The third configuration example is different from the first configuration example of A of fig. 33 in that the connection destinations of the signal extraction sections 65-1 and 65-2 are different in two pixels arranged in the vertical direction.
Specifically, for example, in a
B of fig. 34 is a plan view showing a fourth arrangement example of the voltage supply lines.
The fourth configuration example is an example in which two voltage supply lines 741-1 and 741-2 are arranged with respect to two columns of pixels, as in the second configuration example of B of fig. 33.
The fourth configuration example is different from the second configuration example of B of fig. 33 in that the connection destinations of the signal extraction sections 65-1 and 65-2 are different in two pixels arranged in the vertical direction.
Specifically, for example, in a
In the third configuration example of A of fig. 34, the number of voltage supply lines 741-1 and 741-2 to be wired with respect to the
In the fourth configuration example of B of fig. 34, the number of voltage supply lines 741-1 and 741-2 to be wired is increased as compared with the third configuration example, but the number of signal extraction sections 65 connected to one voltage supply line 741 is halved. Therefore, the wiring load can be reduced, and the fourth configuration example is effective for high-speed driving or when the total pixel number of the
The two configuration examples of A and B of fig. 34 are mirror-image configurations in which the connection destinations of two pixels adjacent up and down (in the vertical direction) are mirror-inverted.
As shown in A of fig. 35, in the periodic configuration, the voltages applied to two adjacent signal extraction sections 65 facing each other across the pixel boundary are different voltages, and therefore, charge exchange occurs between the adjacent pixels. For this reason, the charge transfer efficiency is better in the periodic configuration than in the mirror configuration, but the color mixing characteristics of the adjacent pixels are worse in the periodic configuration than in the mirror configuration.

On the other hand, as shown in B of fig. 35, in the mirror configuration, the voltages applied to the two adjacent signal extraction sections 65 facing each other across the pixel boundary are the same voltage, and therefore, charge exchange between the adjacent pixels is suppressed. For this reason, the charge transfer efficiency is worse in the mirror configuration than in the periodic configuration, but the color mixing characteristics of the adjacent pixels are better in the mirror configuration than in the periodic configuration.
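The difference between the periodic and mirror configurations can be captured in a few lines. This sketch is a simplified model in which tap 0 stands for the signal extraction section 65-1, tap 1 for the signal extraction section 65-2, and the two taps facing each other across a pixel boundary are tap 1 of row n and tap 0 of row n + 1; these conventions are assumptions for illustration, not taken from the figures.

```python
def supply_line_for_tap(row, tap, layout):
    """Return which voltage supply line (0 for 741-1, 1 for 741-2)
    drives a tap's P+ region in a given pixel row of one column.

    "periodic": the same assignment repeats on every row, so taps that
    face each other across a pixel boundary see different voltages.
    "mirror": the assignment is flipped on every other row, so those
    facing taps share one line and see the same voltage.
    """
    if layout == "periodic":
        return tap
    if layout == "mirror":
        return tap if row % 2 == 0 else 1 - tap
    raise ValueError(f"unknown layout: {layout}")
```

In the mirror layout, tap 1 of row 0 and tap 0 of row 1 map to the same supply line, hence the same voltage and suppressed charge exchange; in the periodic layout the same pair maps to different lines, which is the source of the trade-off described above.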
< Cross-sectional Structure of multiple pixels according to the fourteenth embodiment >
In the cross-sectional configuration of the pixel shown in fig. 2 and the like, part of the N + semiconductor region 71-1 and the N-semiconductor region 72-1, which surround the P + semiconductor region 73-1 and the P-semiconductor region 74-1 with the P + semiconductor region 73-1 and the P-semiconductor region 74-1 as the center, is not shown. In addition, the multilayer wiring layer formed on the surface of the
Therefore, hereinafter, for the above-described several embodiments, cross-sectional views of a plurality of adjacent pixels are shown without omitting the N + semiconductor region 71-1 and the N-semiconductor region 72-1 surrounding the P + semiconductor region 73-1 and the P-semiconductor region 74-1, or the multilayer wiring layer.
First, fig. 36 and 37 show sectional views of a plurality of pixels of the fourteenth embodiment shown in fig. 28.
The fourteenth embodiment shown in fig. 28 is a configuration of a pixel including a reflecting
Fig. 36 corresponds to a cross-sectional view taken along line B-B' in fig. 11, and fig. 37 corresponds to a cross-sectional view taken along line A-A' in fig. 11. Further, a cross-sectional view taken along line C-C' in fig. 17 may be as shown in fig. 36.
As shown in fig. 36, in each
In the signal extraction section 65-1, the N + semiconductor region 71-1 and the N-semiconductor region 72-1 are formed so as to surround the P + semiconductor region 73-1 and the P-semiconductor region 74-1 with the P + semiconductor region 73-1 and the P-semiconductor region 74-1 as the center. The P + semiconductor region 73-1 and the N + semiconductor region 71-1 are in contact with a
In the signal extraction section 65-2, the N + semiconductor region 71-2 and the N-semiconductor region 72-2 are formed so as to surround the P + semiconductor region 73-2 and the P-semiconductor region 74-2 with the P + semiconductor region 73-2 and the P-semiconductor region 74-2 as the center. The P + semiconductor region 73-2 and the N + semiconductor region 71-2 are in contact with the
The
A film having a positive fixed charge is laminated, and therefore, a P +
As shown in fig. 36, in the case where the on-
A
As shown in fig. 37, the pixel transistor Tr is formed in a pixel boundary region of a boundary surface portion of the
Among the five metal films M1 to M5 of the
Further, in this example, the reflecting member 815 (reflecting member 631) and the charge extraction wiring are disposed on the same layer of the metal film M1, but are not necessarily limited to being disposed on the same layer.
In the metal film M2 of the second layer from the
In the metal film M3 of the third layer from the
In the metal films M4 and M5 of the fourth layer and the fifth layer from the
Further, the planar arrangement of the five metal films M1 to M5 of the
< Cross-sectional Structure of multiple pixels in the ninth embodiment >
Fig. 38 is a sectional view showing the pixel structure of the ninth embodiment shown in fig. 22 for a plurality of pixels, without omitting the N + semiconductor region 71-1, the N-semiconductor region 72-1, or the multilayer wiring layer.
The ninth embodiment shown in fig. 22 is a constitution of a pixel including a separation region 441 on a pixel boundary in a
Other configurations of the five-layer metal films M1 to M5 including the signal extraction sections 65-1 and 65-2, the
< Cross-sectional Structure of multiple pixels in modification 1 of the ninth embodiment >
Fig. 39 is a sectional view showing the pixel structure of modification 1 of the ninth embodiment shown in fig. 23 for a plurality of pixels, without omitting the N + semiconductor region 71-1, the N-semiconductor region 72-1, or the multilayer wiring layer.
A modification 1 of the ninth embodiment shown in fig. 23 is a configuration including pixels of a
Other configurations of the five-layer metal films M1 to M5 including the signal extraction sections 65-1 and 65-2, the
< Cross-sectional Structure of multiple pixels according to the fifteenth embodiment >
Fig. 40 is a sectional view showing the pixel structure of the fifteenth embodiment shown in fig. 29 for a plurality of pixels, without omitting the N + semiconductor region 71-1, the N-semiconductor region 72-1, or the multilayer wiring layer.
The fifteenth embodiment shown in fig. 29 is a configuration including a P-
Other configurations of the five-layer metal films M1 to M5 including the signal extraction sections 65-1 and 65-2, the
< Cross-sectional Structure of multiple pixels according to the tenth embodiment >
Fig. 41 is a sectional view showing the pixel structure of the tenth embodiment shown in fig. 24 for a plurality of pixels, without omitting the N + semiconductor region 71-1, the N-semiconductor region 72-1, or the multilayer wiring layer.
The tenth embodiment shown in fig. 24 is a constitution in which a
Other configurations of the five-layer metal films M1 to M5 including the signal extraction sections 65-1 and 65-2, the
< example of planar arrangement of five metal films M1-M5 >
Next, a planar arrangement example of the five metal films M1 to M5 of the
A of fig. 42 shows an example of a planar arrangement of the metal film M1 as the first layer among the five metal films M1 to M5 in the
B of fig. 42 shows an example of the planar arrangement of the metal film M2 as the second layer among the five metal films M1 to M5 in the
C of fig. 42 shows an example of the planar arrangement of the metal film M3 as the third layer among the five metal films M1 to M5 in the
A of fig. 43 shows an example of a planar arrangement of the metal film M4 as the fourth layer among the five metal films M1 to M5 in the
B of fig. 43 shows an example of a planar arrangement of the metal film M5 as the fifth layer among the five metal films M1 to M5 in the
Further, in A to C of fig. 42 and A and B of fig. 43, the region of the
In A to C of fig. 42 and A and B of fig. 43, the longitudinal direction in the drawing is the vertical direction of the
As shown in A of fig. 42, a reflecting
In addition, the pixel
In addition, wires such as the ground line 832, the power supply line 833, and the ground line 834 are formed between the
Therefore, the metal film M1 of the first layer is arranged symmetrically between the region on the signal extraction section 65-1 side and the region on the signal extraction section 65-2 side within the pixel, so that the wiring load is balanced between the signal extraction sections 65-1 and 65-2. With this configuration, drive variation between the signal extraction sections 65-1 and 65-2 is reduced.
In the metal film M1 of the first layer, a reflecting
As shown in B of fig. 42, in a metal film M2 which is the second layer of a
In the metal film M2 of the second layer, the control line region 851 is arranged in the boundary region of the
In addition, a capacitance region 852 in which the FD 722B or the additional capacitance 727A is formed is arranged in a predetermined region different from the control line region 851. In the capacitance region 852, the pattern of the metal film M2 is formed in the shape of comb teeth, and thus the FD 722B or the additional capacitance 727A is constituted.

The FD 722B or the additional capacitance 727A is disposed in the metal film M2 as the second layer, and therefore, the pattern of the FD 722B or the additional capacitance 727A can be laid out freely according to the wiring capacitance required in the design, and the degree of freedom in design can be improved.
As shown in C of fig. 42, in the metal film M3 as the third layer of the
In the metal film M4 of the fourth layer and the metal film M5 of the fifth layer of the
The metal film M4 and the metal film M5 shown in A and B of fig. 43 show an example in the case of employing the voltage supply line 741 of the first configuration example shown in A of fig. 33.
The voltage supply line 741-1 of the metal film M4 is connected to the voltage-application wiring 814 (e.g., fig. 36) of the metal film M1 via the metal films M3 and M2, and the voltage-
The voltage supply lines 741-1 and 741-2 of the metal film M5 are connected to the driving section of the peripheral circuit section at the periphery of the
As described above, the
The solid-
< planar arrangement example of pixel transistor >
Fig. 44 is a plan view in which the metal film M1 of the first layer shown in A of fig. 42 and a polysilicon layer forming the gate electrodes and the like of the pixel transistors Tr, formed on the metal film M1, overlap each other.
A of fig. 44 is a plan view in which the metal film M1 in C of fig. 44 and the polysilicon layer in B of fig. 44 overlap each other; B of fig. 44 is a plan view of only the polysilicon layer; and C of fig. 44 is a plan view of only the metal film M1. The plan view of the metal film M1 in C of fig. 44 is the same as that shown in A of fig. 42, but the hatching is omitted.
As explained with reference to A of fig. 42, the pixel
As shown in B of fig. 44, for example, the pixel transistor Tr corresponding to each of the signal extraction sections 65-1 and 65-2 is arranged in the pixel
In B of fig. 44, with reference to an intermediate line (not shown) of the two signal extraction sections 65-1 and 65-2,
The wirings of the metal film M1 shown in C of fig. 44, which connect between the pixel transistors Tr, are arranged symmetrically in the vertical direction with reference to the intermediate line (not shown) of the two signal extraction sections 65-1 and 65-2.
Accordingly, the plurality of pixel transistors Tr in the pixel
< modification of reflecting
Next, a modification of the
In the above example, as shown in A of fig. 42, the reflecting
In contrast, for example, as shown in A of fig. 45, the
As shown in B of fig. 45, the
Further, B of fig. 45 shows an example of a stripe shape extending in the vertical direction, but the stripe shape may extend in the horizontal direction.
In addition, as shown in C of fig. 45, the reflecting
Further, as shown in A of fig. 46, since a part of the
B of fig. 46 shows the pattern of the metal film M1 in the case where the reflecting
< substrate configuration example of solid-state imaging element >
In the solid-
A of fig. 47 shows an example in which the solid-
In this case, a
The
In addition, as shown in B of fig. 47, in the solid-
In addition, as shown in C of fig. 47, in the solid-
As with the solid-
< example of distance measuring Module >
Fig. 48 is a block diagram showing an example of the configuration of a distance measurement module that outputs distance measurement information by using the solid-
The
The
The light emission control signal CLKp is not limited to a rectangular wave, and may be any periodic signal. For example, the light emission control signal CLKp may be a sine wave.
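As a rough illustration of the rectangular light emission control signal and of what a modulation frequency such as 20 MHz implies for ranging, consider the following sketch. The function names and the 50% duty cycle are assumptions; the unambiguous-range expression is the standard c / (2f) relation for continuous-wave operation, not a figure taken from the embodiment.

```python
# Illustrative model of a rectangular emission control signal and the
# corresponding unambiguous measurement range (assumed 50% duty cycle).
C_LIGHT = 299_792_458.0  # speed of light [m/s]

def emission_signal(t_s, f_hz):
    """Value (1 = light on, 0 = off) of a 50%-duty rectangular
    emission control signal of frequency f_hz at time t_s."""
    return 1 if (t_s * f_hz) % 1.0 < 0.5 else 0

def unambiguous_range_m(f_hz):
    """Maximum distance measurable before the echo phase wraps by a
    full modulation period (round trip -> factor of two)."""
    return C_LIGHT / (2.0 * f_hz)
```

At 20 MHz the unambiguous range is about 7.49 m; raising the modulation frequency shortens this range while improving distance precision, which is the usual trade-off in choosing CLKp.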
The light
The
The above-described solid-
As described above, the solid-
As described above, according to the present technology, the CAPD sensor is configured as a back-illuminated light-receiving element, and therefore, the distance measurement characteristics can be improved.
Further, in the present technology, it is apparent that the above embodiments can be appropriately combined. In other words, for example, depending on which characteristics, such as the pixel sensitivity, are prioritized, it is possible to appropriately select the number of signal extraction portions provided in the pixel, the arrangement positions of the signal extraction portions, the shape of the signal extraction portions, whether or not a shared structure is adopted, the presence or absence of the on-chip lens, the presence or absence of the inter-pixel light shielding portion, the presence or absence of the separation region, the thickness of the on-chip lens or the substrate, the kind of the substrate or the film design, the presence or absence of a bias applied to the incident surface, the presence or absence of the reflection member, and the like.
In addition, in the above description, an example using electrons as signal carriers has been described, but holes generated by photoelectric conversion may be used as signal carriers. In this case, it is sufficient that the charge detection section for detecting signal carriers includes a P + semiconductor region and the voltage application section for generating an electric field in the substrate includes an N + semiconductor region, and therefore, holes as signal carriers are detected in the charge detection section provided in the signal extraction section.
< application example of Mobile body >
The technique according to the present disclosure (present technique) can be applied to various products. For example, the technology according to the present disclosure is implemented as an apparatus to be mounted on any type of moving body such as an automobile, an electric automobile, a hybrid electric automobile, a motorcycle, a bicycle, a personal mobile device, an airplane, a drone, a ship, and a robot.
Fig. 49 is a block diagram showing an example of the schematic configuration of a vehicle control system, which is an example of a mobile body control system to which the technique according to the present disclosure can be applied.
The vehicle control system includes a plurality of electronic control units connected to one another via a communication network. In the example shown in Fig. 49, the vehicle control system includes a drive system control unit, a body system control unit, a vehicle exterior information detection unit, an in-vehicle information detection unit, and an integrated control unit. A microcomputer and an audio/image output section are shown as the functional configuration of the integrated control unit.
The drive system control unit controls the operation of devices related to the drive system of the vehicle, such as a driving force generating device, a driving force transmission mechanism, a steering mechanism, and a braking device, in accordance with various programs.
The body system control unit controls the operation of various devices provided on the vehicle body, such as a keyless entry system, power window devices, and various lamps, in accordance with various programs.
The vehicle exterior information detection unit detects information about the exterior of the vehicle equipped with the vehicle control system. For example, an imaging section is connected to the vehicle exterior information detection unit, which causes the imaging section to capture images of the exterior of the vehicle and receives the captured images. The imaging section is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light; the electric signal can be output as an image or as distance measurement information.
The in-vehicle information detection unit detects information about the interior of the vehicle, for example, the state of the driver.
For example, the microcomputer can calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the information acquired inside and outside the vehicle, and can perform cooperative control intended to realize the functions of an advanced driver assistance system (ADAS), including collision avoidance or impact mitigation, following travel based on the inter-vehicle distance, constant-speed travel, a vehicle collision warning, and a lane departure warning.
In addition, the microcomputer can perform cooperative control intended for automated driving, in which the vehicle travels autonomously without depending on the operation of the driver, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the acquired information about the surroundings of the vehicle.
In addition, the microcomputer can output a control command to the body system control unit based on the information about the exterior of the vehicle, for example, performing cooperative control intended to prevent glare, such as switching the headlamps from high beam to low beam in accordance with the position of a preceding vehicle or an oncoming vehicle.
The audio/image output section transmits an output signal of at least one of audio and an image to an output device, such as a speaker or a display, capable of visually or audibly notifying an occupant of the vehicle or the outside of the vehicle of information.
Fig. 50 is a diagram illustrating an example of the mounting positions of the imaging sections.
In Fig. 50, a vehicle 12100 includes imaging sections 12101, 12102, 12103, 12104, and 12105 as the imaging section.
For example, the imaging sections 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the head of the vehicle 12100, the side mirrors, the rear bumper or rear door, and the upper portion of the windshield in the vehicle interior. The imaging section 12101 provided at the vehicle head and the imaging section 12105 provided at the upper portion of the windshield mainly obtain images of the area ahead of the vehicle 12100. The imaging sections 12102 and 12103 provided on the side mirrors mainly obtain images of the sides of the vehicle 12100. The imaging section 12104 provided at the rear bumper or rear door mainly obtains images of the area behind the vehicle 12100. The front images acquired by the imaging sections 12101 and 12105 are mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a traffic lane, and the like.
Further, Fig. 50 shows an example of the imaging ranges of the imaging sections 12101 to 12104. The imaging range 12111 represents the imaging range of the imaging section 12101 provided at the vehicle head, the imaging ranges 12112 and 12113 represent the imaging ranges of the imaging sections 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 represents the imaging range of the imaging section 12104 provided at the rear bumper or rear door. For example, by superimposing the image data captured by the imaging sections 12101 to 12104, a bird's-eye view image of the vehicle 12100 as seen from above can be obtained.
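A bird's-eye composition of this kind typically rests on mapping each camera's ground-plane pixels through a homography before blending. The following NumPy sketch illustrates only that mapping step; the homography H, the function name, and the per-camera calibration it presumes are assumptions for illustration, not details given in this document.

```python
import numpy as np

def to_birds_eye(points_px, H):
    """Map image pixel coordinates (N x 2) to bird's-eye-view coordinates
    using a 3x3 ground-plane homography H (one per camera, obtained
    from calibration)."""
    # Lift to homogeneous coordinates, apply H, then perspective-divide.
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```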
At least one of the imaging sections 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging sections 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for detecting a phase difference.
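For the stereo-camera variant mentioned here, depth follows from the classic disparity relation Z = f · B / d. The sketch below is illustrative only; the function name and parameters are assumptions, not part of the disclosure.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d.

    disparity_px: horizontal pixel shift of a feature between the two views,
    focal_px: focal length expressed in pixels,
    baseline_m: distance between the two camera centers [m].
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```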
For example, based on the distance information obtained from the imaging sections 12101 to 12104, the microcomputer can determine the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of that distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that is located on the traveling path of the vehicle 12100 and travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). The microcomputer can further set an inter-vehicle distance to be maintained behind the preceding vehicle and perform automatic brake control, automatic acceleration control, and the like.
For example, based on the distance information obtained from the imaging sections 12101 to 12104, the microcomputer can classify three-dimensional object data into two-wheeled vehicles, standard-sized vehicles, large vehicles, pedestrians, and other obstacles such as utility poles, and use the classified data for automatic avoidance of obstacles. When there is a risk of collision with an obstacle, the microcomputer can output a warning to the driver or perform forced deceleration or avoidance steering, thereby providing driving assistance for collision avoidance.
At least one of the imaging sections 12101 to 12104 may be an infrared camera for detecting infrared light. For example, the microcomputer can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging sections 12101 to 12104, for example, by extracting feature points from the captured images and performing pattern matching on a series of feature points indicating the outline of an object. When a pedestrian is recognized, the audio/image output section can control the display so as to superimpose an emphasizing contour line on the recognized pedestrian.
As described above, an example of a vehicle control system to which the technology according to the present disclosure can be applied has been explained. In the above configuration, the technique according to the present disclosure may be applied to the imaging sections 12101 to 12104. By doing so, distance information to surrounding objects can be obtained with improved accuracy, which can be used to enhance the driving assistance functions described above.
In addition, the embodiments of the present technology are not limited to the above-described embodiments, and various changes may be made within a range not departing from the gist of the present technology.
In addition, the effects described herein are merely examples and are not limiting; other effects may also be provided.
The present technology can also have the following configurations.
(A1)
A light receiving element comprising:
an on-chip lens;
a wiring layer; and
a semiconductor layer disposed between the on-chip lens and the wiring layer,
wherein the semiconductor layer includes
a first voltage applying portion to which a first voltage is applied,
a second voltage applying portion to which a second voltage different from the first voltage is applied,
a first charge detecting portion arranged around the first voltage applying portion, and
a second charge detecting portion arranged around the second voltage applying portion,
the wiring layer includes
at least one layer having a first voltage application wiring configured to supply the first voltage, a second voltage application wiring configured to supply the second voltage, and a reflective member, and
the reflective member is provided so as to overlap the first charge detecting portion or the second charge detecting portion in a plan view.
(A2)
The light-receiving element according to (A1), wherein the first voltage applying portion, the second voltage applying portion, the first charge detecting portion, and the second charge detecting portion are in contact with the wiring layer.
(A3)
The light-receiving element according to (A1) or (A2), wherein the at least one layer having the first voltage application wiring, the second voltage application wiring, and the reflective member includes a layer closest to the semiconductor layer.
(A4)
The light-receiving element according to any one of (A1) to (A3), wherein the first voltage applying portion or the second voltage applying portion includes
a first region containing an acceptor element at a first impurity concentration on the wiring layer side, and
a second region containing an acceptor element at a second impurity concentration lower than the first impurity concentration on the on-chip lens side.
(A5)
The light-receiving element according to any one of (A1) to (A4), wherein the first charge detecting portion or the second charge detecting portion includes
a third region containing a donor element at a third impurity concentration on the wiring layer side, and
a fourth region containing a donor element at a fourth impurity concentration lower than the third impurity concentration on the on-chip lens side.
(A6)
The light-receiving element according to any one of (A1) to (A5), wherein the reflective member comprises a metal film.
(A7)
The light-receiving element according to any one of (A1) to (A6), wherein the reflective member is symmetrically arranged in a region on the first charge detecting portion side and a region on the second charge detecting portion side.
(A8)
The light-receiving element according to any one of (A1) to (A7), wherein the reflective members are arranged in a lattice pattern.
(A9)
The light-receiving element according to any one of (A1) to (A7), wherein the reflective member is arranged in a stripe-like pattern.
(A10)
The light-receiving element according to any one of (A1) to (A7), wherein the reflective member is arranged only in a pixel center region.
(A11)
The light-receiving element according to any one of (A1) to (A7), wherein the wiring layer further includes a wiring capacitance on the same layer as the reflective member.
(A12)
The light-receiving element according to any one of (A1) to (A11), wherein the wiring layer further includes a wiring capacitance on a layer different from the reflective member.
(A13)
The light-receiving element according to any one of (A1) to (A12), wherein the wiring layer further includes a voltage supply line configured to supply the first voltage or the second voltage to the first voltage application wiring and the second voltage application wiring.
(A14)
The light-receiving element according to (A13), wherein the voltage supply lines are arranged in a mirror arrangement in which the connection destinations of two vertically adjacent pixels are mirror-inverted.
(A15)
The light-receiving element according to (A13), wherein the voltage supply lines are arranged in a periodic arrangement that repeats periodically for the pixels arranged in the vertical direction.
(A16)
The light-receiving element according to any one of (A13) to (A15), wherein two voltage supply lines are arranged with respect to two columns of pixels.
(A17)
The light-receiving element according to any one of (A13) to (A15), wherein four voltage supply lines are arranged with respect to two columns of pixels.
(A18)
The light-receiving element according to any one of (A1) to (A17), wherein the wiring layer further includes
a first pixel transistor configured to drive the first charge detecting portion, and
a second pixel transistor configured to drive the second charge detecting portion, and
the first pixel transistor and the second pixel transistor are symmetrically arranged.
(B1)
An imaging element comprising:
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light,
wherein the pixel comprises
a substrate configured to perform photoelectric conversion on incident light, and
a signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting a signal carrier generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident, within the substrate.
(B2)
The imaging element according to (B1), wherein two signal extraction sections are formed in the pixel.
(B3)
The imaging element according to (B1), wherein one signal extraction section is formed in the pixel.
(B4)
The imaging element according to (B1), wherein three or more signal extraction sections are formed in the pixel.
(B5)
The imaging element according to (B1), wherein the signal extraction section is shared between the pixel and another pixel adjacent to the pixel.
(B6)
The imaging element according to (B1), wherein the voltage applying section is shared between the pixel and another pixel adjacent to the pixel.
(B7)
The imaging element according to any one of (B1) to (B6), wherein the signal extraction section includes a P-type semiconductor region as the voltage application section and an N-type semiconductor region as the charge detection section, the N-type semiconductor region being formed so as to surround the P-type semiconductor region.
(B8)
The imaging element according to any one of (B1) to (B6), wherein the signal extraction section includes an N-type semiconductor region as the charge detection section and a P-type semiconductor region as the voltage application section, the P-type semiconductor region being formed so as to surround the N-type semiconductor region.
(B9)
The imaging element according to any one of (B1) to (B6), wherein the signal extraction section includes first and second N-type semiconductor regions as the charge detection section, and a P-type semiconductor region as the voltage application section, the P-type semiconductor region being formed at a position sandwiched between the first and second N-type semiconductor regions.
(B10)
The imaging element according to any one of (B1) to (B6), wherein the signal extraction section includes first and second P-type semiconductor regions as the voltage application section, and an N-type semiconductor region as the charge detection section, the N-type semiconductor region being formed at a position sandwiched between the first and second P-type semiconductor regions.
(B11)
The imaging element according to any one of (B1) to (B10), wherein a voltage is applied to the incident surface side in the substrate.
(B12)
The imaging element according to any one of (B1) to (B11), wherein the pixel further includes a reflecting member configured to reflect light incident on the substrate from the incident surface, the reflecting member being formed on a surface of the substrate on a side opposite to the incident surface.
(B13)
The imaging element according to any one of (B1) to (B12), wherein the signal carrier includes electrons.
(B14)
The imaging element according to any one of (B1) to (B12), wherein the signal carrier includes holes.
(B15)
The imaging element according to any one of (B1) to (B14), wherein the pixel further comprises a lens configured to condense light and make the light incident on the substrate.
(B16)
The imaging element according to any one of (B1) to (B15), wherein the pixel further includes an inter-pixel light-shielding portion configured to shield incident light, the inter-pixel light-shielding portion being formed at a pixel end portion on the incident surface of the substrate.
(B17)
The imaging element according to any one of (B1) to (B16), wherein the pixel further includes a pixel separation region configured to penetrate at least a part of the substrate and block incident light, the pixel separation region being formed at a pixel end portion within the substrate.
(B18)
The imaging element according to any one of (B1) to (B17), wherein the substrate comprises a P-type semiconductor substrate having a resistance of 500 [Ωcm] or more.
(B19)
The imaging element according to any one of (B1) to (B17), wherein the substrate comprises an N-type semiconductor substrate having a resistance of 500 [Ωcm] or more.
(B20)
An imaging device comprising:
a pixel array section including a plurality of pixels configured to perform photoelectric conversion on incident light; and
a signal processing section configured to calculate distance information to an object based on a signal output from the pixel,
wherein the pixel comprises
a substrate configured to perform photoelectric conversion on incident light, and
a signal extraction portion including a voltage application portion for generating an electric field by applying a voltage to the substrate and a charge detection portion for detecting a signal carrier generated by photoelectric conversion, the signal extraction portion being provided on a surface of the substrate on a side opposite to an incident surface on which light is incident, within the substrate.
List of reference numerals
11 solid-state imaging element
21 pixel array section
22 vertical drive section
51 pixel
61 substrate
62 on-chip lens
71-1, 71-2, 71 N+ semiconductor region
73-1, 73-2, 73 P+ semiconductor region
441-1, 441-2, 441 separation region
471-1, 471-2, 471 separation region
631 reflecting member
721 transfer transistor
722 FD
723 reset transistor
724 amplifying transistor
725 selection transistor
727 additional capacitor
728 switching transistor
741 voltage supply line
811 multilayer wiring layer
812 interlayer insulating film
813 power supply line
814 voltage application wiring
815 reflecting member
816 voltage application wiring
817 control line
M1 to M5 metal films