Signal processing device, signal processing method, and distance measuring module

Document No.: 1836136; Publication date: 2021-11-12

Description: The present technology, "Signal processing device, signal processing method, and distance measuring module," was designed and created by 海津俊, 三原基, 神谷拓郎, and 青竹峻太郎 on 2020-03-19. Its main content is as follows: The present technology relates to a signal processing device, a signal processing method, and a distance measuring module capable of improving distance measurement accuracy. The signal processing apparatus is provided with an estimation unit that, in a pixel having a first tap for detecting charge photoelectrically converted by a photoelectric conversion unit and a second tap for detecting charge photoelectrically converted by the photoelectric conversion unit, estimates the inter-tap sensitivity difference between the first tap and the second tap by using first to fourth detection signals, the first to fourth detection signals being obtained by detecting, at first to fourth phases with respect to the emitted light, reflected light produced by reflection of the emitted light from an object. The present technology can be applied to, for example, a distance measurement module that performs distance measurement using an indirect time-of-flight (ToF) method.

1. A signal processing apparatus comprising:

an estimation unit that, in a pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit, estimates a sensitivity difference between taps of the first tap and the second tap by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from an object.

2. The signal processing apparatus according to claim 1,

the estimation unit calculates an offset and a gain of the second tap with respect to the first tap as the sensitivity difference between taps.

3. The signal processing apparatus according to claim 2,

the estimation unit calculates an offset and a gain of the second tap with respect to the first tap under a condition that the first tap and the second tap are out of phase with each other by 180 degrees.

4. The signal processing apparatus of claim 2, further comprising:

an amplitude estimation unit that estimates amplitudes of the first to fourth detection signals,

wherein the estimation unit updates the offset and the gain by mixing the calculated offset and gain with a current offset and gain based on the estimated amplitudes.

5. The signal processing apparatus of claim 4, further comprising:

a motion amount estimation unit that estimates a motion amount of the object in the pixel,

wherein the estimation unit updates the offset and the gain by mixing the calculated offset and gain with a current offset and gain based on the estimated amplitudes and the estimated motion amount.

6. The signal processing apparatus of claim 1, further comprising:

a correction processing unit that performs correction processing for correcting the first detection signal and the second detection signal, which are the latest two detection signals among the first to fourth detection signals, by using a parameter based on the estimated sensitivity difference between taps.

7. The signal processing apparatus of claim 6, further comprising:

a 2-phase processing unit that calculates an I signal and a Q signal of a 2-phase method by using the first detection signal and the second detection signal after correction processing;

a 4-phase processing unit that calculates an I signal and a Q signal of a 4-phase method by using the first to fourth detection signals;

a mixing processing unit that mixes the I signal and the Q signal of the 2-phase method with the I signal and the Q signal of the 4-phase method, and calculates the mixed I signal and Q signal; and

a calculation unit that calculates distance information to the object based on the mixed I and Q signals.

8. The signal processing apparatus according to claim 7,

the mixing processing unit mixes the I signal and the Q signal of the 2-phase method with the I signal and the Q signal of the 4-phase method based on the magnitudes of the first to fourth detection signals and the amount of movement of the object in the pixel.

9. The signal processing apparatus according to claim 7,

the calculation unit calculates distance information to the object each time detection signals of two phases among the first to fourth detection signals are updated.

10. A signal processing method comprising, by a signal processing apparatus:

in a pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit, a sensitivity difference between taps of the first tap and the second tap is estimated by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from an object.

11. A distance measurement module comprising:

a light receiving unit in which pixels are two-dimensionally arranged, each pixel including a first tap that detects electric charges photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charges photoelectrically converted by the photoelectric conversion unit; and

a signal processing unit including an estimation unit that estimates, in the pixel, a sensitivity difference between taps of the first tap and the second tap by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from an object.

12. The distance measurement module of claim 11,

each of the pixels receives the reflected light obtained by emitting the emitted light at a plurality of frequencies, and

the estimation unit estimates the sensitivity difference between taps at each of the plurality of frequencies.

13. The distance measurement module of claim 11,

each of the pixels receives the reflected light obtained by emitting the emitted light for a plurality of exposure times, and

the estimation unit estimates the sensitivity difference between taps at each of the plurality of exposure times.

14. The distance measurement module of claim 11,

the light receiving unit is driven to cause a first pixel to receive the reflected light at a first phase and simultaneously cause a second pixel to receive the reflected light at a second phase, and next to cause the first pixel to receive the reflected light at a third phase and simultaneously cause the second pixel to receive the reflected light at a fourth phase, and

the estimation unit estimates a sensitivity difference between taps of the first tap and the second tap by using the first to fourth detection signals detected at the first to fourth phases.

Technical Field

The present technology relates to a signal processing device, a signal processing method, and a distance measurement module, and particularly relates to a signal processing device, a signal processing method, and a distance measurement module capable of improving distance measurement accuracy.

Background

In recent years, distance measurement modules that measure the distance to an object have become smaller and smaller due to advances in semiconductor technology. Therefore, for example, it has been realized to install a distance measuring module in a mobile terminal (e.g., a so-called smartphone), which is a small information processing apparatus equipped with a communication function.

Examples of the distance measurement method used in a distance measurement module include an indirect time-of-flight (ToF) method, a structured light method, and the like. In the indirect ToF method, light is emitted toward an object, light reflected from the surface of the object is detected, and the distance to the object is calculated based on a measurement value obtained by measuring the time of flight of the light. In the structured light method, pattern light is emitted toward the object, and the distance to the object is calculated based on an image obtained by capturing distortion of the pattern on the surface of the object.

For example, Patent Document 1 discloses a technique in which a distance measurement module that performs distance measurement by the indirect ToF method determines the movement of an object within the detection period, thereby measuring the distance accurately.

Reference list

Patent document

Patent Document 1: Japanese Patent Application Laid-Open No. 2017-150893

Disclosure of Invention

Problems to be solved by the invention

In the distance measurement module of the indirect ToF method, it is necessary to further improve the distance measurement accuracy.

The present disclosure is made in view of such a situation, and makes it possible to improve the distance measurement accuracy.

Solution to the problem

The signal processing device according to the first aspect of the present technology includes an estimation unit that, in a pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit, estimates a sensitivity difference between taps of the first tap and the second tap by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from a subject.

A signal processing method according to a second aspect of the present technology includes, by a signal processing apparatus: in a pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit, estimating a sensitivity difference between taps of the first tap and the second tap by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from an object.

A distance measuring module according to a third aspect of the present technology includes: a light receiving unit in which pixels are two-dimensionally arranged, each pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit; and a signal processing unit including an estimation unit that, in the pixel, estimates a sensitivity difference between taps of the first tap and the second tap by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from a subject.

According to the first to third aspects of the present technology, in a pixel including a first tap that detects electric charge photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charge photoelectrically converted by the photoelectric conversion unit, a sensitivity difference between taps of the first tap and the second tap is estimated by using first to fourth detection signals obtained by detecting, at first to fourth phases with respect to emitted light, reflected light generated by reflection of the emitted light from a subject.

The signal processing device and the distance measuring module may be separate devices or modules incorporated in another device.

Drawings

Fig. 1 is a block diagram showing a configuration example of one embodiment of a distance measurement module to which the present technology is applied.

Fig. 2 is a diagram for describing the operation of a pixel in the indirect ToF method.

Fig. 3 is a diagram for describing a detection method by 4 phases.

Fig. 4 is a diagram for describing a detection method by 4 phases.

Fig. 5 is a diagram for describing a method of calculating depth values by the 2-phase method and the 4-phase method.

Fig. 6 is a diagram for describing the driving of the light receiving unit of the distance measuring module and the output timing of the depth map.

Fig. 7 is a block diagram showing a detailed configuration of the signal processing unit.

Fig. 8 is a diagram for describing detection signals of four phases.

Fig. 9 is a diagram for describing a mixing ratio based on the shift amount and the amplitude.

Fig. 10 is a block diagram showing a detailed configuration example of the fixed pattern estimation unit.

Fig. 11 is a diagram for describing a mixing ratio updated based on coefficients of shift amounts and amplitudes.

Fig. 12 is a flowchart for describing a depth value calculation process performed by the signal processing unit on a pixel to be processed.

Fig. 13 is a diagram for describing a first modification and a second modification of driving.

Fig. 14 is a diagram for describing a first modification and a second modification of driving.

Fig. 15 is a diagram for describing a third modification of driving.

Fig. 16 is a block diagram showing a configuration example of an electronic apparatus to which the present technology is applied.

Fig. 17 is a block diagram showing a configuration example of one embodiment of a computer to which the present technology is applied.

Fig. 18 is a block diagram showing one example of the schematic configuration of the vehicle control system.

Fig. 19 is an explanatory diagram showing one example of the mounting positions of the vehicle exterior information detection unit and the image capturing unit.

Detailed Description

Modes for carrying out the present technology (hereinafter referred to as "embodiments") will be described below. Note that description will be made in the following order.

1. Configuration example of distance measuring module

2. Pixel operation of indirect ToF method

3. Output timing of depth map

4. Detailed configuration example of signal processing unit

5. Depth value calculation processing of signal processing unit

6. Modification of driving by distance measuring module

7. Configuration example of electronic apparatus

8. Configuration example of computer

9. Examples of application to moving objects

<1. Configuration example of distance measuring module >

Fig. 1 is a block diagram showing a configuration example of one embodiment of a distance measurement module to which the present technology is applied.

The distance measuring module 11 shown in fig. 1 is a distance measuring module that measures a distance by an indirect ToF method, and includes a light emitting unit 12, a light emission control unit 13, a light receiving unit 14, and a signal processing unit 15. The distance measurement module 11 emits light to the subject and receives light (reflected light) obtained by the subject reflecting light (emitted light), thereby measuring a depth value as distance information to the subject and outputting a depth map.

The light emitting unit 12 includes, for example, an infrared laser diode or the like as a light source and, under the control of the light emission control unit 13, emits light modulated at a timing according to the light emission control signal supplied from the light emission control unit 13, irradiating the object with the emitted light.

The light emission control unit 13 supplies a light emission control signal having a predetermined frequency (e.g., 20MHz, etc.) to the light emitting unit 12, thereby controlling light emission of the light emitting unit 12. Further, in order to drive the light receiving unit 14 according to the timing of light emission in the light emitting unit 12, the light emission control unit 13 also supplies a light emission control signal to the light receiving unit 14.

The light receiving unit 14 includes a pixel array unit 22 in which pixels 21, each of which generates electric charge according to the amount of received light and outputs a signal according to the electric charge, are two-dimensionally arranged in a matrix in the row direction and the column direction, and a drive control circuit 23 arranged in the peripheral area of the pixel array unit 22.

The light receiving unit 14 receives reflected light from the subject by using the pixel array unit 22 in which a plurality of pixels 21 are two-dimensionally arranged. Then, the light receiving unit 14 supplies pixel data including a detection signal according to the received light amount of the reflected light received by each pixel 21 of the pixel array unit 22 to the signal processing unit 15.

The signal processing unit 15 calculates a depth value as a distance from the distance measurement module 11 to the object for each pixel 21 of the pixel array unit 22 based on the pixel data supplied from the light receiving unit 14, and outputs the depth value to a subsequent control unit (e.g., the application processing unit 121, the operating system processing unit 122, and the like of fig. 16). Alternatively, the signal processing unit 15 may generate a depth map in which a depth value is stored as a pixel value of each pixel 21 of the pixel array unit 22, and output the depth map to a subsequent stage. Note that the detailed configuration of the signal processing unit 15 will be described later with reference to fig. 7.

The drive control circuit 23 outputs control signals (for example, a distribution signal DIMIX, a selection signal ADDRESS DECODE, a reset signal RST, and the like to be described later) for controlling the driving of the pixels 21, for example, based on the light emission control signal supplied from the light emission control unit 13.

The pixel 21 includes a photodiode 31, and a first tap 32A and a second tap 32B that detect electric charges photoelectrically converted by the photodiode 31. In the pixel 21, the electric charge generated by one photodiode 31 is distributed to the first tap 32A or the second tap 32B. Then, of the electric charges generated by the photodiode 31, the electric charge assigned to the first tap 32A is output as a detection signal a from the signal line 33A, and the electric charge assigned to the second tap 32B is output as a detection signal B from the signal line 33B.

The first tap 32A includes a transfer transistor 41A, a floating diffusion (FD) unit 42A, a selection transistor 43A, and a reset transistor 44A. Similarly, the second tap 32B includes a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.

<2. Pixel operation of indirect ToF method >

The operation of the pixel 21 of the indirect ToF method will be described with reference to fig. 2.

As shown in fig. 2, emitted light modulated to repeat on/off of emission at an emission time T (one period of 2T) is output from the light emitting unit 12, and the photodiode 31 receives the reflected light with a delay time ΔT according to the distance to the object. Further, the distribution signal DIMIX_A controls on/off of the transfer transistor 41A, and the distribution signal DIMIX_B controls on/off of the transfer transistor 41B. The distribution signal DIMIX_A is a signal having the same phase as the emitted light, and the distribution signal DIMIX_B has a phase obtained by inverting the distribution signal DIMIX_A.

Accordingly, the electric charges generated when the photodiode 31 receives the reflected light are transferred to the FD unit 42A when the transfer transistor 41A is turned on in response to the distribution signal DIMIX_A, and transferred to the FD unit 42B when the transfer transistor 41B is turned on in response to the distribution signal DIMIX_B. With this configuration, in a predetermined period of time in which emission of emitted light of the emission time T is periodically performed, the electric charges transferred via the transfer transistor 41A are sequentially accumulated in the FD unit 42A, and the electric charges transferred via the transfer transistor 41B are sequentially accumulated in the FD unit 42B.

Then, after the period of accumulating electric charges ends, when the selection transistor 43A is turned on in response to the selection signal ADDRESS DECODE_A, the electric charges accumulated in the FD unit 42A are read via the signal line 33A, and the detection signal A according to the amount of electric charge is output from the light receiving unit 14. Similarly, when the selection transistor 43B is turned on in response to the selection signal ADDRESS DECODE_B, the electric charge accumulated in the FD unit 42B is read via the signal line 33B, and the detection signal B according to the amount of electric charge is output from the light receiving unit 14. Further, when the reset transistor 44A is turned on in response to the reset signal RST_A, the electric charges accumulated in the FD unit 42A are released, and when the reset transistor 44B is turned on in response to the reset signal RST_B, the electric charges accumulated in the FD unit 42B are released.

In this way, the pixel 21 distributes the electric charge generated by the reflected light received by the photodiode 31 to the first tap 32A or the second tap 32B according to the delay time ΔT, and outputs the detection signal A and the detection signal B. Then, the delay time ΔT corresponds to a time during which the light emitted by the light emitting unit 12 flies to the object, is reflected by the object, and flies to the light receiving unit 14 again, that is, corresponds to a distance to the object. Therefore, the distance measurement module 11 can obtain the distance to the subject (depth value) from the delay time ΔT based on the detection signal A and the detection signal B.

However, in the pixel array unit 22, the detection signal A and the detection signal B may be affected differently in each pixel 21 due to variations in the characteristics (sensitivity differences) of the elements of the respective pixels 21, such as the photodiode 31 and the transfer transistors 41. Therefore, the distance measuring module 11 of the indirect ToF method adopts a method of removing the sensitivity difference between taps, which is fixed pattern noise of each pixel, and improving the signal-to-noise ratio by acquiring the detection signal A and the detection signal B obtained by receiving the reflected light in the same pixel 21 while changing the phase.

As a method of receiving reflected light by changing the phase and calculating the depth value, for example, a detection method by 2 phases (2-phase method) and a detection method by 4 phases (4-phase method) will be described.

As shown in fig. 3, the light receiving unit 14 receives the reflected light at reception timings having phase shifts of 0 °, 90 °, 180 °, and 270 ° with respect to the transmission timing of the emitted light. More specifically, the light receiving unit 14 receives the reflected light by changing the phase in a time-division manner, for example, receives light in a phase set to 0 ° with respect to the emission timing of the emitted light in one frame period, receives light in a phase set to 90 ° in the next frame period, receives light in a phase set to 180 ° in the next frame period, and receives light in a phase set to 270 ° in the next frame period.

Fig. 4 is a diagram showing the light reception period (exposure period) of the first tap 32A of the pixel 21 in each of the phases of 0 °, 90 °, 180 °, and 270 ° in order to make the phase difference easy to understand.

As shown in fig. 4, in the first tap 32A, a detection signal A obtained by receiving light in the same phase (phase 0 °) as the emitted light is referred to as a detection signal A0, a detection signal A obtained by receiving light in a phase (phase 90 °) shifted by 90 degrees with respect to the emitted light is referred to as a detection signal A1, a detection signal A obtained by receiving light in a phase (phase 180 °) shifted by 180 degrees with respect to the emitted light is referred to as a detection signal A2, and a detection signal A obtained by receiving light in a phase (phase 270 °) shifted by 270 degrees with respect to the emitted light is referred to as a detection signal A3.

Further, although illustration is omitted, in the second tap 32B, a detection signal B obtained by receiving light in the same phase (phase 0 °) as that of the emitted light is referred to as a detection signal B0, a detection signal B obtained by receiving light in a phase (phase 90 °) shifted by 90 degrees with respect to the emitted light is referred to as a detection signal B1, a detection signal B obtained by receiving light in a phase (phase 180 °) shifted by 180 degrees with respect to the emitted light is referred to as a detection signal B2, and a detection signal B obtained by receiving light in a phase (phase 270 °) shifted by 270 degrees with respect to the emitted light is referred to as a detection signal B3.

Fig. 5 is a diagram for describing a method of calculating the depth value d by the 2-phase method and the 4-phase method.

In the indirect ToF method, the depth value d can be obtained by the following formula (1).

[Mathematical formula 1]

d = c·ΔT/2 = c·φ/(4π·f)··········(1)

In equation (1), c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light. Further, φ in formula (1) represents the phase shift amount [rad] of the reflected light, and is represented by the following formula (2).

[Mathematical formula 2]

φ = arctan(Q/I)··········(2)

By the 4-phase method, by using the detection signals A0 to A3 and the detection signals B0 to B3 obtained by setting the phases to 0 °, 90 °, 180 °, and 270 °, I and Q of expression (2) are calculated by the following expression (3). I and Q are signals obtained by assuming that the luminance variation of the emitted light is a cos wave and converting the phase of the cos wave from polar coordinates to the rectangular coordinate system (IQ plane).

I=c0-c180=(A0-B0)-(A2-B2)

Q=c90-c270=(A1-B1)-(A3-B3)··········(3)

With the 4-phase method, by taking differences between detection signals of opposite phases in the same pixel, for example "A0 - A2" and "A1 - A3" in formula (3), it is possible to eliminate the characteristic variation between taps existing in each pixel, that is, fixed pattern noise.
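As a concrete illustration of formulas (1) to (3), the following Python sketch computes a depth value from the eight detection signals of the 4-phase method. The function name, the constant C, and the use of atan2 (instead of a plain arctangent) to keep the phase in [0, 2π) are choices of this sketch and are not taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def depth_4phase(A, B, f_mod):
    """4-phase method: A = (A0, A1, A2, A3), B = (B0, B1, B2, B3),
    f_mod = modulation frequency in Hz. Returns the depth value d in meters."""
    # Formula (3): I and Q from the detection signals of all four phases
    I = (A[0] - B[0]) - (A[2] - B[2])
    Q = (A[1] - B[1]) - (A[3] - B[3])
    # Formula (2): phase shift of the reflected light; atan2 resolves the
    # quadrant and the result is wrapped to [0, 2*pi)
    phi = math.atan2(Q, I) % (2.0 * math.pi)
    # Formula (1): d = c * delta_T / 2 = c * phi / (4 * pi * f)
    return C * phi / (4.0 * math.pi * f_mod)
```

For example, with f_mod = 20 MHz, a phase shift of π corresponds to c/(4·20 MHz), roughly 3.7 m.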

Meanwhile, with the 2-phase method, the depth value d to the subject can be obtained by using only two phases orthogonal to each other among the detection signals A0 to A3 and the detection signals B0 to B3 obtained by setting the phases to 0 °, 90 °, 180 °, and 270 °. For example, in the case of using the detection signals A0 and B0 at the phase 0 ° and the detection signals A1 and B1 at the phase 90 °, I and Q of expression (2) are given by the following expression (4).

I=c0-c180=(A0-B0)

Q=c90-c270=(A1-B1)··········(4)

For example, in the case of using the detection signals A2 and B2 at the phase 180 ° and the detection signals A3 and B3 at the phase 270 °, I and Q of expression (2) are given by expression (5) below.

I=c0-c180=-(A2-B2)

Q=c90-c270=-(A3-B3)··········(5)

The 2-phase method cannot eliminate the characteristic variation between taps existing in each pixel, but the depth value d to the subject can be obtained from only the detection signals of the two phases, and thus the distance can be measured at a frame rate equal to twice the frame rate of the 4-phase method.
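The 2-phase variants of formulas (4) and (5) reduce to two small helpers; again, the function names are illustrative only.

```python
def iq_2phase_0_90(A0, B0, A1, B1):
    # Formula (4): 2-phase I/Q from the phase 0 deg and phase 90 deg signals
    return (A0 - B0), (A1 - B1)

def iq_2phase_180_270(A2, B2, A3, B3):
    # Formula (5): 2-phase I/Q from the phase 180 deg and phase 270 deg
    # signals (signs inverted relative to formula (4))
    return -(A2 - B2), -(A3 - B3)
```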

The signal processing unit 15 of the distance measurement module 11 performs signal processing to appropriately select or mix the I signal and the Q signal corresponding to the delay time ΔT calculated by the 4-phase method and the I signal and the Q signal corresponding to the delay time ΔT calculated by the 2-phase method according to the movement of the object or the like, calculates the depth value d by using the result, and outputs the depth map.

<3. Output timing of depth map >

Next, the output timing of the depth map generated by the distance measurement module 11 will be described.

Fig. 6 is a diagram showing the driving of the light receiving unit 14 of the distance measuring module 11 and the output timing of the depth map.

As described above, the light receiving unit 14 of the distance measurement module 11 is driven to receive the reflected light by changing the phase in the order of phase 0 °, phase 90 °, phase 180 °, and phase 270 ° in a time-division manner, and the light receiving unit 14 is continuously driven with a set of two phases for calculating the depth value d by the 2-phase method.

That is, as shown in fig. 6, the light receiving unit 14 receives light continuously at the phase 0 ° and the phase 90 ° from time t1. After the standby period from time t2 to time t3, the light receiving unit 14 receives light continuously at the phase 180 ° and the phase 270 ° from time t3. Next, after a predetermined standby period from time t4 to time t5, the light receiving unit 14 receives light continuously at the phase 180 ° and the phase 270 ° from time t5. After the standby period from time t6 to time t7, the light receiving unit 14 receives light continuously at the phase 0 ° and the phase 90 ° from time t7.

The light receiving operation of each phase includes: a reset operation of turning on the reset transistor 44A and the reset transistor 44B to reset the charges, an integration operation of accumulating charges in the FD unit 42A and the FD unit 42B, and a readout operation of reading the charges accumulated in the FD unit 42A and the FD unit 42B.

The signal processing unit 15 calculates depth values by using pixel data of four phases, and outputs a depth map in units of two phases.

Specifically, the signal processing unit 15 generates and outputs the Depth map Depth #1 at time t4 by using the pixel data of the four phases from time t1 to time t4. At the next time t6, the signal processing unit 15 generates and outputs the Depth map Depth #2 by using the pixel data of the four phases from time t3 to time t6. At the next time t8, the signal processing unit 15 generates and outputs the Depth map Depth #3 by using the pixel data of the four phases from time t5 to time t8.

By driving in this way continuously with two phases as a set, in the case where the object is moving, the influence of the movement of the object can be suppressed in the depth value calculated by the 2-phase method.

<4. Detailed configuration example of signal processing unit >

Fig. 7 is a block diagram showing a detailed configuration of the signal processing unit 15.

The signal processing unit 15 includes a correction processing unit 61, a 2-phase processing unit 62, a 4-phase processing unit 63, a motion estimation unit 64, an amplitude estimation unit 65, a fixed pattern estimation unit 66, a mixing processing unit 67, and a phase calculation unit 68.

The detection signals A0 to A3 and the detection signals B0 to B3 from each pixel of the pixel array unit 22 of the light receiving unit 14 are sequentially supplied to the signal processing unit 15. The detection signals A0 to A3 are detection signals A obtained by sequentially setting the phases to 0 °, 90 °, 180 °, and 270 ° in the first tap 32A. The detection signals B0 to B3 are detection signals B obtained by sequentially setting the phases to 0 °, 90 °, 180 °, and 270 ° in the second tap 32B.

As described with reference to fig. 6, the signal processing unit 15 generates and outputs a depth map by using the latest detection signals A0 to A3 and B0 to B3 of the phase 0 °, the phase 90 °, the phase 180 °, and the phase 270 ° for each pixel. The combination of the detection signals A0 to A3 and the detection signals B0 to B3 includes a case where the detection signals of the two phases of the phase 180 ° and the phase 270 ° are the latest detection signals as shown in A of fig. 8, and a case where the detection signals of the two phases of the phase 0 ° and the phase 90 ° are the latest detection signals as shown in B of fig. 8.

In the following description, for the sake of simple description, each process of the signal processing unit 15 will be described by taking the following case as an example: as for the combination of the detection signals A0 to A3 and the detection signals B0 to B3 supplied from the light receiving unit 14 to the signal processing unit 15, the detection signals of the two phases of 180 ° and 270 ° shown in A of fig. 8 are the latest detection signals.

Note that in the case where the detection signals of the two phases of the phase 0 ° and the phase 90 ° are the latest detection signals as shown in B of fig. 8, similar processing can be performed by regarding the detection signals of the phase 180 °, the phase 270 °, the phase 0 °, and the phase 90 ° as the detection signals A0 to A3 and the detection signals B0 to B3 and inverting the signs.

Returning to the description of fig. 7, the signal processing unit 15 sequentially performs similar processing on the detection signals A0 to A3 and the detection signals B0 to B3 of each pixel of the pixel array unit 22 supplied from the light receiving unit 14, with each pixel as a pixel to be processed. Therefore, hereinafter, each process of the signal processing unit 15 will be described as a process of one pixel as a pixel to be processed.

The detection signals A0 to A3 and the detection signals B0 to B3 of the predetermined pixel 21 as the pixel to be processed, which are supplied from the light receiving unit 14 to the signal processing unit 15, are supplied to the 4-phase processing unit 63, the movement estimating unit 64, the amplitude estimating unit 65, and the fixed pattern estimating unit 66. Further, the latest detection signals A2, A3, B2, and B3 of the two phases of 180 ° and 270 ° are supplied to the correction processing unit 61.

The correction processing unit 61 performs processing for correcting the characteristic variation (sensitivity difference) between the taps, that is, between the detection signal A of the first tap 32A and the detection signal B of the second tap 32B of the pixel to be processed, by using the correction parameters supplied from the fixed pattern estimation unit 66.

In the present embodiment, the detection signal B of the second tap 32B of the pixel to be processed is matched to the detection signal A of the first tap 32A, and the correction processing unit 61 performs the following correction processing on each of the detection signals A (A2, A3) and B (B2, B3) of the phase 180 ° and the phase 270 °.

A’=A

B’=c0+c1·B··········(6)

Here, c0 and c1 are correction parameters supplied from the fixed pattern estimation unit 66, c0 denotes the offset of the detection signal B with respect to the detection signal A, and c1 denotes the gain of the detection signal B with respect to the detection signal A.

The detection signal A' and the detection signal B' in equation (6) represent the detection signals after the correction processing. Note that, in the correction processing, the detection signal A of the first tap 32A may instead be matched to the detection signal B of the second tap 32B of the pixel to be processed, or both detection signals may be matched to the midpoint between the detection signal A and the detection signal B.
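As a rough sketch, the correction of formula (6) amounts to a single affine transform of the tap-B signal, assuming the offset c0 and gain c1 have already been estimated; the helper name is hypothetical.

```python
def correct_tap_b(B, c0, c1):
    """Formula (6): align a tap-B detection signal to tap A using the offset
    c0 and gain c1 estimated by the fixed pattern estimation unit 66."""
    return c0 + c1 * B

# Applied to the latest two phases (tap A is left unchanged, A' = A):
# A2p, B2p = A2, correct_tap_b(B2, c0, c1)
# A3p, B3p = A3, correct_tap_b(B3, c0, c1)
```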

The correction processing unit 61 supplies the detection signal A2' and the detection signal B2' of the phase 180 ° and the detection signal A3' and the detection signal B3' of the phase 270 ° after the correction processing to the 2-phase processing unit 62.

The 2-phase processing unit 62 calculates the I signal and the Q signal of the 2-phase method by equation (5) by using the detection signal A2' and the detection signal B2' of the phase 180 ° and the detection signal A3' and the detection signal B3' of the phase 270 ° from the correction processing unit 61.

Note that, hereinafter, in order to distinguish from the I signal and the Q signal of the 4-phase method calculated by the 4-phase processing unit 63, the I signal and the Q signal of the 2-phase method are described as an I2 signal and a Q2 signal.

The 2-phase processing unit 62 supplies the I2 signal and the Q2 signal of the 2-phase method calculated by equation (5) to the mixing processing unit 67.

The 4-phase processing unit 63 calculates the I signal and the Q signal of the 4-phase method by equation (3) by using the detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed, which are supplied from the light receiving unit 14. Hereinafter, in order to distinguish them from the I2 signal and the Q2 signal of the 2-phase method, the I signal and the Q signal of the 4-phase method are described as an I4 signal and a Q4 signal.

The 4-phase processing unit 63 supplies the I4 signal and the Q4 signal of the 4-phase method calculated by the equation (3) to the mixing processing unit 67.

The movement estimating unit 64 estimates (calculates) the movement amount diff of the object between the group of the phase 0 ° and the phase 90 ° and the group of the phase 180 ° and the phase 270 ° by using the detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed.

The movement estimation unit 64 may adopt any one of the following diff0 to diff2 as the movement amount diff of the object between the groups.

diff0=|(A0+B0+A1+B1)-(A2+B2+A3+B3)|

diff1=|(A0+B0)-(A2+B2)|+|(A1+B1)-(A3+B3)|·····(7)

diff2=sqrt(|(A0+B0)-(A2+B2)|^2+|(A1+B1)-(A3+B3)|^2)

diff0 is an expression for calculating the movement amount from the difference in the sum of the I signal and the Q signal between the groups. diff1 is an expression for calculating the movement amount from the difference in the I signal between the groups. diff2 is an expression for calculating the movement amount from the distance between the groups on the IQ plane. Which of the movement amounts diff0 to diff2 is employed may be fixedly determined or may be selected (switched) by a setting signal or the like.
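A compact sketch of formula (7); expressing the choice among diff0 to diff2 as a mode argument is only an illustrative assumption of this sketch (the patent says the choice may be fixed or switched by a setting signal).

```python
import math

def movement_amount(A, B, mode=0):
    """Formula (7): movement amount of the object between the (0 deg, 90 deg)
    group and the (180 deg, 270 deg) group. A = (A0..A3), B = (B0..B3)."""
    d0 = (A[0] + B[0]) - (A[2] + B[2])   # phase 0 group term vs phase 180 group term
    d1 = (A[1] + B[1]) - (A[3] + B[3])   # phase 90 group term vs phase 270 group term
    if mode == 0:
        return abs(d0 + d1)              # diff0
    if mode == 1:
        return abs(d0) + abs(d1)         # diff1
    return math.hypot(d0, d1)            # diff2
```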

The movement estimation unit 64 supplies the estimated movement amount diff of the object to the fixed pattern estimation unit 66 and the mixing processing unit 67.

The amplitude estimation unit 65 estimates (calculates) the amplitude amp of the detection signal of the pixel to be processed supplied from the light receiving unit 14. The amplitude here denotes the difference in the detection signal between the two phases caused by the modulated emitted light. A large amplitude amp indicates that the emitted light is sufficiently reflected from the object and incident on the pixel to be processed. A small amplitude amp represents large noise.

The amplitude estimation unit 65 may adopt any one of the following amps 0 to amp3 as the amplitude amp of the detection signal.

amp0=|(A2-B2)-(A3-B3)|

amp1=sqrt(|(A2-B2)|^2+|(A3-B3)|^2)

amp2=|(A0-B0)-(A2-B2)|+|(A1-B1)-(A3-B3)|·····(8)

amp3=sqrt(|(A0-B0)-(A2-B2)|^2+|(A1-B1)-(A3-B3)|^2)

amp0 and amp1 are equations for calculating the amplitude by using only the latest detection signals of two phases (i.e., phase 180 ° and phase 270 °). The amp2 and amp3 are expressions for calculating the amplitude by using the latest detection signal of four phases (i.e., phase 0 °, phase 90 °, phase 180 °, and phase 270 °).
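Formula (8) can be sketched in the same style; amp0 is written exactly as printed above, and the mode argument is again only an illustrative way to select among the four variants.

```python
import math

def amplitude(A, B, mode=3):
    """Formula (8): amplitude estimate of the detection signals.
    amp0/amp1 use only the latest two phases (180 deg, 270 deg);
    amp2/amp3 use all four phases. A = (A0..A3), B = (B0..B3)."""
    i2, q2 = A[2] - B[2], A[3] - B[3]
    i4 = (A[0] - B[0]) - (A[2] - B[2])
    q4 = (A[1] - B[1]) - (A[3] - B[3])
    if mode == 0:
        return abs(i2 - q2)          # amp0, as printed in formula (8)
    if mode == 1:
        return math.hypot(i2, q2)    # amp1
    if mode == 2:
        return abs(i4) + abs(q4)     # amp2
    return math.hypot(i4, q4)        # amp3
```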

The amplitude estimation unit 65 supplies the estimated amplitude amp of the detection signal to the fixed pattern estimation unit 66 and the mixing processing unit 67.

The fixed pattern estimation unit 66 estimates (calculates) an offset c0 and a gain c1 as correction parameters for correcting characteristic variations (sensitivity differences) between taps by using the detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed, the movement amount diff of the object supplied from the movement estimation unit 64, and the amplitude amp supplied from the amplitude estimation unit 65.

The phase difference between the light reception periods of the first tap 32A and the second tap 32B of the predetermined pixel 21 as the pixel to be processed is 180 °. Therefore, under ideal conditions, the following relationships are established between the offset c0 and the gain c1, and the detection signals A0 to A3 and the detection signals B0 to B3.

B0=c0+c1·A2

B1=c0+c1·A3

B2=c0+c1·A0·····(9)

B3=c0+c1·A1

When the matrices A, x, and y are defined as follows,

[Mathematical formula 3]

y = [B0, B1, B2, B3]^T, A = [[1, A2], [1, A3], [1, A0], [1, A1]], x = [c0, c1]^T

Equation (9) can be expressed as y = Ax, and thus the matrix x, that is, the offset c0 and the gain c1, can be calculated by the least squares method by the following equation (10).

[Mathematical formula 4]

x = (A^T·A)^(-1)·A^T·y···(10)

More strictly, the fixed pattern estimation unit 66 maintains the current offset c0 and gain c1 or updates to the newly calculated offset c0 and gain c1 according to the movement amount diff of the object supplied from the movement estimation unit 64 and the amplitude amp supplied from the amplitude estimation unit 65. Details will be described later with reference to fig. 10.
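Formulas (9) and (10) are an ordinary linear least-squares fit; a sketch using NumPy follows (the design matrix is named M here only to avoid clashing with the detection signal A, and the function name is hypothetical).

```python
import numpy as np

def estimate_offset_gain(A, B):
    """Formulas (9)-(10): least-squares fit of the offset c0 and gain c1 of
    tap B with respect to tap A, using the 180-degree relation between the
    taps. A = (A0..A3), B = (B0..B3)."""
    # y = M x with x = [c0, c1]^T (M is the matrix called "A" in the text)
    M = np.array([[1.0, A[2]],
                  [1.0, A[3]],
                  [1.0, A[0]],
                  [1.0, A[1]]])
    y = np.array([B[0], B[1], B[2], B[3]], dtype=float)
    x, *_ = np.linalg.lstsq(M, y, rcond=None)   # equivalent to (M^T M)^-1 M^T y
    c0, c1 = x
    return c0, c1
```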

The mixing processing unit 67 mixes the I2 signal and the Q2 signal of the 2-phase method supplied from the 2-phase processing unit 62 and the I4 signal and the Q4 signal of the 4-phase method supplied from the 4-phase processing unit 63 according to the movement amount diff and the amplitude amp, calculates the mixed I signal and Q signal, and supplies the mixed I signal and Q signal to the phase calculating unit 68.

Specifically, the mixing processing unit 67 calculates the mixing ratio α_diff based on the movement amount diff of the object supplied from the movement estimating unit 64 by the following expression (11).

[Mathematical formula 5]

α_diff = 0                                  (diff < dth0)
α_diff = (diff - dth0)/(dth1 - dth0)        (dth0 ≤ diff < dth1)··········(11)
α_diff = 1                                  (dth1 ≤ diff)

According to equation (11), as shown in A of fig. 9, in the case where the movement amount diff supplied from the movement estimating unit 64 is smaller than the first threshold value dth0, the mixing ratio α_diff is set to 0. In the case where the movement amount diff is equal to or larger than the second threshold value dth1, the mixing ratio α_diff is set to 1. In the case where the movement amount diff is equal to or larger than the first threshold value dth0 and smaller than the second threshold value dth1, the mixing ratio α_diff is linearly determined in the range of 0 < α_diff < 1.

Further, the mixing processing unit 67 calculates the mixing ratio α_amp based on the amplitude amp supplied from the amplitude estimation unit 65 by the following expression (12).

[Mathematical formula 6]

α_amp = 0                                   (amp < ath0)
α_amp = (amp - ath0)/(ath1 - ath0)          (ath0 ≤ amp < ath1)··········(12)
α_amp = 1                                   (ath1 ≤ amp)

According to equation (12), as shown in B of fig. 9, in the case where the amplitude amp supplied from the amplitude estimation unit 65 is smaller than the first threshold ath0, the mixing ratio α_amp is set to 0. In the case where the amplitude amp is equal to or larger than the second threshold ath1, the mixing ratio α_amp is set to 1. In the case where the amplitude amp is equal to or greater than the first threshold ath0 and less than the second threshold ath1, the mixing ratio α_amp is linearly determined in the range of 0 < α_amp < 1.

Then, the mixing processing unit 67 calculates the final mixing ratio α from the mixing ratio α_diff based on the movement amount diff and the mixing ratio α_amp based on the amplitude amp by either of the following expressions (12A) or (12B).

α=min(α_diff,α_amp)··········(12A)

α=β·α_diff+(1-β)·α_amp··········(12B)

Note that β in equation (12B) is a mixing coefficient for mixing the mixing ratio α_diff and the mixing ratio α_amp, and is set in advance, for example.

The mixing processing unit 67 calculates an I signal and a Q signal obtained by mixing the I2 signal and the Q2 signal of the 2-phase method and the I4 signal and the Q4 signal of the 4-phase method by equation (13) using the calculated final mixing ratio α, and supplies the signals to the phase calculating unit 68.

I=α·I2+(1-α)·I4

Q=α·Q2+(1-α)·Q4··········(13)

In the case where the movement amount diff is large, the mixing processing unit 67 performs mixing to increase the ratio of the I2 signal and the Q2 signal of the 2-phase method. In the case where the movement amount diff is small, the mixing processing unit 67 performs mixing to increase the ratio of the I4 signal and the Q4 signal of the 4-phase method, which have less noise. Further, in the case where the amplitude amp is small (noisy), the mixing processing unit 67 performs mixing to increase the ratio of the I4 signal and the Q4 signal of the 4-phase method, thereby improving the signal-to-noise ratio.
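Formulas (11) to (13) can be summarized by a single piecewise-linear ramp applied to diff and amp. The sketch below uses the min() form of formula (12A) and treats the thresholds as plain arguments; the function names are assumptions of this sketch.

```python
def ramp(x, lo, hi):
    """Piecewise-linear ramp shared by formulas (11), (12), (14) and (15):
    0 below lo, 1 at or above hi, linear in between."""
    if x < lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def mix_iq(i2, q2, i4, q4, diff, amp, dth0, dth1, ath0, ath1):
    """Formulas (11)-(13): blend the 2-phase and 4-phase I/Q signals."""
    alpha_diff = ramp(diff, dth0, dth1)   # formula (11)
    alpha_amp = ramp(amp, ath0, ath1)     # formula (12)
    alpha = min(alpha_diff, alpha_amp)    # formula (12A); formula (12B) would
                                          # instead blend the two ratios with beta
    i = alpha * i2 + (1.0 - alpha) * i4   # formula (13)
    q = alpha * q2 + (1.0 - alpha) * q4
    return i, q
```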

The phase calculation unit 68 of fig. 7 calculates the depth value d, that is, the distance information to the object, by the above equations (1) and (2) using the I signal and the Q signal supplied from the mixing processing unit 67. As described with reference to fig. 6, each time the detection signal A and the detection signal B of two phases are updated, the phase calculation unit 68 calculates and outputs the depth value d (depth map) by using the latest detection signals A and B of the four phases.

Fig. 10 is a block diagram showing a detailed configuration example of the fixed pattern estimation unit 66.

The fixed pattern estimation unit 66 includes a coefficient calculation unit 81, a coefficient update unit 82, and a coefficient storage unit 83.

The detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed from the light receiving unit 14 are supplied to the coefficient calculating unit 81. The movement amount diff of the object from the movement estimating unit 64 and the amplitude amp of the detection signal from the amplitude estimating unit 65 are supplied to the coefficient updating unit 82.

The coefficient calculation unit 81 calculates the matrix x, i.e., the offset c0 and the gain c1, by the above equation (10). The coefficient calculation unit 81 supplies the calculated offset c0 and gain c1 as a new offset next_c0 and a new gain next_c1 to the coefficient update unit 82 as update candidates.

The coefficient updating unit 82 calculates the mixing ratio u_diff based on the movement amount diff of the object supplied from the movement estimating unit 64 by the following expression (14).

[Mathematical formula 7]

u_diff = 1                                  (diff < uth0)
u_diff = (uth1 - diff)/(uth1 - uth0)        (uth0 ≤ diff < uth1)··········(14)
u_diff = 0                                  (uth1 ≤ diff)

According to equation (14), as shown in A of fig. 11, in the case where the movement amount diff supplied from the movement estimating unit 64 is smaller than the first threshold uth0, the mixing ratio u_diff is set to 1. In the case where the movement amount diff is equal to or larger than the second threshold uth1, the mixing ratio u_diff is set to 0. In the case where the movement amount diff is equal to or larger than the first threshold uth0 and smaller than the second threshold uth1, the mixing ratio u_diff is linearly determined in the range of 0 < u_diff < 1.

Further, the coefficient update unit 82 calculates the mixing ratio u_amp based on the amplitude amp supplied from the amplitude estimation unit 65 by the following expression (15).

[Mathematical formula 8]

u_amp = 0                                   (amp < vth0)
u_amp = (amp - vth0)/(vth1 - vth0)          (vth0 ≤ amp < vth1)··········(15)
u_amp = 1                                   (vth1 ≤ amp)

According to equation (15), as shown in B of fig. 11, in the case where the amplitude amp supplied from the amplitude estimation unit 65 is smaller than the first threshold vth0, the mixing ratio u_amp is set to 0. In the case where the amplitude amp is equal to or larger than the second threshold vth1, the mixing ratio u_amp is set to 1. In the case where the amplitude amp is equal to or greater than the first threshold vth0 and less than the second threshold vth1, the mixing ratio u_amp is linearly determined in the range of 0 < u_amp < 1.

Then, the coefficient updating unit 82 calculates the final mixing ratio u by the following equation (16) from the mixing ratio u_diff based on the movement amount diff and the mixing ratio u_amp based on the amplitude amp.

u=min(u_diff,u_amp)··········(16)

Using the calculated final mixing ratio u, the coefficient updating unit 82 mixes the new offset next_c0 and the new gain next_c1 from the coefficient calculating unit 81 with the current offset prev_c0 and gain prev_c1 from the coefficient storing unit 83, and calculates the updated offset c0 and gain c1 by the following equation (17).

c0=u·next_c0+(1-u)·prev_c0

c1=u·next_c1+(1-u)·prev_c1··········(17)

The coefficient updating unit 82 supplies the calculated updated offset c0 and gain c1 to the correction processing unit 61 (fig. 7) and stores the offset c0 and gain c1 in the coefficient storage unit 83.

The coefficient storage unit 83 stores the offset c0 and the gain c1 supplied from the coefficient update unit 82. Then, at the timing at which the next new offset next_c0 and new gain next_c1 are supplied from the coefficient calculation unit 81 to the coefficient update unit 82, the offset c0 and the gain c1 stored in the coefficient storage unit 83 are supplied to the coefficient update unit 82 as the current offset prev_c0 and gain prev_c1 before being updated.

In the case where the amplitude amp is sufficiently large and the movement amount diff is small, the new offset next_c0 and the new gain next_c1 calculated by the coefficient calculation unit 81 can be calculated with the highest accuracy. In this case, the coefficient updating unit 82 performs the update by increasing the mixing ratio of the new offset next_c0 and the new gain next_c1. In the case where the amplitude amp is small or the movement amount diff is large, the coefficient update unit 82 sets the mixing ratio u so as to increase the contribution of the current offset prev_c0 and gain prev_c1, that is, to substantially maintain them, and calculates the updated offset c0 and gain c1.

Note that, in the above example, the coefficient updating unit 82 updates the offset c0 and the gain c1 by using both the movement amount diff supplied from the movement estimating unit 64 and the amplitude amp supplied from the amplitude estimating unit 65, but it is also possible to update the offset c0 and the gain c1 by using only one of the movement amount diff or the amplitude amp. In this case, the mixing ratio u of equation (17) is replaced with the mixing ratio u_diff based on the movement amount diff or the mixing ratio u_amp based on the amplitude amp.
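The update of formulas (14) to (17) follows the same pattern as the mixing ratios above, with the ramp inverted for the movement amount. The sketch below is self-contained and uses the same piecewise-linear helper; all names are assumptions of this sketch.

```python
def update_offset_gain(next_c0, next_c1, prev_c0, prev_c1,
                       diff, amp, uth0, uth1, vth0, vth1):
    """Formulas (14)-(17): blend the newly estimated offset/gain with the
    stored values according to the movement amount diff and amplitude amp."""
    def ramp(x, lo, hi):  # same piecewise-linear helper as in the mixing sketch
        return 0.0 if x < lo else 1.0 if x >= hi else (x - lo) / (hi - lo)

    u_diff = 1.0 - ramp(diff, uth0, uth1)   # formula (14): trust drops as motion grows
    u_amp = ramp(amp, vth0, vth1)           # formula (15): trust grows with amplitude
    u = min(u_diff, u_amp)                  # formula (16)
    c0 = u * next_c0 + (1.0 - u) * prev_c0  # formula (17)
    c1 = u * next_c1 + (1.0 - u) * prev_c1
    return c0, c1
```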

<5. Depth value calculation processing by signal processing unit >

Next, with reference to the flowchart of fig. 12, a depth value calculation process for calculating a depth value of a pixel to be processed by the signal processing unit 15 will be described. For example, the processing is started when the detection signals A0 to A3 and the detection signals B0 to B3 of a predetermined pixel 21 in the pixel array unit 22 as the pixel to be processed are supplied.

First, in step S1, the 4-phase processing unit 63 calculates the I4 signal and the Q4 signal of the 4-phase method by equation (3) by using the detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed, which are supplied from the light receiving unit 14. The calculated 4-phase I4 signal and Q4 signal are supplied to the mixing processing unit 67.

In step S2, the movement estimation unit 64 estimates the movement amount diff of the object between the group of the phase 0 ° and the phase 90 ° and the group of the phase 180 ° and the phase 270 ° by using the detection signals A0 to A3 and the detection signals B0 to B3 of the pixel to be processed. For example, one of diff0 to diff2 of expression (7) is calculated as the movement amount diff. The movement estimation unit 64 supplies the estimated movement amount diff of the object to the fixed pattern estimation unit 66 and the mixing processing unit 67.

In step S3, the amplitude estimation unit 65 estimates the amplitude amp of the detection signal of the pixel to be processed by calculating any one of amp0 to amp3 of expression (8). The amplitude estimation unit 65 supplies the estimated amplitude amp of the detection signal to the fixed pattern estimation unit 66 and the mixing processing unit 67.

Steps S1 through S3 may be processed in a different order or may be processed simultaneously.

In step S4, the fixed pattern estimation unit 66 estimates an offset c0 and a gain c1 as correction parameters for correcting characteristic variations between taps by using the detection signals A0 to A3 and B0 to B3 of the pixel to be processed, the movement amount diff of the object supplied from the movement estimation unit 64, and the amplitude amp of the detection signal supplied from the amplitude estimation unit 65. The estimated offset c0 and gain c1 are supplied to the correction processing unit 61.

In step S5, the correction processing unit 61 performs processing for correcting the characteristic variation between the taps, that is, between the detection signal A of the first tap 32A and the detection signal B of the second tap 32B of the pixel to be processed, by using the correction parameters supplied from the fixed pattern estimation unit 66. Specifically, the correction processing unit 61 performs processing for matching the detection signal B of the second tap 32B of the pixel to be processed with the detection signal A of the first tap 32A by equation (6) using the offset c0 and the gain c1 as the correction parameters supplied from the fixed pattern estimation unit 66. The detection signals A2' and B2' of the phase 180 ° and the detection signals A3' and B3' of the phase 270 ° after the correction processing are supplied to the 2-phase processing unit 62.

In step S6, the 2-phase processing unit 62 calculates the I2 signal and the Q2 signal of the 2-phase method by equation (5) by using the detection signals A2' and B2' of the phase 180 ° and the detection signals A3' and B3' of the phase 270 ° after the correction processing. The calculated I2 signal and Q2 signal are supplied to the mixing processing unit 67.

In step S7, the mixing processing unit 67 mixes the I2 signal and the Q2 signal of the 2-phase method supplied from the 2-phase processing unit 62 and the I4 signal and the Q4 signal of the 4-phase method supplied from the 4-phase processing unit 63 in accordance with the movement amount diff and the amplitude amp, calculates the mixed I signal and Q signal, and supplies the mixed I signal and Q signal to the phase calculating unit 68.

In step S8, the phase calculation unit 68 uses the I signal and the Q signal supplied from the mixing processing unit 67 to calculate the depth value d to the object by the above equations (1) and (2), and outputs the depth value d to the subsequent stage.

The processing of the above-described step S1 to step S8 is sequentially performed with each pixel of the pixel array unit 22 supplied from the light receiving unit 14 as a pixel to be processed.

Since the movement amount diff of the object and the amplitude amp of the detection signal are different for each pixel of the pixel array unit 22, the I2 signal and the Q2 signal of the 2-phase method and the I4 signal and the Q4 signal of the 4-phase method supplied from the 4-phase processing unit 63 are mixed at a different mixing ratio α for each pixel, and the I signal and the Q signal are calculated.

By the above depth value calculation processing, in the case where the movement amount diff of the object is large, the depth value d is calculated with priority given to the I2 signal and the Q2 signal of the 2-phase method, and in the case where the movement amount diff is small and the object is stationary, the depth value d is calculated with priority given to the I4 signal and the Q4 signal of the 4-phase method. Further, characteristic variations (sensitivity differences) between taps (i.e., fixed pattern noise) are estimated from the detection signals of the four phases, and corrected by the correction processing unit 61. Therefore, the I2 signal and the Q2 signal of the 2-phase process can be calculated with high accuracy. By this configuration, the signal-to-noise ratio can be improved. That is, the distance measurement accuracy can be improved.

Since the signal processing unit 15 outputs a depth value (depth map) each time the detection signals of two phases are received, a high frame rate can be achieved with a high signal-to-noise ratio.

<6. modification of driving of distance measuring module >

Referring to fig. 13 to 15, a modification of the driving of the distance measuring module 11 will be described. In addition to the above-described driving, the distance measurement module 11 may selectively perform the driving of the following first to third modifications.

A of fig. 13 shows a first modification of the driving of the distance measuring module 11.

In the above-described embodiment, the light emission control unit 13 supplies a light emission control signal of a single frequency such as 20 MHz to the light emitting unit 12, for example, and the light emitting unit 12 emits modulated light of the single frequency to the object.

In contrast, as shown in A of fig. 13, the light emission control unit 13 causes the light emitting unit 12 to emit the emitted light at a plurality of frequencies, and causes the light receiving unit 14 to receive the light. In A of fig. 13, the frequency of the modulated light emitted from the light emitting unit 12 is changed between "HIGH FREQ." and "LOW FREQ.". "HIGH FREQ." denotes a high frequency, such as 100 MHz, and "LOW FREQ." denotes a low frequency, such as 20 MHz.

The light receiving unit 14 sequentially receives the first modulated light emitted at a high frequency and the second modulated light emitted at a low frequency in two phases of phase 0° and phase 90°. Next, the light receiving unit 14 sequentially receives the first modulated light emitted at a high frequency and the second modulated light emitted at a low frequency in two phases of phase 180° and phase 270°. The calculation method of the depth value at each frequency is similar to the above-described embodiment.

The distance measuring module 11 causes the light emitting unit 12 to emit light at a plurality of frequencies, and causes the light receiving unit 14 to receive the light. The signal processing unit 15 performs the above-described depth value calculation processing at each frequency. By calculating the depth value using a plurality of frequencies, a phenomenon (depth aliasing) in which a plurality of distances, repeating at an interval determined by the modulation frequency, are measured as the same distance can be eliminated.
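
How the depth values at the two frequencies are combined is not described in this excerpt; purely as an illustration, the sketch below shows one commonly used de-aliasing strategy in which the low-frequency depth selects among the periodic candidates of the high-frequency depth.

```python
def dealias_depth(d_high, d_low, f_high_hz, c=299_792_458.0):
    """Resolve the ambiguity of a high-frequency depth with a low-frequency depth.

    The high-frequency measurement repeats every c / (2 * f_high); pick the
    candidate d_high + k * interval closest to the coarser but unambiguous
    low-frequency depth.  (An assumed strategy, not necessarily the patent's.)
    """
    interval = c / (2.0 * f_high_hz)
    k = round((d_low - d_high) / interval)
    return d_high + k * interval
```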

B of fig. 13 shows a second modification of the driving of the distance measuring module 11.

In the above-described embodiment, the light receiving period (exposure period) in which each pixel 21 of the light receiving unit 14 receives the modulated light is set to a single duration.

In contrast, as shown in B of fig. 13, each pixel 21 can receive the modulated light with a plurality of light receiving periods (exposure periods). In B of fig. 13, the light reception period changes between "HIGH SENSITIVITY" and "LOW SENSITIVITY". "HIGH SENSITIVITY" denotes high sensitivity in the case where the light reception period is set to a first light reception period, and "LOW SENSITIVITY" denotes low sensitivity in the case where the light reception period is set to a second light reception period shorter than the first light reception period.

The light receiving unit 14 sequentially receives the modulated light emitted at a predetermined frequency in two phases of phase 0° and phase 90° with high sensitivity and with low sensitivity. Next, the light receiving unit 14 sequentially receives the modulated light emitted at the predetermined frequency in two phases of phase 180° and phase 270° with high sensitivity and with low sensitivity. The calculation method of the depth value at each sensitivity level is similar to the above-described embodiment.

The distance measuring module 11 causes the light emitting unit 12 to emit light at a predetermined frequency, and causes the light receiving unit 14 to receive the light at two sensitivity levels, high sensitivity and low sensitivity. The signal processing unit 15 performs the above-described depth value calculation processing at each sensitivity level. Light reception with high sensitivity enables measurement over long distances but may cause saturation, whereas light reception with low sensitivity is less prone to saturation. By calculating the depth value using a plurality of sensitivity levels, the distance measurement range can be expanded.
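
How the two sensitivity results are combined is likewise not described here; purely as an assumption, the sketch below uses a simple selection rule that falls back to the low-sensitivity depth whenever the high-sensitivity detection signals are saturated.

```python
def select_depth(d_high_sens, d_low_sens, max_detection_high, saturation_level):
    """Combine depth values from the high- and low-sensitivity exposures.

    Simple selection rule (an assumption, not taken from the patent):
    use the low-sensitivity depth when any high-sensitivity detection
    signal reaches the saturation level, otherwise keep the
    high-sensitivity depth.
    """
    if max_detection_high >= saturation_level:
        return d_low_sens
    return d_high_sens
```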

In the example of B of fig. 13, the high-sensitivity detection and the low-sensitivity detection are performed in the same two phases, but the high-sensitivity detection and the low-sensitivity detection may also be performed in two different phases. Specifically, driving may be performed so as to first receive light with high sensitivity in the two phases of phase 0° and phase 90°, then receive light with low sensitivity in the two phases of phase 180° and phase 270°, then receive light with high sensitivity in the two phases of phase 180° and phase 270°, and finally receive light with low sensitivity in the two phases of phase 0° and phase 90°.

In both the first modification of the driving shown in A of fig. 13 and the second modification of the driving shown in B of fig. 13, detection is performed in the four phases of phase 0°, phase 90°, phase 180°, and phase 270° at each of the plurality of frequencies or sensitivity levels; however, detection in only two phases may be performed for some of the frequencies or sensitivity levels.

For example, A of fig. 14 shows an example in which, in the driving at a plurality of frequencies shown in A of fig. 13, light reception in the two phases of phase 180° and phase 270° is omitted for the second modulated light emitted at the low frequency, and detection in four phases is performed only at the high frequency.

Further, B of fig. 14 shows an example in which, in the driving at a plurality of sensitivity levels shown in B of fig. 13, light reception with low sensitivity in the two phases of phase 180° and phase 270° is omitted, and detection in four phases is performed only with high sensitivity.

In this way, the frame rate can be increased by performing detection in only two phases for some of the multiple frequencies or sensitivity levels.

Fig. 15 shows a third modification of the driving of the distance measuring module 11.

In the above-described embodiment, all the pixels 21 of the pixel array unit 22 of the light receiving unit 14 are driven to perform detection at the same phase among phase 0°, phase 90°, phase 180°, and phase 270° at a predetermined timing.

In contrast, as shown in A of fig. 15, the pixels 21 of the pixel array unit 22 may be classified into pixels 21X and pixels 21Y in a checkerboard pattern, and may be driven so that the pixels 21X and the pixels 21Y perform detection in different phases.

For example, as shown in C of fig. 15, the light receiving unit 14 of the distance measuring module 11 may be driven such that, in a certain frame period, the pixels 21X of the pixel array unit 22 perform detection at phase 0° while the pixels 21Y perform detection at phase 90°, and in the next frame period, the pixels 21X of the pixel array unit 22 perform detection at phase 180° while the pixels 21Y perform detection at phase 270°. Then, a depth value is calculated by the above-described depth value calculation processing using the detection signals of the four phases obtained in the two frame periods.
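
Only as an illustration of this checkerboard driving, the sketch below gathers the four-phase detection signals from two successive frame periods; the assignment of phases to the even/odd checkerboard parity and the handling of the missing phases are assumptions for illustration, not details taken from fig. 15.

```python
import numpy as np

def assemble_four_phases(frame1, frame2):
    """Collect four-phase detections from a checkerboard-driven pixel array.

    frame1/frame2: 2-D arrays of detection signals for two successive
    frame periods.  Pixels 21X (assumed here to be where (row + col) is
    even) detect phases 0 and 180 degrees, pixels 21Y (odd) detect phases
    90 and 270 degrees.  Missing phases at each position would be
    interpolated from neighbours in a full implementation; here they are
    simply left as NaN.
    """
    h, w = frame1.shape
    rows, cols = np.indices((h, w))
    is_x = (rows + cols) % 2 == 0

    phases = np.full((4, h, w), np.nan)
    phases[0][is_x] = frame1[is_x]    # phase 0   from pixels 21X, frame 1
    phases[1][~is_x] = frame1[~is_x]  # phase 90  from pixels 21Y, frame 1
    phases[2][is_x] = frame2[is_x]    # phase 180 from pixels 21X, frame 2
    phases[3][~is_x] = frame2[~is_x]  # phase 270 from pixels 21Y, frame 2
    return phases
```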

The distance measuring module 11 of fig. 1 may be applied to, for example, an in-vehicle system that is mounted on a vehicle and measures a distance to an object outside the vehicle. Further, for example, the distance measurement module 11 of fig. 1 may be applied to a gesture recognition system or the like that measures a distance to an object including a hand or the like of a user and recognizes a gesture of the user based on the measurement result thereof.

<7. configuration example of electronic apparatus >

The distance measuring module 11 may be mounted on an electronic device such as a smart phone, a tablet terminal, a mobile phone, a personal computer, a game machine, a television receiver, a wearable terminal, a digital camera, a digital video camera, or the like.

Fig. 16 is a block diagram showing a configuration example of a smartphone as an electronic apparatus equipped with a distance measurement module.

As shown in fig. 16, the smartphone 101 includes a distance measurement module 102, an image capturing apparatus 103, a display 104, a speaker 105, a microphone 106, a communication module 107, a sensor unit 108, a touch panel 109, and a control unit 110, which are connected via a bus 111. Further, the control unit 110 functions as an application processing unit 121 and an operating system processing unit 122 by having its CPU execute programs.

The distance measurement module 11 of fig. 1 is applied to the distance measurement module 102. For example, the distance measurement module 102 is disposed on the front face of the smartphone 101 and performs distance measurement on the user of the smartphone 101, thereby outputting depth values of surface shapes of the user's face, hand, finger, and the like as distance measurement results.

The image capturing apparatus 103 is disposed on the front side of the smartphone 101 and captures an image of the user of the smartphone 101 as a subject, thereby acquiring the image in which the user is captured. Note that although not shown, a configuration may also be adopted in which the image capturing apparatus 103 is also provided on the back of the smartphone 101.

The display 104 displays an operation screen for the processing performed by the application processing unit 121 and the operating system processing unit 122, an image captured by the image capturing apparatus 103, and the like. For example, during a call on the smartphone 101, the speaker 105 outputs the voice of the other party and the microphone 106 picks up the voice of the user.

The communication module 107 performs communication via a communication network. The sensor unit 108 senses speed, acceleration, proximity, and the like. The touch panel 109 acquires a touch operation of the user on the operation screen displayed on the display 104.

The application processing unit 121 executes processing for providing various services by the smartphone 101. For example, the application processing unit 121 may perform processing for creating a face virtually reproducing a facial expression of the user by computer graphics based on the depth supplied from the distance measurement module 102 and displaying the face on the display 104. Further, for example, the application processing unit 121 may perform processing for creating three-dimensional shape data of an arbitrary three-dimensional object based on the depth supplied from the distance measurement module 102.

The operating system processing unit 122 executes processing for realizing the basic functions and operations of the smartphone 101. For example, the operating system processing unit 122 may perform processing for authenticating the user's face and unlocking the smartphone 101 based on the depth values supplied from the distance measurement module 102. Further, the operating system processing unit 122 may perform, for example, processing for recognizing a gesture of the user based on the depth values supplied from the distance measurement module 102 and inputting various operations according to the gesture.

The smartphone 101 configured in this manner can achieve, for example, an increase in frame rate, a reduction in power consumption, and a reduction in data transmission bandwidth by employing the distance measurement module 11 described above. With this configuration, the smartphone 101 can create a more smoothly moving face by computer graphics, perform face authentication with high accuracy, reduce battery consumption, and transmit data in a narrow band.

<8. configuration example of computer >

The series of processes described above may be executed by hardware or may be executed by software. In the case where the series of processes is executed by software, a program constituting the software is installed in a general-purpose computer or the like.

Fig. 17 is a block diagram showing a configuration example of one embodiment of a computer in which a program that executes the series of processes described above is installed.

In the computer, a Central Processing Unit (CPU) 201, a Read Only Memory (ROM) 202, a Random Access Memory (RAM) 203, and an Electrically Erasable Programmable Read Only Memory (EEPROM) 204 are interconnected by a bus 205. The bus 205 is also connected to an input-output interface 206, and the input-output interface 206 is connected to the outside.

In the computer configured as described above, the CPU 201 loads a program stored in the ROM 202 and the EEPROM 204 into the RAM 203 via the bus 205, for example, and executes the program, thereby executing the series of processes described above. Further, a program to be executed by the computer (CPU 201) may be externally installed or updated in the EEPROM 204 via the input-output interface 206 in addition to being written in advance in the ROM 202.

With this configuration, the CPU 201 executes the processing according to the above-described flowchart or the processing to be executed according to the configuration of the above-described block diagram. Then, the CPU 201 can output its processing result to the outside via, for example, the input-output interface 206 as necessary.

In this specification, the processes performed by the computer according to the program do not necessarily have to be performed chronologically in the order described in the flowcharts. That is, the processes performed by the computer according to the program also include processes executed in parallel or individually (for example, parallel processing or object-based processing).

Further, the program may be processed by one computer (processor), or may be processed in a distributed manner by a plurality of computers. Further, the program may be transmitted to a remote computer and executed.

<9. example applied to moving object >

The technique according to the present disclosure (present technology) can be applied to various products. For example, the technique according to the present disclosure may be realized as an apparatus mounted on any type of moving object, such as an automobile, an electric vehicle, a hybrid vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.

Fig. 18 is a block diagram showing a schematic configuration example of a vehicle control system, which is one example of a moving object control system to which the technique according to the present disclosure can be applied.

The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in fig. 18, the vehicle control system 12000 includes a drive system control unit 12010, a vehicle body system control unit 12020, an outside-vehicle information detection unit 12030, an inside-vehicle information detection unit 12040, and an integrated control unit 12050. Further, as a functional configuration of the integrated control unit 12050, a microcomputer 12051, a voice and image output unit 12052, and an in-vehicle network interface (I/F) 12053 are shown.

The drive system control unit 12010 controls the operations of the devices related to the vehicle drive system according to various programs. For example, the drive system control unit 12010 functions as a control device of a drive force generation device (e.g., an internal combustion engine or a drive motor) for generating drive force of the vehicle, a drive force transmission mechanism for transmitting the drive force to wheels, a steering mechanism for adjusting a steering angle of the vehicle, a brake device for generating brake force of the vehicle, and the like.

The vehicle body system control unit 12020 controls the operations of various devices mounted on the vehicle body according to various programs. For example, the vehicle body system control unit 12020 functions as a control device for: a keyless entry system, a smart key system, a power window device, or various lights including a head lamp, a tail lamp, a brake lamp, a direction indicator, a fog lamp, and the like. In this case, radio waves transmitted from the portable device that replaces the key or signals from various switches may be input to the vehicle body system control unit 12020. The vehicle body system control unit 12020 receives input of these radio waves or signals and controls the door lock device, power window device, lamp, and the like of the vehicle.

The vehicle exterior information detection unit 12030 detects information on the exterior of the vehicle on which the vehicle control system 12000 is mounted. For example, the image capturing unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the image capturing unit 12031 to capture an image of the outside of the vehicle, and receives the captured image. The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing on a person, a car, an obstacle, a sign, a character on a road surface, or the like based on the received image.

The image capturing unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the received light. The image capturing unit 12031 may output the electric signal as an image or as distance measurement information. Further, the light received by the image capturing unit 12031 may be visible light or invisible light, including infrared rays and the like.

The in-vehicle information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040. The driver state detection unit 12041 may include, for example, a camera that captures an image of the driver. The in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver based on the detection information input from the driver state detection unit 12041, or may determine whether the driver is dozing off.

The microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the brake device based on the information on the inside and the outside of the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and output a control instruction to the drive system control unit 12010. For example, the microcomputer 12051 can execute cooperative control intended to realize the functions of an Advanced Driving Assistance System (ADAS), including collision avoidance or collision mitigation of the vehicle, follow-up driving based on the distance between vehicles, driving with the vehicle speed maintained, vehicle collision warning, lane departure warning, and the like.

Further, the microcomputer 12051 can perform cooperative control intended for automated driving or the like, in which the vehicle travels autonomously without depending on the operation of the driver, by controlling the driving force generation device, the steering mechanism, the brake device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.

Further, the microcomputer 12051 can output a control command to the vehicle body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 may perform cooperative control aimed at preventing glare, such as controlling the headlamps and switching from high beam to low beam according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.

The voice and image output unit 12052 transmits an output signal of at least one of voice or image to an output device capable of visually or audibly notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of fig. 18, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are shown as output devices. The display unit 12062 may include, for example, at least one of an on-board display or a head-up display.

Fig. 19 is a diagram showing an example of the mounting position of the image capturing unit 12031.

In fig. 19, a vehicle 12100 includes image capturing units 12101, 12102, 12103, 12104, and 12105 as the image capturing unit 12031.

For example, the image capturing units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose, the side mirrors, the rear bumper, the rear door, and the upper portion of the windshield in the vehicle cabin of the vehicle 12100. The image capturing unit 12101 provided at the front nose and the image capturing unit 12105 provided at the upper portion of the windshield in the vehicle cabin mainly acquire images in front of the vehicle 12100. The image capturing unit 12102 and the image capturing unit 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The image capturing unit 12104 provided on the rear bumper or the rear door mainly acquires an image behind the vehicle 12100. The front images captured by the image capturing unit 12101 and the image capturing unit 12105 are mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, and the like.

Note that fig. 19 shows one example of the image capturing ranges of the image capturing units 12101 to 12104. The image capturing range 12111 indicates the image capturing range of the image capturing unit 12101 provided at the front nose. The image capturing range 12112 and the image capturing range 12113 indicate the image capturing ranges of the image capturing unit 12102 and the image capturing unit 12103 provided on the side mirrors, respectively. The image capturing range 12114 indicates the image capturing range of the image capturing unit 12104 provided on the rear bumper or the rear door. For example, the image data captured by the image capturing units 12101 to 12104 are superimposed, thereby obtaining a bird's eye view image of the vehicle 12100 viewed from above.

At least one of the image capturing units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the image capturing units 12101 to 12104 may be a stereo camera including a plurality of image capturing elements or an image capturing element having pixels for detecting a phase difference.

For example, based on the distance information obtained from the image capturing units 12101 to 12104, the microcomputer 12051 determines the distance to each three-dimensional object in the image capturing ranges 12111 to 12114 and the temporal change in the distance (relative speed with respect to the vehicle 12100), thereby extracting, as the preceding vehicle, the closest three-dimensional object on the traveling path of the vehicle 12100 that travels in substantially the same direction as the vehicle 12100 at a predetermined speed (e.g., 0 km/h or higher). Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control intended for automated driving in which the vehicle travels autonomously without depending on the operation of the driver.

For example, based on the distance information obtained from the image capturing units 12101 to 12104, the microcomputer 12051 can extract three-dimensional object data on three-dimensional objects, classify them into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, and use the data for automatic obstacle avoidance. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that can be visually recognized by the driver of the vehicle 12100 and obstacles that are difficult to visually recognize. Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle. When the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer 12051 can output a warning to the driver via the audio speaker 12061 or the display unit 12062, or provide driving assistance for collision avoidance by performing forced deceleration or avoidance steering via the drive system control unit 12010.

At least one of the image capturing units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 may recognize a pedestrian by determining whether or not a pedestrian is present in the captured images of the image capturing units 12101 to 12104. Such pedestrian recognition is performed, for example, by the following process: a process of extracting feature points in captured images of the image capturing units 12101 to 12104 serving as infrared cameras, and a process of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether the object is a pedestrian. In the case where the microcomputer 12051 recognizes a pedestrian by determining that there is a pedestrian in the captured images of the image capturing units 12101 to 12104, the voice and image output unit 12052 controls the display unit 12062 so that a rectangular outline for emphasis is superimposed on the recognized pedestrian. Further, the voice and image output unit 12052 may control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.

One example of a vehicle control system to which the technique according to the present disclosure can be applied has been described above. The technique according to the present disclosure can be applied to the vehicle exterior information detection unit 12030 and the in-vehicle information detection unit 12040 in the above-described configuration. Specifically, by using the distance measurement by the distance measuring module 11 in the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, it is possible to perform processing for recognizing a gesture of the driver, to execute various operations (e.g., of an audio system, a navigation system, or an air conditioning system) according to the gesture, and to detect the state of the driver more accurately. Further, by using the distance measurement by the distance measuring module 11, it is possible to recognize the unevenness of the road surface and reflect it in the control of the suspension.

Note that the present technology can be applied, among indirect ToF methods, to the method referred to as the continuous-wave method, in which the light projected onto the object is amplitude-modulated. Further, depending on the structure of the photodiode 31 of the light receiving unit 14, the present technology may be applied to a distance measuring sensor having a structure that distributes charge to two charge storage units, such as a distance measuring sensor having a Current Assisted Photonic Demodulator (CAPD) structure or a gate-type distance measuring sensor that distributes the charge of the photodiode to two gates by alternately applying pulses.

The embodiments of the present technology are not limited to the above-described embodiments, and various modifications may be made without departing from the spirit of the present technology.

The above-described techniques may be implemented independently, so long as there is no conflict. Of course, any number of the present techniques may be used in combination. For example, some or all of the present techniques described in any embodiment may be combined with some or all of the present techniques described in additional embodiments. Additionally, some or all of any of the present techniques described above may also be implemented in conjunction with other techniques not described above.

Also, for example, a configuration described as one apparatus (or processing unit) may be divided and configured as a plurality of apparatuses (or processing units). In contrast, the configuration described above as a plurality of apparatuses (or processing units) may be configured collectively as one apparatus (or processing unit). Further, of course, configurations other than the above-described configuration may be added to the configuration of each apparatus (or each processing unit). Further, if the configuration or operation of the entire system is substantially the same, a part of the configuration of one device (or processing unit) may be included in the configuration of another device (or other processing unit).

Further, in this specification, a system refers to a collection of a plurality of components (devices, modules (parts), etc.), and it is not important whether all the components are in the same housing. Therefore, a plurality of devices accommodated in separate housings and connected via a network and one device in which a plurality of modules are accommodated in one housing are both systems.

Further, for example, the above-described program may be executed in any device. In this case, the apparatus needs to have at least necessary functions (function blocks and the like) capable of obtaining necessary information.

It should be noted that the effects described in this specification are merely illustrative and not restrictive, and effects other than the effects described in this specification may be produced.

Note that the present technology may have the following configuration.

(1)

A signal processing apparatus comprising:

an estimating unit that estimates, in a pixel including a first tap that detects electric charges photoelectrically converted by a photoelectric converting unit and a second tap that detects electric charges photoelectrically converted by the photoelectric converting unit, a sensitivity difference between taps of the first and second taps by using first to fourth detection signals obtained by detecting reflected light generated by reflection of the emitted light by an object with respect to the emitted light at first to fourth phases.

(2)

The signal processing apparatus according to the above (1), wherein,

the estimation unit calculates an offset and a gain of the second tap with respect to the first tap as the sensitivity difference between taps.

(3)

The signal processing apparatus according to the above (2), wherein,

the estimation unit calculates an offset and a gain of the second tap with respect to the first tap under a condition that phases of the first tap and the second tap are out of phase by 180 degrees.

(4)

The signal processing apparatus according to the above (2) or (3), further comprising:

an amplitude estimation unit that estimates amplitudes of the first to fourth detection signals,

wherein the estimation unit updates the offset and the gain by mixing the calculated offset and gain with a current offset and gain based on the estimated magnitude.

(5)

The signal processing apparatus according to any one of the above (2) to (4), further comprising:

a motion amount estimation unit that estimates a motion amount of the object in the pixel,

wherein the estimation unit updates the offset and the gain by mixing the calculated offset and gain with a current offset and gain based on the estimated magnitude and the movement amount.

(6)

The signal processing apparatus according to any one of the above (1) to (5), further comprising:

a correction processing unit that performs correction processing for correcting the first detection signal and the second detection signal, which are the latest two detection signals among the first detection signal to the fourth detection signal, by using the following parameters: wherein the sensitivity difference is estimated using the parameter.

(7)

The signal processing apparatus according to the above (6), further comprising:

a 2-phase processing unit that calculates an I signal and a Q signal of a 2-phase method by using the first detection signal and the second detection signal after correction processing;

a 4-phase processing unit that calculates an I signal and a Q signal of a 4-phase method by using the first to fourth detection signals;

a mixing processing unit that mixes the I signal and the Q signal of the 2-phase method with the I signal and the Q signal of the 4-phase method, and calculates the mixed I signal and Q signal; and

a calculation unit that calculates distance information to the object based on the mixed I and Q signals.

(8)

The signal processing apparatus according to the above (7), wherein,

the mixing processing unit mixes the I signal and the Q signal of the 2-phase method with the I signal and the Q signal of the 4-phase method based on the magnitudes of the first to fourth detection signals and the amount of movement of the object in the pixel.

(9)

The signal processing apparatus according to the above (7) or (8), wherein,

the calculation unit calculates distance information to the object each time detection signals of two phases among the first to fourth detection signals are updated.

(10)

A signal processing method comprising, by a signal processing apparatus:

in a pixel including a first tap that detects electric charges photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charges photoelectrically converted by the photoelectric conversion unit, a sensitivity difference between taps of the first and second taps is estimated by using first to fourth detection signals obtained by detecting reflected light generated by reflection of the emitted light by an object with respect to the emitted light in first to fourth phases.

(11)

A distance measurement module comprising:

a light receiving unit in which pixels are two-dimensionally arranged, each pixel including a first tap that detects electric charges photoelectrically converted by a photoelectric conversion unit and a second tap that detects electric charges photoelectrically converted by the photoelectric conversion unit; and

a signal processing unit including an estimation unit that estimates, in the pixel, a sensitivity difference between taps of the first and second taps by using first to fourth detection signals obtained by detecting reflected light generated by reflection of the emitted light by an object with respect to the emitted light at first to fourth phases.

(12)

The distance measuring module according to the above (11), wherein,

each of the pixels receives the reflected light obtained by emitting the emitted light at a plurality of frequencies, and

the estimation unit estimates the sensitivity difference between taps at each of the plurality of frequencies.

(13)

The distance measuring module according to the above (11) or (12), wherein,

each of the pixels receives the reflected light obtained by emitting the emitted light for a plurality of exposure times, and

the estimation unit estimates the sensitivity difference between taps at each of the plurality of exposure times.

(14)

The distance measurement module according to any one of the above (11) to (13),

the light receiving unit is driven to cause a first pixel to receive the reflected light at a first phase and simultaneously cause a second pixel to receive the reflected light at a second phase, next, to cause the first pixel to receive the reflected light at a third phase and simultaneously cause the second pixel to receive the reflected light at a fourth phase, and

the estimation unit estimates a sensitivity difference between taps of the first tap and the second tap by using the first to fourth detection signals detected at the first to fourth phases.

List of reference numerals

11 distance measuring module, 13 light emission control unit, 14 light receiving unit, 15 signal processing unit, 18 reference signal generation unit, 18a, 18b DAC, 18c control unit, 21 pixel, 22 pixel array unit, 23 drive control unit, 31 photodiode, 32A first tap, 32B second tap, 61 correction processing unit, 62 2-phase processing unit, 63 4-phase processing unit, 64 motion estimation unit, 65 amplitude estimation unit, 66 fixed pattern estimation unit, 67 mixing processing unit, 68 phase calculation unit, 81 coefficient calculation unit, 82 coefficient updating unit, 83 coefficient storage unit, 101 smartphone, 102 distance measurement module, 201 CPU, 202 ROM, 203 RAM.
