Time-of-flight camera


Publication date: 2020-01-17. This technology, "Time-of-flight camera", was created by Stefan Ulrich and Lutz Heyne on 2018-04-03. Abstract: The invention relates to a time-of-flight camera (20) comprising a time-of-flight sensor (22) with a plurality of time-of-flight pixels (23) for determining a phase shift (Δφ) of the emitted and captured light (Sp2), wherein a distance value (d) is determined from the detected phase shift. The time-of-flight camera (20) has a memory in which a point spread function (PSF) characterizing the time-of-flight camera (20) and the time-of-flight sensor (22) is stored, and an evaluation unit designed to deconvolve the detected complex-valued image I(x) with the stored point spread function (PSF) in Fourier space, to determine a complex-valued image I₀(x) corrected for scattered light, and to determine the phase shift (Δφ) or the distance value (d) from the corrected complex-valued image I₀(x).

1. A time-of-flight camera (20) for a time-of-flight camera system (1), comprising a time-of-flight sensor (22), the time-of-flight sensor (22) comprising a plurality of time-of-flight pixels (23) for determining a phase shift (Δφ) of the emitted and received light (Sp2), wherein a distance value (d) is determined on the basis of the detected phase shift (Δφ),

characterized in that

the time-of-flight camera (20) comprises a memory in which at least parameters of a point spread function (PSF) are stored, wherein the point spread function (PSF) takes into account the scattered light behavior and the signal crosstalk of the time-of-flight camera (20) and of the time-of-flight sensor (22);

comprising an evaluation unit configured to deconvolve the detected image I(x) with the stored point spread function (PSF) and to determine a corrected image I₀(x), and

wherein the phase shift (Δφ) is determined from the corrected image I₀(x).

2. Time-of-flight camera (20) according to claim 1, characterized in that the Point Spread Function (PSF) is complex valued.

3. The time-of-flight camera (20) according to any one of the preceding claims, characterized in that the deconvolution of the detected image I(x) and the stored point spread function (PSF) is performed in Fourier space.

4. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the resolution of the detected image I(x) is reduced and a correction ΔI(x) is determined at the reduced resolution, whereupon the correction ΔI(x) is scaled up to the original resolution of the detected image I(x) and the detected image I(x) is corrected with the scaled-up correction ΔI(x).

5. Time-of-flight camera (20) according to claim 4, characterized in that the reduction of resolution is achieved by averaging the amplitudes of adjacent pixels and the scaling up is performed by repetition of the amplitudes.

6. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the Point Spread Function (PSF) is stored in the memory as a matrix or a look-up table.

7. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the point spread function (PSF) is stored in the memory as a Fourier transform.

8. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the point spread function (PSF) is stored on an external device, on which the phase shift (Δφ) or the distance value (d) is determined.

9. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the point spread function (PSF) stored in the memory is determined according to the method of any one of claims 10 to 14.

10. A method of determining a point spread function, wherein a point light source (112) and a time-of-flight camera (20) are arranged such that a time-of-flight sensor (22) of the time-of-flight camera (20) detects the point light source (112),

wherein the distance between the point light source (112) and the time-of-flight camera (20) and/or the beam profile of the point light source (112) is selected such that less than 5 time-of-flight pixels (23) or at most 16 x 16 pixels are illuminated in a pixel row or column on the time-of-flight sensor (22),

wherein the Point Spread Function (PSF) is determined based on at least a subset of the time-of-flight pixels (23) of the time-of-flight sensor (22).

11. The method of claim 10 wherein the point light source operates unmodulated.

12. A method according to claim 11, characterized by driving the modulation gates (Gam, Gbm) of the time-of-flight pixels (23) of the time-of-flight sensor (22) such that carriers in the time-of-flight pixels (23) accumulate predominantly at only one integration node (Ga, Gb).

13. The method of claim 10, wherein the point light source (112) and the time-of-flight sensor (22) are driven in phase with a modulation signal and sensor difference signals for at least three different phase positions are determined.

14. Method according to any one of the preceding method claims, characterized in that at least two image frames with different integration times of the time-of-flight sensor (22) and/or different light intensities of the point light source (112) are detected to determine the point spread function.

Technical Field

The invention relates to a time-of-flight camera and a method for detecting a point spread function to correct a detected signal of a time-of-flight sensor.

Background

Time-of-flight cameras or time-of-flight camera systems here relate in particular to all time-of-flight or 3D time-of-flight camera systems which derive time-of-flight information from the phase shift of the emitted and received radiation. Particularly suitable as time-of-flight or 3D time-of-flight cameras are PMD cameras comprising photonic mixer devices (PMD), as described, inter alia, in DE19704496C2 and available, for example, as the O3D frame grabber or as the CamCube from the companies "ifm electronic gmbh" and "pmdtechnologies ag". The PMD camera in particular allows a flexible arrangement of the light source and the detector, which can be arranged both in a housing and separately.

Disclosure of Invention

It is an object of the invention to further improve the compensation of phase errors.

The object is achieved in an advantageous manner by a time-of-flight camera system according to the invention, as set forth in the independent claims.

Of particular advantage is a time-of-flight camera for a time-of-flight camera system, provided with a time-of-flight sensor comprising a plurality of time-of-flight pixels for determining a phase shift of the emitted and received light, wherein a distance value is determined on the basis of the detected phase shift, wherein the time-of-flight camera comprises a memory in which at least parameters of a point spread function are stored, wherein the point spread function takes into account the scattered light behavior and the signal crosstalk of the time-of-flight camera and the time-of-flight sensor, and comprising an evaluation unit which is designed such that the detected image I(x) is deconvolved with the stored point spread function and a corrected image I₀(x) is determined, and wherein the phase shift or the distance value is determined on the basis of the corrected image I₀(x).

This step has the advantage that the distance values can be corrected during operation on the basis of the stored point spread function.

Preferably, the point spread function is a complex valued function.

This is also useful when the deconvolution of the detected image and the stored point spread function is performed in Fourier space.

In another embodiment, the resolution of the detected image is reduced and a correction is determined at this reduced resolution; the correction is then scaled up to the original resolution of the detected image, and the detected image is corrected with the scaled-up correction.

This significantly reduces the computational effort of the correction.

It is further contemplated that the reduction in resolution is performed by averaging the amplitudes of adjacent pixels and the scaling up is performed by repeating the amplitudes.

Preferably, the point spread function is stored in the memory in the form of a matrix or look-up table and/or a fourier transform.

It is particularly useful if the point spread function stored in the memory is determined according to one of the following methods.

Preferably, a method for determining a point spread function is provided, wherein the point light source and the time-of-flight camera are arranged such that the time-of-flight sensor of the time-of-flight camera detects the point light source, wherein the distance between the point light source and the time-of-flight camera and/or the beam profile of the point light source is selected such that less than 5 time-of-flight pixels or at most 16 × 16 pixels are illuminated in a pixel row or column, wherein the point spread function is determined based on at least a subset of the time-of-flight pixels of the time-of-flight sensor.

An advantage of this step is that a suitable light source can be constructed in a simple manner in order to determine the point spread function.

In one embodiment, the point light source is operated unmodulated.

In this case, the modulation gates of the time-of-flight pixels of the time-of-flight sensor are driven such that the charge carriers in the time-of-flight pixels accumulate predominantly at only one integration node. This step ensures that the generated photoelectrons are preferentially collected at one integration node.

According to another embodiment, driving the point source and the time-of-flight sensor in phase with the modulation signal is provided, and a sensor difference signal with respect to at least three different phase positions is determined.

It is particularly useful to provide at least two image frames with different integration times of the time-of-flight sensor and/or different light intensities of the point light source to determine the point spread function.

In another embodiment, a method for determining a point spread function of a time-of-flight camera system is provided, wherein a first 3D image I₁(x) of a reference scene and a second 3D image I₂(x) with an object in the foreground of the reference scene are detected by the time-of-flight camera, wherein the second 3D image I₂(x) or a partial region of the second 3D image I₂(x) is corrected by a point spread function, and, on the basis of the first and the corrected second 3D image I'₂(x), the parameters of the point spread function are varied until the difference between the two images (I₁(x), I'₂(x)) in selected partial regions is minimal and/or below a threshold, wherein the resulting point spread function can then be used as correction point spread function.

Likewise, a method for determining a point spread function of a time-of-flight camera system may be provided, wherein a single image I(x) of a reference scene with an object in the foreground is detected by the time-of-flight camera under the assumption that the reference scene forms a plane, wherein the single image I(x) is corrected by a first point spread function, and wherein the parameters of the first point spread function are varied to determine a corrected point spread function until the difference between the corrected image I'(x) and the expected image I₀(x) is minimal and/or below a threshold.

In a further embodiment, a method for determining a point spread function of a time-of-flight camera system is provided, wherein a 3D image I_T(x) of a step of a reference object is detected by the time-of-flight camera, wherein a reference object having a step of defined height, whose surfaces are planar and arranged parallel to one another, is arranged relative to the time-of-flight camera such that at the edge of the step there is a jump in distance to the more distant step level, wherein the detected 3D image I_T(x) is first corrected using a first model point spread function, and wherein, if a distance value d of the 3D image I'_T(x) corrected in this way exceeds a maximum allowable distance error, the parameters of the model point spread function are varied until the distance error of the corrected 3D image I'_T(x) is minimal and/or below the allowable distance error, wherein the resulting point spread function can then be used as correction point spread function.

Drawings

The invention will be explained in more detail below by means of exemplary embodiments with reference to the drawings.

In the figure:

FIG. 1 schematically illustrates a time-of-flight camera system;

FIG. 2 shows the modulated integration of the generated charge carriers;

FIG. 3 illustrates an arrangement for determining a point spread function;

FIG. 4 shows a cross-section of an image for determining a point spread function;

FIG. 5 illustrates detection of a reference scene;

FIG. 6 illustrates detection of an object in front of a reference scene;

FIG. 7 shows measured distance values versus actual distances according to FIG. 6;

FIG. 8 illustrates the detection of reference surfaces at two different distances;

FIG. 9 shows measured distance values versus actual distances according to FIG. 8; and

fig. 10 shows a possible schematic flow of scattered light correction in the sense of the present invention.

Detailed Description

In the following description of the preferred embodiments, like reference characters designate the same or similar components.

Fig. 1 shows the principle of optical distance measurement with a time-of-flight camera, as known, for example, from DE19704496A1.

The time-of-flight camera system 1 comprises a transmitting unit or light emitting module 10, said transmitting unit or light emitting module 10 comprising a light source 12 and associated beam shaping optics 15, and a receiving unit or time-of-flight camera 20 with receiving optics 25 and a time-of-flight sensor 22.

The time-of-flight sensor 22 has at least one time-of-flight pixel, preferably an array of pixels, and is designed in particular as a PMD sensor. The receiving optics 25 are typically composed of multiple optical elements to improve imaging characteristics. The beam shaping optics 15 of the emission unit 10 may be formed as e.g. reflectors or lens optics. In a very simple embodiment, the optional optical elements may be distributed on both the receive side and the transmit side.

The measuring principle of this arrangement is essentially based on the fact that the time of flight, and thus the distance travelled by the received light, can be determined from the phase shift of the emitted and received light. For this purpose, the light source 12 and the time-of-flight sensor 22 are jointly supplied via a modulator 30 with a modulation signal M₀ having a base phase position φ₀. In the example shown, a phase shifter 35 is furthermore provided between the modulator 30 and the light source 12, with which the base phase φ₀ of the modulation signal M₀ of the light source 12 can be shifted by a defined phase position φ_var. For typical phase measurements, phase positions of φ_var = 0°, 90°, 180°, 270° are preferably used.

In accordance with the set modulation signal, the light source 12 emits an intensity-modulated signal Sp1 with the first phase position p1 or φ₁ = φ₀ + φ_var. In the case shown, this signal Sp1, or the electromagnetic radiation, is reflected by the object 40 and, owing to the distance travelled 2d, strikes the time-of-flight sensor 22 as received signal Sp2 with a second phase position φ₂ = φ₀ + φ_var + Δφ(t_L), shifted by a phase shift Δφ(t_L) corresponding to the time of flight t_L. In the time-of-flight sensor 22, the modulation signal M₀ is mixed with the received signal Sp2, wherein the phase shift or the object distance d is determined from the resulting signal.

Preferably, an infrared light emitting diode or surface emitter (VCSEL) is suitable as the light emitting source or light source 12. Of course, other radiation sources in other frequency ranges are conceivable, in particular light sources in the visible frequency range are conceivable.

The basic principle of the phase measurement is illustrated schematically in fig. 2. The upper curve shows the timing of the modulation signal M₀ with which the light source 12 and the time-of-flight sensor 22 are driven. The light reflected by the object 40 strikes the time-of-flight sensor 22 as received signal Sp2, phase-shifted by Δφ(t_L) in accordance with its time of flight t_L. The time-of-flight sensor 22 collects the photonically generated charges q over several modulation periods, at the first integration node Ga in the phase position of the modulation signal M₀ and at the second integration node Gb in a phase position shifted by 180°. For directing the charges onto the integration nodes, the pixel 23 of the time-of-flight sensor 22 comprises at least two modulation gates Gam, Gbm, which direct the charges onto the first or second integration node Ga, Gb in accordance with the applied modulation signal. From the difference of the charges qa, qb collected at the first and second integration nodes Ga, Gb, taking into account all phase positions, the phase shift Δφ(t_L) of the object and thus the distance d of the object can be determined.
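To illustrate this evaluation, the following is a minimal sketch in Python of the standard four-phase calculation, matching the complex-valued image of equation (9) below; the function name and the modulation frequency are assumptions for illustration, not values from the patent:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def distance_from_dcs(d0, d1, d2, d3, f_mod=20e6):
    """Estimate phase shift and distance from the four difference signals
    measured at the phase positions 0deg, 90deg, 180deg, 270deg."""
    # Complex-valued image as in equation (9): I = (D0 - D2) + i*(D1 - D3)
    i_cplx = (d0 - d2) + 1j * (d1 - d3)
    phi = np.angle(i_cplx) % (2 * np.pi)   # phase shift in [0, 2*pi)
    d = C * phi / (4 * np.pi * f_mod)      # unambiguous up to c/(2*f_mod)
    return phi, d
```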

Fig. 3 schematically shows an arrangement for determining the point spread function PSF. Here, light source 112 and time-of-flight sensor 22 may be operated unmodulated or at least have a predetermined modulation frequency. When using unmodulated light, it is advantageous if the time-of-flight sensor 22 or the pixels 23 also operate unmodulated. In this case, it is useful if a constant voltage is applied to the modulation gates Gam, Gbm of the pixels 23 so that the photogenerated charges are mainly collected in only one integration node Ga, Gb.

For determining the PSF, the light source 112 preferably illuminates essentially only one pixel 23 of the time-of-flight sensor 22, preferably fewer than 3 × 3 and in particular fewer than 5 × 5 pixels 23. To provide such a light spot, an aperture 150 with a sufficiently small aperture opening 152 is arranged in front of the light source 112. On its way to the sensor, the original optical signal I₀ emerging from the aperture 150 is subject to various influences up to the detected image signal I(x), for example the properties of the optical system or optics 25, or reflections between the sensor 22 and the optics 25. In addition, intrinsic characteristics of the sensor 22 itself play a role, such as signal crosstalk or electron diffusion between the pixels 23. The image signal I(x) detected at the sensor can therefore be regarded as the convolution of the incident light I₀ with the point spread function PSF, which comprises essentially all characteristics of the overall system. Owing to the singular illumination of one or a few pixels, the detected image signal I(x) corresponds essentially to the point spread function PSF. For determining the point spread function, preferably all pixels are evaluated. In principle, however, it is also conceivable to evaluate only a partial region around the singularly illuminated pixels.

If desired, the quality of the point spread function PSF may be improved if several point spread functions are determined based on a plurality of singular luminous pixels 23. For example, it is useful to additionally illuminate the pixels 23 outside the optical axis in order to determine further point spread functions at these positions. On the basis of the determined point spread function, a point spread function for subsequent correction can be determined.

Since the mentioned electron diffusion typically occurs at a rate significantly lower than the propagation of light, the electrons reach the neighboring pixels with a time delay, so that the effect of the electron diffusion can also be observed as a phase shift. The point spread function PSF therefore also contains a complex-valued component. To determine these quantities more accurately, it is advantageous to operate the light source 112 at different phase positions.

Since the point spread function typically has a high dynamic range spanning several orders of magnitude, it is also advantageous, for detecting the PSF, to operate the point light source 112 with different intensities and/or the sensor 22 with different integration times.

To compensate for dark current, it is beneficial to detect the image signal I(x) both with the light source 112 switched on and with it switched off.

From the sum of all measured values a model of the point spread function can be generated, which model is applicable to all pixels 23.

Such a model may be generated based on the following consideration: since the measured PSF is noisy and may contain artifacts that are, for example, very specific to the pixel position on the sensor, a "clean" PSF is obtained, for example, by fitting the measured PSF to a suitable model. For example, a model of the form

PSF(r) = A(r) + B(r)   (1)

may be selected, where r = x − x₀ denotes the distance vector between a pixel x and the center pixel x₀ of the PSF, and ‖r‖_p denotes the p-norm of r. For example, a perfectly radially symmetric PSF would result in p = 2. Since the PSF does not have to be radially symmetric, but can be, for example, diamond-shaped, better results may be obtained with p ≠ 2. By appropriate selection of the p-norm, the anisotropy of the PSF can be taken into account.

Since most of the light impinges on the central pixel of the PSF, it is beneficial to add to the model a locally narrow function B(r) that reflects this component. This may be, for example, a Dirac delta function or a Gaussian function describing, for example, lens blur.

For efficiency reasons, it is beneficial to describe the PSF in the form of a spline curve. To describe the phase shift with this PSF, the spline has a complex-valued component in addition to the real component; this also makes the PSF complex-valued. Suitable fitting parameters are then, for example, the values at the spline nodes, the norm parameters p_A and p_B, and the parameters specifying the shape of B(r). During software initialization, it is beneficial to store only the necessary parameters and to generate the PSF from these parameters, rather than storing the entire PSF.
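As an illustration, the following is a minimal sketch in Python (NumPy/SciPy) of generating a complex-valued PSF from a small stored parameter set; the function name, parameter names, and default values are assumptions along the lines of the model above, not the patent's calibration data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def generate_psf(shape, center, nodes, s_values, phi_values,
                 p_a=2.0, b0=1.0, b=2.0):
    """Build a complex PSF from stored parameters: a real amplitude spline
    s(r) and a phase spline phi(r), both evaluated on the p-norm distance,
    plus a narrow Gaussian B(r) for the central peak."""
    y, x = np.indices(shape)
    rx, ry = x - center[0], y - center[1]
    # p-norm distance; p_a != 2 captures anisotropy (e.g. diamond shape)
    r = (np.abs(rx) ** p_a + np.abs(ry) ** p_a) ** (1.0 / p_a)
    s = CubicSpline(nodes, s_values)      # real scattered-light amplitude
    phi = CubicSpline(nodes, phi_values)  # phase delay of scattered light
    a = s(r) * np.exp(1j * phi(r))        # scattered-light part A(r)
    b_r = b0 * np.exp(-b * r ** 2)        # central peak B(r)
    return a + b_r
```

Only the node values and the few shape parameters need to be stored; the full PSF array is regenerated from them at initialization.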

During operation of the time-of-flight camera, distance values relating to the influence of scattered light can be corrected on the basis of the stored parameters and the PSF generated thereby.

With the described arrangement, a first image I_k(x) is detected, preferably with a short exposure time t_k. In particular, the exposure time should be chosen such that no pixel is saturated. When modulated light is used, no pixel in the obtained raw images may be saturated.

In addition, a second image I_l(x) with a long exposure time t_l is detected. This exposure time should be selected such that the portion of the PSF caused by scattered light and/or crosstalk (signal crosstalk) is visible as completely as possible, i.e. is not dominated by noise. The exposure time here is typically 1000 to 10000 times longer than for the first image.

During image detection, either unmodulated light can be used, or the light source and sensor can be modulated in the usual manner. In the latter case, the images I_k(x) and I_l(x) are, as usual, complex-valued and therefore contain phase information which reflects the time from the emission of the light to the reception of the generated electrons at the gates of the sensor.

For both images, it may be useful to detect a series of images rather than a single one, and to average them in order to further reduce noise.

To obtain consistent values between the first and the second image, the brightness (or amplitude) is, for example, normalized with the different integration times:

I_l(x) → I_l(x) · t_k / t_l

In the obtained images, the illuminated center pixel x₀ is usually still unknown. To determine the center pixel x₀, the first image I_k(x) is binarized, for example by thresholding, such that the bright LED spot results in a contiguous region.

The centroid of this contiguous region is a good estimate of the center pixel x₀, i.e. the point on the sensor toward which the light source is directed. The center point x₀ does not have to fall on the center of a pixel, i.e. the found center point need not be integer-valued.
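A minimal sketch of this centroid estimate in Python, assuming the amplitude image is available as a NumPy array and the threshold is chosen by hand:

```python
import numpy as np

def estimate_center(amplitude, threshold):
    """Binarize the short-exposure amplitude image and return the centroid
    of the bright spot as a (possibly non-integer) pixel coordinate."""
    mask = amplitude > threshold     # contiguous region of the LED spot
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()      # center x0 need not be an integer
```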

The short-exposure image I_k(x) is now fitted to the model of the sharp point, represented in equation (1) by B(r). In particular, a fit of the form

min over (P_B, p_B) of Σ_x | I_k(x) − B(‖x − x₀‖_{p_B}) |²   (4)

can be used, where P_B are the parameters of the function B(r) and p_B is the norm parameter. For example, B(r) = B₀·exp(−b·r²) may be selected, with P_B = (B₀, b).

For the numerical minimization according to equation (4), numerous algorithms are available, for example the Nelder-Mead method.

In addition to P_B and p_B, better results can be obtained in equation (4) if the center of the light source x₀ is also included in the optimization. The value previously found from the binarized image is then suitable as starting value.
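For illustration, a sketch of the fit according to equation (4) with SciPy's Nelder-Mead implementation; the residual and the Gaussian parameterization of B(r) follow the example above, and all names are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def fit_center_model(img, x0, y0):
    """Fit B(r) = B0*exp(-b*r^2), with a p-norm distance, to the
    short-exposure image; the spot center (cx, cy) is co-optimized."""
    yy, xx = np.indices(img.shape)

    def residual(params):
        b0, b, p_b, cx, cy = params
        r = (np.abs(xx - cx) ** p_b + np.abs(yy - cy) ** p_b) ** (1 / p_b)
        model = b0 * np.exp(-b * r ** 2)
        return np.sum(np.abs(img - model) ** 2)   # least-squares residual

    start = (np.abs(img).max(), 1.0, 2.0, x0, y0)  # centroid as start value
    res = minimize(residual, start, method="Nelder-Mead")
    return res.x
```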

We now consider the second image I_l(x) with the long exposure time. Analogously to equation (4), this image is fitted to the model of the scattered-light characteristics, represented in equation (1) by A(r). If necessary, the central part of the PSF described by B(r) can be disregarded in this fit.

Analogously to the first fit, P_A are here the parameters of the model function A(r). For example, a function of the form

A(r) = A₀ · s(‖r‖_{p_A}) · exp(i·φ(r))

has proven suitable, where s(r) denotes a (real-valued) spline curve. The function φ(r) describes the phase delay of the incident light spot, which may be caused, for example, by phase crosstalk (signal crosstalk) between pixels. Since this is not necessarily isotropic, it may be useful to model φ(r) as a two-dimensional function (e.g., a two-dimensional spline or a two-dimensional look-up table) instead of assuming a radially symmetric function as used for s(r).

In this case, the fitting parameters P_A are A₀, p_A and the function values of the spline at the nodes. If necessary, the positions of the nodes can also be part of the fitting parameters P_A.

With the parameters P_A and P_B obtained in this way and the PSF model according to equation (1), an artifact-free and noise-free PSF can now be generated. It may be beneficial to store only these or other suitable parameters, from which the PSF can be generated during software initialization, rather than storing the complete PSF.

The images detected with different exposure times are preferably processed independently of one another:

PSF(r) = Σ_j PSF_j(r)   (7)

The sub-models PSF_j may, for example, correspond to different dynamic ranges of the PSF and can be fitted independently of one another to the detected images. Based on these fitting parameters, the PSF can then be assembled according to equation (7).

In the above, the calibration has been described with reference to a point light source with an aperture as the light source or light source system. Of course, the calibration is not limited to such light sources; all light sources or light source systems that can produce a suitable light spot come into consideration.

Fig. 4 to 9 show further methods for determining a suitable point spread function PSF. In the method according to fig. 4, a first 3D image I₁(x) of a reference scene and a second 3D image I₂(x) with an object 40 in the foreground of the reference scene are detected. As already discussed, due to system influences, a change of the distance values relative to the first 3D image I₁(x) is to be expected. To determine a point spread function suitable for correction, the parameters of a first model PSF are then varied until the difference between the first and second image, in particular the distance error, is minimal or below a tolerance limit. Preferably, only image regions or partial regions in which the reference scene is visible in both images are considered.

For example, the images may be detected as shown in figs. 5 and 6. In a first step, a first 3D image I₁(x) of a reference scene is detected (fig. 5). As a reference scene, for example, a wall or a floor can be detected in a simple manner; in principle, however, any scene with an arbitrary height profile is possible. In a second step according to fig. 6, an object, for example a hand or another object, is arranged in front of the reference scene, and a second distance image I₂(x) is detected. The characteristics of the object are likewise largely uncritical. As described with reference to fig. 4, a corrected PSF can then be generated based on the difference between the two images.

Fig. 7 shows a variant in which the reference scene and the object are flat and arranged parallel to each other. With this a priori knowledge, the optimization of the PSF may be simplified.

Alternatively, instead of two images, for example only one image of the object at a sufficiently large distance in front of a flat reference scene or plane (e.g., wall, table, floor) may be detected. To determine the PSF, the parameters are then varied until the reference surface behind the object is as flat as possible, or the deviation of the corrected reference surface from a plane is below the permissible limit, as sketched below.
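As a sketch of this flatness criterion in Python: `correct_with_psf` and `corrected_distance` are hypothetical placeholders for the correction and distance-evaluation steps described in this document, not functions it defines; the cost would be minimized over the PSF parameters, e.g. with Nelder-Mead as above:

```python
import numpy as np

def flatness_error(psf_params, image, background_mask):
    """Correct the image with a candidate PSF, fit a plane to the reference
    surface around the object, and return the RMS deviation from it."""
    corrected = correct_with_psf(image, psf_params)       # hypothetical helper
    z = corrected_distance(corrected)[background_mask]    # hypothetical helper
    ys, xs = np.nonzero(background_mask)
    a = np.column_stack([xs, ys, np.ones_like(xs)])
    coeff, *_ = np.linalg.lstsq(a, z, rcond=None)         # least-squares plane
    return np.sqrt(np.mean((z - a @ coeff) ** 2))         # RMS deviation
```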

It is particularly advantageous if the size and distance of the reference scene and/or the introduced object are known in advance.

Fig. 8 and 9 show another variant of the above steps. The object 40 shown in fig. 8 has a step of defined height; the height Δd = d_T2 − d_T1 is preferably known in advance. As in the above examples, the parameters of the PSF model are varied until the distance error is minimal or below the permissible limit.

In the mathematical sense, the raw images D_j(x) measured by the sensor (e.g., j = 0, 1, 2, 3, corresponding to the phase positions 0°, 90°, 180°, 270°) are the convolution of the unknown raw images D_j⁰(x), undistorted by scattered light, with the PSF:

D_j(x) = Σ_Δx D_j⁰(x − Δx) · PSF(Δx)   (8)

Of interest for further processing is the complex-valued image

I(x) := (D₀(x) − D₂(x)) + i·(D₁(x) − D₃(x))   (9)

Since convolution is a linear operation, equation (8) applies equally to the image I(x) and the complex-valued image I₀(x) undistorted by scattered light:

I(x) = Σ_Δx I₀(x − Δx) · PSF(Δx)   (10)

or I(x) = (I₀ ∗ PSF)(x)   (11)

The deconvolution is performed in Fourier space. For this purpose, I(x) and the PSF are Fourier-transformed (F[·]). Equation (11) then becomes

F[I] = F[I₀] · F[PSF]   (12)

and thus

F[I₀] = F[I] / F[PSF]   (13)

which yields the image undistorted by stray light as I₀(x) = F⁻¹[ F[I] / F[PSF] ].

If instead the correction ΔI(x) = I₀(x) − I(x), i.e. the difference between the corrected and the detected image, is of interest, equation (13) can be rearranged as follows:

F[ΔI] = F[I] · (1 − F[PSF]) / F[PSF]

where, analogously to the above, F[ΔI] is the Fourier transform of the correction ΔI(x).

For performance reasons, this correction can, for example, be computed at a reduced resolution (scaled down) before the Fourier transformation and scaled up again to the original resolution after the scattered-light correction. The correction thus obtained is then added to the detected image I(x) in order to obtain the corrected image I₀(x).

This reduction of data naturally reduces the computational effort.
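A minimal sketch in Python (NumPy) of this scattered-light correction, including the reduced-resolution path of claims 4 and 5; all names are assumptions, `psf_ft` is assumed to be the Fourier transform of the PSF sampled at the reduced resolution, and the image dimensions are assumed divisible by the scale factor:

```python
import numpy as np

def scatter_correction(i_img, psf_ft, scale=4):
    """Compute dI = F^-1[ F[I] * (1 - F[PSF]) / F[PSF] ] at reduced
    resolution and add it back to the detected complex image I(x)."""
    # Downscale by averaging the amplitudes of adjacent pixels (claim 5)
    h, w = i_img.shape
    small = i_img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    f_small = np.fft.fft2(small)
    delta_small = np.fft.ifft2(f_small * (1.0 - psf_ft) / psf_ft)
    # Upscale by repetition of the amplitudes (claim 5)
    delta = np.repeat(np.repeat(delta_small, scale, axis=0), scale, axis=1)
    return i_img + delta   # corrected image I0(x) = I(x) + dI(x)
```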

List of reference numerals

1 time-of-flight camera system

10 light emitting module

12 light source

15 beam shaping optics

20 receiver, time-of-flight camera

22 time-of-flight sensor

23 time-of-flight pixel

25 receiving optics

30 modulator

35 phase shifter and luminescence phase shifter

40 object

112 point light source

150 aperture

152 aperture opening

Δφ(t_L) time-of-flight dependent phase shift

φ_var phase position

φ₀ base phase

M₀ modulation signal

p1 first phase

p2 second phase

Sp1 transmitted signal with first phase

Sp2 received signal with second phase

t_L time of flight, light propagation time

Ga, Gb integration node

d object distance

q charge
