Time-of-flight camera
Reading note: this technology, a time-of-flight camera, was devised by Stefan Ulrich and Lutz Heyne and filed on 2018-04-03. Abstract: The invention relates to a time-of-flight camera (20) of a time-of-flight camera system (1), comprising a time-of-flight sensor (22) with a plurality of time-of-flight pixels (23) for determining the phase shift of emitted and captured light (Sp2), a distance value (d) being determined from the detected phase shift (Δφ). The time-of-flight camera (20) has a memory in which parameters of a point spread function (PSF) characterizing the time-of-flight camera (20) and the time-of-flight sensor (22) are stored, and an evaluation unit designed to deconvolve the detected complex-valued image I(x) with the stored point spread function (PSF) in Fourier space, to determine a complex-valued image I₀(x) corrected for scattered light, and to use the corrected image I₀(x) to determine the phase shift or the distance value (d).
1. A time-of-flight camera (20) for a time-of-flight camera system (1), comprising a time-of-flight sensor (22) with a plurality of time-of-flight pixels (23) for determining a phase shift of the emitted and received light (Sp2), wherein a distance value (d) is determined based on the detected phase shift,
characterized in that
the time-of-flight camera (20) comprises a memory in which at least parameters of a point spread function (PSF) are stored, wherein the point spread function (PSF) takes into account the scattered-light behaviour and the signal crosstalk of the time-of-flight camera (20) and the time-of-flight sensor (22);
and an evaluation unit configured to deconvolve the detected image I(x) with the stored point spread function (PSF) and to determine a corrected image I₀(x),
wherein the phase shift or the distance value (d) is determined from the corrected image I₀(x).
2. Time-of-flight camera (20) according to claim 1, characterized in that the Point Spread Function (PSF) is complex valued.
3. The time-of-flight camera (20) according to any one of the preceding claims, characterized in that the deconvolution of the detected image I(x) and the stored Point Spread Function (PSF) is performed in Fourier space.
4. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the resolution of the detected image I(x) is reduced and a correction ΔI(x) is determined at the reduced resolution, whereupon the correction ΔI(x) is scaled up to the original resolution of the detected image I(x) and the detected image I(x) is corrected with the scaled-up correction ΔI(x).
5. Time-of-flight camera (20) according to claim 4, characterized in that the reduction of resolution is achieved by averaging the amplitudes of adjacent pixels and the scaling up is performed by repetition of the amplitudes.
6. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the Point Spread Function (PSF) is stored in the memory as a matrix or a look-up table.
7. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the Point Spread Function (PSF) is stored in the memory as a Fourier transform.
8. Time-of-flight camera (20) according to one of the preceding claims, characterized in that the Point Spread Function (PSF) is stored on an external device, the phase shift being
9. Time-of-flight camera (20) according to any one of the preceding claims, characterized in that the Point Spread Function (PSF) stored in the memory is determined according to the method of any one of claims 10 to 14.
10. A method of determining a point spread function, wherein a point light source (112) and a time-of-flight camera (20) are arranged such that a time-of-flight sensor (22) of the time-of-flight camera (20) detects the point light source (112),
wherein the distance between the point light source (112) and the time-of-flight camera (20) and/or the beam profile of the point light source (112) is selected such that fewer than 5 time-of-flight pixels (23) in a pixel row or column, or at most 16 × 16 pixels, are illuminated on the time-of-flight sensor (22),
wherein the Point Spread Function (PSF) is determined based on at least a subset of the time-of-flight pixels (23) of the time-of-flight sensor (22).
11. The method of claim 10, wherein the point light source (112) is operated unmodulated.
12. A method according to claim 11, characterized by driving the modulation gates (Gam, Gbm) of the time-of-flight pixels (23) of the time-of-flight sensor (22) such that carriers in the time-of-flight pixels (23) accumulate predominantly at only one integration node (Ga, Gb).
13. The method of claim 10, wherein the point light source (112) and the time-of-flight sensor (22) are driven in phase with a modulation signal and sensor difference signals for at least three different phase positions are determined.
14. Method according to any one of the preceding method claims, characterized in that at least two image frames with different integration times of the time-of-flight sensor (22) and/or different light intensities of the point light source (112) are detected to determine the point spread function.
Technical Field
The invention relates to a time-of-flight camera and a method for detecting a point spread function to correct a detected signal of a time-of-flight sensor.
Background
Time-of-flight cameras or time-of-flight camera systems relate in particular to all time-of-flight or 3D time-of-flight camera systems which derive time-of-flight information from the phase shift of the emitted and received radiation. Suitable time-of-flight or 3D time-of-flight cameras are in particular photonic mixer device cameras as described in DE19704496C2, which comprise a photonic mixer detector (PMD) and are available, for example, as the O3D frame grabber from "ifm electronic gmbh" or as the CamCube from "pmdtechnologies ag". The PMD camera in particular allows a flexible arrangement of the light source and the detector, which can be arranged both inside the housing and independently of it.
Disclosure of Invention
It is an object of the invention to further improve the compensation of phase errors.
The object is achieved in an advantageous manner by a time-of-flight camera system according to the invention, as set forth in the independent claims.
Of particular advantage is a time-of-flight camera for a time-of-flight camera system, provided with a time-of-flight sensor comprising a plurality of time-of-flight pixels for determining a phase shift of the emitted and received light, wherein a distance value is determined on the basis of the detected phase shift, wherein the time-of-flight camera comprises a memory in which at least parameters of a point spread function are stored, wherein the point spread function takes into account the scattered-light behaviour and the signal crosstalk of the time-of-flight camera and the time-of-flight sensor, and an evaluation unit which is designed such that the detected image I(x) is deconvolved with the stored point spread function and a corrected image I₀(x) is determined, wherein the phase shift or the distance value is determined on the basis of the corrected image I₀(x).
This step has the advantage that the distance values can be corrected during operation on the basis of the stored point spread function.
Preferably, the point spread function is a complex valued function.
It is also useful if the deconvolution of the detected image and the stored point spread function is carried out in Fourier space.
In another embodiment, it is provided that the resolution of the detected image is reduced and a correction is determined at this reduced resolution; the correction is then scaled up to the original resolution of the detected image, and the detected image is corrected with the scaled-up correction.
In this way, the computational effort for the correction can be reduced significantly.
It is further contemplated that the resolution is reduced by averaging the amplitudes of adjacent pixels and that the scaling up is performed by repeating the amplitudes.
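The averaging and repetition steps can be sketched with numpy (an illustrative sketch, not the patent's implementation; the function names are chosen here):

```python
import numpy as np

def downscale(img, f):
    """Reduce resolution by averaging the amplitudes of f x f pixel blocks."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale(img, f):
    """Scale back to the original resolution by repeating each amplitude."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)
```

Downscaling by block averaging and upscaling by repetition are exact inverses only for block-constant images, which is acceptable here because only the slowly varying stray-light correction is processed at the reduced resolution.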
Preferably, the point spread function is stored in the memory in the form of a matrix or look-up table and/or a fourier transform.
It is particularly useful if the point spread function stored in the memory is determined according to one of the following methods.
Preferably, a method for determining a point spread function is provided, wherein the point light source and the time-of-flight camera are arranged such that the time-of-flight sensor of the time-of-flight camera detects the point light source, wherein the distance between the point light source and the time-of-flight camera and/or the beam profile of the point light source is selected such that fewer than 5 time-of-flight pixels in a pixel row or column, or at most 16 × 16 pixels, are illuminated, wherein the point spread function is determined based on at least a subset of the time-of-flight pixels of the time-of-flight sensor.
An advantage of this step is that, to a certain extent, the light source for determining the point spread function can be constructed in a simple manner.
In one embodiment, it is provided that the point light source is operated unmodulated.
In this case, the modulation gates of the time-of-flight pixels of the time-of-flight sensor are driven such that the charge carriers in the time-of-flight pixels accumulate predominantly only at one integration node. This step ensures that the generated photoelectrons are preferably collected at an integration node.
According to another embodiment, it is provided that the point light source and the time-of-flight sensor are driven in phase with a modulation signal, and sensor difference signals for at least three different phase positions are determined.
It is particularly useful to detect at least two image frames with different integration times of the time-of-flight sensor and/or different light intensities of the point light source in order to determine the point spread function.
In another embodiment, a method for determining a point spread function of a time-of-flight camera system is provided, wherein a first 3D image I₁(x) of a reference scene and a second 3D image I₂(x) with an object in the foreground of the reference scene are detected by the time-of-flight camera, wherein the second 3D image I₂(x), or a partial area of it, is corrected by a point spread function, and, based on the first and the corrected second 3D image I′₂(x), the parameters of the point spread function are varied until the difference between the two images (I₁(x), I′₂(x)) in selected partial areas is minimal and/or below a threshold, wherein the resulting point spread function can then be used as correction point spread function.
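A minimal sketch of this iterative parameter fit, assuming numpy and scipy are available. The two-parameter PSF model here (a dominant central peak plus a Gaussian stray-light tail) and all function names are illustrative only, not the patent's model:

```python
import numpy as np
from scipy.optimize import minimize

def psf_from_params(params, shape):
    # Hypothetical two-parameter model: a delta peak carrying most of the
    # light plus a Gaussian stray-light tail of relative weight a.
    a, sigma = params
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    tail = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    psf = a * tail / tail.sum()
    psf[cy, cx] += 1.0 - a          # energy-normalized: the PSF sums to 1
    return np.fft.ifftshift(psf)    # put the peak at index (0, 0) for the FFT

def correct(img, psf):
    # scattered-light correction: I0 = F^-1[ F[I] / F[PSF] ]
    return np.fft.ifft2(np.fft.fft2(img) / np.fft.fft2(psf))

def fit_psf(i1, i2_obj, mask, start):
    # vary the PSF parameters until the corrected second image matches the
    # reference image I1 on the selected partial area `mask`
    cost = lambda p: np.sum(np.abs(
        correct(i2_obj, psf_from_params(p, i2_obj.shape))[mask] - i1[mask]))
    return minimize(cost, start, method="Nelder-Mead").x
```

The mask selects partial areas of the reference scene that are unoccluded in both images, so that only stray-light differences drive the optimization.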
Likewise, a method for determining a point spread function of a time-of-flight camera system may be provided, wherein a single image I(x) of a reference scene with an object in the foreground is detected by the time-of-flight camera under the assumption that the reference scene forms a plane, wherein the single image I(x) is corrected by a first point spread function, and the parameters of the first point spread function are varied to determine a corrected point spread function until the difference between the corrected image I′(x) and the expected image I₀(x) is minimal and/or below a threshold.
In a further embodiment, a method for determining a point spread function of a time-of-flight camera system is provided, wherein a 3D image I_T(x) of a step of a reference object is detected by the time-of-flight camera, wherein the reference object has steps of a defined height whose surfaces are planar and arranged parallel to each other and is arranged relative to the time-of-flight camera such that at the edges of the steps there is a jump in distance to the more distant step level; the detected 3D image I_T(x) is first corrected using a first model point spread function, and if the distance values d of the corrected 3D image I′_T(x) exceed the maximum allowable distance error, the parameters of the model point spread function are varied until the distance error of the corrected 3D image I′_T(x) is minimal and/or below the allowable distance error, wherein the resulting point spread function can then be used as correction point spread function.
Drawings
The invention will be explained in more detail below by means of exemplary embodiments with reference to the drawings.
In the figure:
FIG. 1 schematically illustrates a time-of-flight camera system;
FIG. 2 shows modulation integrals of generated carriers;
FIG. 3 illustrates an arrangement for determining a point spread function;
FIG. 4 shows a cross-section of an image for determining a point spread function;
FIG. 5 illustrates detection of a reference scene;
FIG. 6 illustrates detection of an object in front of a reference scene;
FIG. 7 shows measured distance values versus actual distances according to FIG. 6;
FIG. 8 illustrates the detection of reference surfaces at two different distances;
FIG. 9 shows measured distance values versus actual distances according to FIG. 8; and
fig. 10 shows a possible schematic flow of scattered light correction in the sense of the present invention.
Detailed Description
In the following description of the preferred embodiments, like reference characters designate the same or similar components.
Fig. 1 shows a measurement of an optical distance with a time-of-flight camera system, for example as known from DE19704496C2.
The time-of-flight camera system 1 comprises a light emitting module 10 with a light source 12 and beam shaping optics 15, as well as a receiver or time-of-flight camera 20 with a time-of-flight sensor 22.
The time-of-flight sensor 22 comprises a plurality of time-of-flight pixels 23.
The measuring principle of this arrangement is essentially based on the fact that the time of flight, and thus the distance travelled by the received light, can be determined from the phase shift of the emitted and received light. To this end, the light source 12 and the time-of-flight sensor 22 are jointly supplied with a modulation signal M0 via a modulator 30.
In dependence on the adjusted modulation signal, the light source 12 emits a transmitted signal Sp1 with a first phase p1.
Preferably, infrared light-emitting diodes or surface emitters (VCSELs) are suitable as the light source 12.
The basic principle of phase measurement is schematically illustrated by way of example in fig. 2. The upper curve shows the modulation signal M0 with which the light source 12 and the time-of-flight sensor 22 are driven. The light reflected by the object 40 arrives at the time-of-flight sensor 22 as a received signal Sp2 with a second phase p2, shifted by a phase shift corresponding to the time of flight tL.
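To make the relation between phase shift and distance concrete, the standard continuous-wave time-of-flight formulas (general background, not specific to this patent) can be written as:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(delta_phi, f_mod):
    """Distance for a measured phase shift delta_phi (rad) at modulation
    frequency f_mod (Hz): d = c * delta_phi / (4 * pi * f_mod)."""
    return C * delta_phi / (4.0 * math.pi * f_mod)

def unambiguous_range(f_mod):
    """Distance at which the phase wraps around 2*pi."""
    return C / (2.0 * f_mod)
```

At a typical modulation frequency of 20 MHz the unambiguous range is about 7.5 m, which is why stray-light-induced phase errors translate directly into centimetre-scale distance errors.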
Fig. 3 schematically shows an arrangement for determining the point spread function PSF. Here, a point light source 112 is arranged at a distance from the time-of-flight camera 20 such that its light spot is detected by the time-of-flight sensor 22.
For determining the PSF, it is preferred that the distance and/or the beam profile of the point light source is selected such that only a few time-of-flight pixels 23 of the time-of-flight sensor 22 are illuminated.
If desired, the quality of the point spread function PSF may be improved by determining several point spread functions based on a plurality of singular light spots at different sensor positions and combining them.
Since the mentioned electron diffusion typically occurs at a diffusion rate that is significantly lower than the propagation of light, the electrons reach the neighbouring pixels with a time delay, so that the effect of the electron diffusion can also be observed as a phase shift. The point spread function PSF therefore also includes a complex-valued fraction. To determine these quantities more accurately, it is advantageous to operate the point light source and the time-of-flight sensor in phase with a modulation signal.
Since the point spread function typically has a high dynamic range spanning several powers of ten, it is also advantageous for detecting the PSF to operate the time-of-flight sensor with at least two different integration times and/or light intensities.
To compensate for dark current, it is beneficial to additionally detect the image signal I(x) when the light source is switched off and to subtract it from the measurements.
From the sum of all measured values, a model of the point spread function can be generated which is applicable to all pixels of the time-of-flight sensor.
Such a model may be generated based on the following considerations: since the measured PSF is noisy and may contain artifacts that are, for example, very specific to the pixel position on the sensor, a "clean" PSF is obtained, for example, by fitting the measured PSF to a suitable model. For example, a model of the form

PSF(x) = A(r) + B(r),   with   r = ||x − x0||_p   (1)

may be selected, where x0 denotes the central pixel of the PSF and ||·||_p denotes the p-norm of the distance vector between a pixel x and the central pixel x0 (the two parts A and B may use different norm parameters p_A and p_B). For a perfectly radially symmetric PSF, p = 2 would result. Since the PSF does not have to be radially symmetric, but can, for example, be diamond-shaped, better results may be obtained with p ≠ 2. By an appropriate choice of the p-norm, the anisotropy of the PSF can be taken into account. Since most of the light impinges on the central pixel of the PSF, it is beneficial to add a locally narrow function B(r) to the model that reflects this fraction. This may be, for example, a Dirac delta function or a Gaussian function describing, for example, lens blur.
For efficiency reasons, it is beneficial to describe the PSF in the form of a spline curve. To describe the phase shift with this PSF, the spline has, for example, a complex-valued part in addition to the real part. This also makes the PSF complex-valued. Suitable fitting parameters are then, for example, the values at the spline nodes, the norm parameters p and p_B, and the parameters specifying the shape of B(r). During software initialization it is beneficial to store only the necessary parameters and to generate the PSF from these parameters, rather than storing the entire PSF.
During operation of the time-of-flight camera, distance values relating to the influence of scattered light can be corrected on the basis of the stored parameters and the PSF generated thereby.
With the described arrangement, a first image I_k(x) is preferably detected with a short exposure time t_k. In particular, the exposure time should be chosen such that no pixel is in saturation; in the case of modulated light, no pixel in the obtained raw images may be saturated. In addition, a second image I_l(x) with a long exposure time t_l is detected. This exposure time should be chosen such that the proportion of the PSF caused by scattered light and/or signal crosstalk is as completely visible as possible, i.e. is not dominated by noise. The exposure time is here typically 1000-10000 times greater than for the first image. During image detection, either unmodulated light can be used, or the light source and sensor can be modulated in the usual way. In the latter case, the images I_k(x) and I_l(x) are complex-valued as usual and therefore contain phase information, which reflects the time from light emission to the reception of the generated electrons at the gates of the sensor. For both images it may be useful to detect a series of images rather than a single one and to average them in order to further reduce noise.
For example, to obtain a consistent value between the first and second images, the brightness (or amplitude) is normalized by the different integration times:
in the obtained image, the light-emitting center pixel
Is usually still unknown. To determine the central pixelFor the first image, e.g. by thresholdingBinarization is performed such that bright LED spots should result in a continuous area.The centre of the continuous region being a central pixel or point on the sensor directed towards the light source
Good assessment of. The center pointDoes not have to fall on the center of the pixel, i.e. the found center pointNeed not be an integer.Now short exposure imageFit to the model of the sharp point. Such a model is used, for example, in equation (1)
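The centre-pixel estimate can be sketched as follows (an illustrative numpy sketch; the patent does not prescribe this exact weighting):

```python
import numpy as np

def find_center(img, rel_thresh=0.5):
    """Estimate the sub-pixel centre x0 of the LED spot: binarize the
    short-exposure image by thresholding, then take the amplitude-weighted
    centroid of the bright contiguous area."""
    amp = np.abs(img)
    ys, xs = np.nonzero(amp >= rel_thresh * amp.max())
    w = amp[ys, xs]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()
```

Because the centroid is a weighted average over several pixels, the returned coordinates are sub-pixel, matching the observation that x0 need not be integer.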
This fit can be expressed, for example, as the minimization

min over (P_B, p_B) of Σ_x | I_k(x) − B(||x − x0||_{p_B}) |²   (4)

where P_B are the parameters of the function B(r) and p_B is the parameter of the norm. For example, B(r) = B0·exp(−b·r²) may be selected, with P_B = (B0, b).
For the numerical minimization according to equation (4), many algorithms exist, for example the Nelder-Mead method.
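A minimal sketch of such a fit using scipy's Nelder-Mead implementation (illustrative; `fit_center_model` is not a name from the patent, and the p-norm is fixed to 2 here for brevity):

```python
import numpy as np
from scipy.optimize import minimize

def fit_center_model(img, center, start=(1.0, 0.5)):
    """Fit the central model B(r) = B0 * exp(-b * r^2) to the short-exposure
    spot image by numerical minimization (Nelder-Mead)."""
    y, x = np.indices(img.shape)
    r2 = (y - center[0]) ** 2 + (x - center[1]) ** 2
    cost = lambda p: np.sum((img - p[0] * np.exp(-p[1] * r2)) ** 2)
    return minimize(cost, start, method="Nelder-Mead").x  # fitted (B0, b)
```

Nelder-Mead needs no gradients, which makes it convenient for models whose norm parameter p enters non-smoothly.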
If, in addition to P_B and p_B, the centre of the light source x0 is included in the optimization, better results can be obtained in equation (4). The value previously found from the binarized image is then suitable as starting value. Next, the second image I_l(x) with the long exposure time is considered.
Analogously to equation (4), this image is fitted to a model of the scattered-light characteristics, represented by A(r) in equation (1); the central part of the PSF described by B(r) may be left out of consideration if necessary.
Analogously to the first fit, P_A are the parameters of the model function A(r). For example, a function of the form

A(r) = A0 · exp( s(r) + i·φ(r) )

has proven suitable, where s(r) denotes a (real) spline curve. The function φ(r) describes the phase delay of the incident light spot, which may be caused, for example, by phase crosstalk/signal crosstalk between pixels. Since this is not necessarily isotropic, it may be preferable to model φ as a two-dimensional function (e.g. a two-dimensional spline or a two-dimensional look-up table) instead of assuming a radially symmetric function as used for s(r). In this case, the fitting parameters P_A are A0, p_A and the function values of the spline at its nodes. If necessary, the positions of the nodes may also be part of the fitting parameters P_A.
By obtaining the parameters P_A and P_B and the PSF model according to equation (1), an artifact-free and noise-free PSF can now be generated. It may be beneficial to store only these or other suitable parameters, from which the PSF can be generated during software initialization, rather than storing the complete PSF.
The images detected with different exposure times are preferably processed independently of one another:
sub-modelFor example, may correspond to different dynamic ranges of the PSF and may be fitted to the detected images independently of each other
Based on these fitting parameters, the PSF can be summarized according to equation (7).In the above, the calibration has been described with reference to a point light source with an aperture as light source or light source system. Of course, the calibration is not limited to such light sources, but takes into account all light sources or light source systems that are capable of producing suitable light spots.
Figs. 4 to 9 show further methods for determining a suitable point spread function PSF. In the method according to fig. 4, a first 3D image I₁(x) of a reference scene and a second 3D image I₂(x) with an object in the foreground are detected.
For example, the images may be detected as shown in figs. 5 and 6. In a first step, a first 3D image I₁(x) of a reference scene is detected (fig. 5). As a reference scene, for example, a wall or floor can be detected in a simple manner, but in principle any scene with an arbitrary height profile can be used. In a second step according to fig. 6, an object, for example a hand or another object, is arranged in front of the reference scene and a second range image I₂(x) is detected. The characteristics of the object are likewise largely uncritical. As described with reference to fig. 4, a corrected PSF can then be generated based on the difference between the two images.
Fig. 7 shows a variant in which the reference scene and the object are flat and arranged parallel to each other. With this a priori knowledge, the optimization of the PSF may be simplified.
Alternatively, instead of two images, only a single image of the object at a sufficient distance in front of a flat reference scene or plane (e.g. wall, table, floor) may be detected. To determine the PSF, the parameters are then varied until the reference surface behind the object is as flat as possible, or until the deviation of the corrected reference surface from a plane is below the permissible limit.
It is particularly advantageous if the size and distance of the reference scene and/or the introduced object are known in advance.
Figs. 8 and 9 show another variant of the above steps. Here, a reference object with steps of a defined height is used, whose surfaces are planar and arranged parallel to each other, so that at the edges of the steps there is a jump in distance to the more distant step level.
The raw images D_j(x) measured by the sensor (e.g. j = 0, 1, 2, 3, corresponding to the phase positions 0°, 90°, 180° and 270°) are combined for further processing into the complex-valued image

I(x) := (D0(x) − D2(x)) + i·(D1(x) − D3(x))   (9)
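Equation (9) translates directly into code (an illustrative sketch):

```python
import numpy as np

def complex_image(d0, d1, d2, d3):
    """Combine the four phase images into the complex-valued image of
    Eq. (9): I(x) = (D0 - D2) + i * (D1 - D3)."""
    return (d0 - d2) + 1j * (d1 - d3)
```

The phase shift and the amplitude then follow directly as `np.angle(I)` and `np.abs(I)` of the complex value.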
Since convolution is a linear operation, it applies equally to the detected image I(x) and the complex-valued image I₀(x) undistorted by scattered light:

I(x) = Σ_Δx I₀(x − Δx)·PSF(Δx)   (10)

or, written as a convolution,

I(x) = (I₀ ∗ PSF)(x)   (11)
The deconvolution is performed in Fourier space. For this purpose, the Fourier transforms F[·] of I(x) and of the PSF are formed, F[I](k) and F[PSF](k). Equation (11) thus becomes

F[I](k) = F[I₀](k) · F[PSF](k)   (12)

and therefore

F[I₀](k) = F[I](k) / F[PSF](k)   (13)

so that the image undistorted by stray light is obtained as

I₀(x) = F⁻¹[ F[I](k) / F[PSF](k) ]
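Equations (12)-(13) correspond to a few lines of numpy (an illustrative sketch; a real implementation would guard against near-zero values of F[PSF]):

```python
import numpy as np

def deconvolve(img, psf):
    """Eq. (13): I0 = F^-1[ F[I] / F[PSF] ]. Assumes F[PSF] has no zeros,
    which holds when the PSF is dominated by its central peak."""
    otf = np.fft.fft2(psf, s=img.shape)
    return np.fft.ifft2(np.fft.fft2(img) / otf)
```

The division is well conditioned here because most of the PSF's energy sits on the central pixel, so |F[PSF]| stays bounded away from zero.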
If the correction ΔI(x) = I₀(x) − I(x), i.e. the difference between the corrected and the detected image, is of interest, equation (13) can be rearranged as

F[ΔI](k) = F[I](k) · ( 1 / F[PSF](k) − 1 )

where F[ΔI](k) is the Fourier transform of the correction ΔI(x). For performance reasons, such a correction can, for example, be computed at a scaled-down, reduced resolution before the Fourier transform and scaled up again to the original resolution after the scattered-light correction. The correction thus obtained is then added to the detected image I(x) in order to obtain the corrected image I₀(x).
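The reduced-resolution correction path can be sketched as follows (illustrative; shown for a real-valued image, while in practice the detected image is complex-valued, and `psf_small` is a PSF sampled at the reduced resolution):

```python
import numpy as np

def correct_lowres(img, psf_small, f):
    """Compute the correction dI = I0 - I at reduced resolution via
    F[dI] = F[I] * (1/F[PSF] - 1), scale it back up by repetition and
    add it to the detected image."""
    h, w = img.shape
    small = img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    otf = np.fft.fft2(psf_small, s=small.shape)
    d_small = np.real(np.fft.ifft2(np.fft.fft2(small) * (1.0 / otf - 1.0)))
    d_full = np.repeat(np.repeat(d_small, f, axis=0), f, axis=1)
    return img + d_full
```

Because the stray-light tail of the PSF is smooth, computing ΔI at reduced resolution loses little accuracy while cutting the FFT cost roughly by f².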
Of course, the computational effort is reduced due to the reduction of data.
List of reference numerals
1 time-of-flight camera system
10 light emitting module
12 light source
15 beam shaping optics
20 receiver, time-of-flight camera
30 modulator
35 phase shifter and luminescence phase shifter
40 objects
Δφ(tL) time-of-flight dependent phase shift
φ phase position
φ0 base phase
M0 modulation signal
p1 first phase
p2 second phase
Sp1 transmitted signal with first phase
Sp2 received signal with second phase
tL time of flight, light propagation time
Ga, Gb integration node
d object distance
q charge