Apparatus, system, and method for locating a target in a scene

Document No.: 1009245    Publication date: 2020-10-23

Description: The present technology, "Apparatus, system, and method for locating a target in a scene", was created by Maximilian Steiner and Christian Waldschmidt on 2019-03-27. Its main content is as follows: An apparatus (1, 2) for locating a target in a scene comprises circuitry configured to: obtain radar signal measurements acquired simultaneously by two or more radar sensors (10-12) arranged at different locations, the two or more radar sensors having overlapping fields of view; derive range information for one or more potential targets from samples of the radar signal measurements of the two or more radar sensors acquired simultaneously or during the same time interval, the range information of a single sample representing a ring segment of potential locations of a potential target at a particular distance from the respective radar sensor in that sensor's field of view; determine intersections of the ring segments of the derived range information; determine a region of the scene having one of the intersections of the highest density; select, per sensor, the ring segment that passes through the selected region; and determine the most likely target location of the potential target from the derived range information of the selected ring segments.

1. An apparatus for locating an object in a scene, the apparatus comprising circuitry configured to:

obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, two or more of the radar sensors having overlapping fields of view,

deriving range information for one or more potential targets from samples of the radar signal measurements of two or more of the radar sensors acquired simultaneously or during the same time interval, the range information for a single sample representing a ring segment of potential locations of potential targets at a particular distance from a respective radar sensor in the field of view of that respective radar sensor,

-determining intersections of ring segments of the derived distance information,

-determining a region of the scene having one of the intersections of the highest density,

-selecting a ring segment per sensor passing through the selected area, and

-determining a most likely target position of the potential target from the derived distance information of the selected ring segment.

2. The apparatus according to claim 1,

wherein the circuitry is further configured to iteratively determine the most likely target location based on different combinations of the ring segments, wherein a combination includes one ring segment per sensor that traverses the selected area, and each combination includes one or more ring segments that are different from one or more ring segments of other combinations.

3. The apparatus according to claim 2,

wherein the circuitry is further configured to determine a most likely target position from different combinations of ring segments by finding a position having a least squares radial distance that minimizes a minimization function.

4. The apparatus according to claim 3,

wherein the circuitry is further configured to use a sum of the squared radial distances between the estimated target position and the respective range rings of the respective combination as the minimization function.

5. The apparatus according to claim 1,

wherein the circuitry is further configured to determine a speed of the potential target.

6. The apparatus according to claim 1,

wherein the circuitry is further configured to determine a direction of movement of the potential target.

7. The apparatus according to claim 1,

wherein the circuitry is further configured to determine the speed and/or direction of movement of the potential target by using the angle between the position of the sensor and the most likely target position and/or by using the relative speed measured by the sensor.

8. The apparatus according to claim 1,

wherein the circuitry is further configured to determine the speed and/or direction of movement of the potential target by minimizing the sum of the squares of the errors of the relative speeds.

9. The apparatus according to claim 1,

wherein the circuitry is further configured to use the relative velocity measured by the sensor to improve the determination of the most likely target position.

10. A radar system, comprising:

-two or more radar sensors arranged at different locations and having overlapping fields of view of a scene, the radar sensors being configured to simultaneously acquire radar signal measurements from the scene comprising one or more targets, and

-the apparatus of claim 1, for locating an object in the scene based on the acquired radar signal measurements.

11. A method for locating an object in a scene, the method comprising:

obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, two or more of the radar sensors having overlapping fields of view of a scene,

deriving range information for one or more potential targets from samples of radar signal measurements of two or more of the radar sensors acquired simultaneously or during the same time interval, the range information for a single sample representing a ring segment of potential locations of potential targets at a particular distance from the radar sensor in the field of view of the respective radar sensor,

-determining intersections of ring segments of the derived distance information,

-determining a region of the scene having one of the intersections of the highest density,

-selecting a ring segment per sensor passing through the selected area, and

-determining a most likely target position of the potential target from the derived distance information of the selected ring segment.

12. A non-transitory computer-readable recording medium having stored therein a computer program product which, when executed by a processor, causes the method according to claim 11 to be performed.

Technical Field

The present disclosure relates to an apparatus, corresponding method and system for locating an object in a scene.

Background

Accurate localization of targets in the environment of an object such as a vehicle by means of radar requires, in particular, a high separation capability in the range and angular domains in order to be able to distinguish closely adjacent targets.

Radar sensors that utilize beamforming or beam steering of the antenna pattern for target localization or imaging are widely used. Beamforming or beam steering may be achieved electronically or by mechanical movement. Electronic beamforming methods coherently combine the signals of small antennas into an array pattern with higher directivity than a single antenna. The performance of such systems is mainly characterized by the separation capability in range and angle. The total aperture of the antenna array determines the angular separation. The inter-antenna spacing of the array needs to be less than half the free-space wavelength in order to achieve spatially unambiguous localization of a target with a limited beam width. Due to this limitation, a certain number of antenna elements and signal processing channels is necessary to achieve the desired separability. A beamforming sensor can only cover a limited field of view. Therefore, a large number of these complex radar sensors is required to cover the environment of an object over 360°.

Another possibility for high-resolution target localization is joint data processing with multiple spatially distributed radar sensors. In this way, a high separability, in particular for closely spaced targets, can be achieved. For such systems, conventional beamforming cannot be applied, since coherent coupling of spatially distributed sensors is very expensive. However, the individual sensors of a distributed system can be very simple and low-cost compared to complex beamforming sensors, since no estimation of angular information is required. Thus, the number of signal processing channels (including antennas) can at best be reduced to a single channel per sensor. In practical situations, localization within a distributed sensor network by multilateration is often ambiguous, since a limited number of sensors faces a large number of radar targets. This makes it desirable to have more advanced methods, accompanying the multilateration algorithm, that reduce or avoid these ambiguities.

The "background" description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the inventor(s), and aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure, insofar as they are described in this background section.

Disclosure of Invention

It is an object of the present invention to provide an apparatus, a corresponding method and a system for locating a target in a scene with higher accuracy and less ambiguity.

According to an aspect, there is provided an apparatus for locating an object in a scene, comprising circuitry configured to:

obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, the two or more radar sensors having overlapping fields of view,

deriving range information for one or more potential targets from samples of radar signal measurements of two or more radar sensors acquired at the same time or during the same time interval, the range information for a single sample representing a ring segment of potential locations of potential targets at a particular distance from the radar sensor in a field of view of the respective radar sensor,

-determining intersections of ring segments of the derived distance information,

-determining a region of the scene having one of the intersections of the highest density,

-selecting a ring segment per sensor passing through the selected area, and

-determining the most likely target position of the potential target from the derived distance information of the selected ring segment.

According to another aspect, a corresponding method for locating an object in a scene is provided.

According to yet another aspect, there is provided a radar system for locating a target in a scene, comprising:

-two or more radar sensors arranged at different locations and having overlapping fields of view of a scene, the radar sensors being configured to simultaneously acquire radar signal measurements from the scene comprising one or more targets, and

-an apparatus as disclosed herein for locating a target in the scene based on the acquired radar signal measurements.

According to still further aspects, there are provided: a computer program comprising program means for causing a computer to carry out the steps of the methods disclosed herein when said computer program is carried out on a computer; and a non-transitory computer-readable recording medium in which a computer program product is stored, which, when executed by a processor, causes the methods disclosed herein to be performed.

Embodiments are defined in the dependent claims. It is to be understood that the disclosed method, the disclosed system, the disclosed computer program, and the disclosed computer readable recording medium have other embodiments similar and/or identical to the claimed apparatus and those as defined in the dependent claims and/or disclosed herein.

One aspect of the invention is to make use of a system concept and a signal processing method that enable the positioning of one or more (fixed or moving) radar targets in a scene as viewed from a (fixed or moving) object equipped with two or more radar sensors. For example, one or more targets all around (360°) the object should be located, in particular in the case of relative movement between the sensors used and the target. In practical applications, the object may be a vehicle (such as a car driving on a street or a robot moving in a factory), and the targets may be other vehicles, people, buildings, machines, etc.

Multiple distributed single-channel radar sensors may be used instead of a single multi-channel sensor. This allows a large spacing between the individual sensors. Target locations, which may be ambiguous, can then be estimated by a multilateration algorithm. In embodiments, the ambiguity problem can be resolved by jointly evaluating the distance and velocity information provided by each individual sensor.

The preceding paragraphs have been provided by way of general introduction and are not intended to limit the scope of the appended claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.

Drawings

A more complete understanding of the present disclosure and many of the attendant advantages of the present disclosure will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:

FIG. 1 shows a diagram illustrating an exemplary application scenario of the apparatus, system, and method of the present disclosure;

FIG. 2 shows a schematic diagram of a first embodiment of a system according to the present disclosure;

FIG. 3 shows a schematic diagram of a second embodiment of a system according to the present disclosure;

FIG. 4 shows a flow diagram of an embodiment of a method according to the present disclosure;

FIG. 5 shows a diagram illustrating an exemplary scenario with four stationary radar sensors and four moving targets;

FIG. 6 shows a graph illustrating the relationship between target position at three sensors and measured relative velocity;

FIG. 7A shows a graph illustrating the relationship between normalized radial velocity and distance in the direction of movement of the vehicle;

FIG. 7B shows a diagram illustrating a snapshot of a scene with a sensor moving forward and a resulting velocity vector projected to a possible target location;

FIG. 8 shows a graph illustrating the velocity relationship between measured relative velocity and target movement for a stationary sensor;

FIG. 9 shows a diagram illustrating an exemplary scenario with three stationary sensors and three moving targets;

FIG. 10 shows a graph illustrating the velocity relationship between measured relative velocity and target movement for a motion sensor and a moving target;

FIG. 11 shows a diagram illustrating an exemplary scene with non-ideal distance measurements around a single target;

FIG. 12 shows a graph illustrating the distance ring intersection density;

FIG. 13 shows a diagram illustrating an exemplary scenario with four radar sensors and a separate range ring caused by four moving targets;

FIG. 14 shows a graph illustrating four calculated intersection densities for each step of the coarse estimation;

fig. 15 shows a diagram illustrating the observation region O surrounding the high-density intersection region;

FIG. 16 shows a diagram illustrating multiple combinations of different distance rings;

FIG. 17 illustrates a depiction of the measured distances of three sensors as distance ring segments and the estimated target position as an optimization result; and

FIG. 18 shows a diagram illustrating the actual movement of an object, which represents the speed and direction of movement.

Detailed Description

Referring now to the drawings, in which like reference numerals designate identical or corresponding parts throughout the several views, fig. 1 shows a diagram illustrating an exemplary application scenario of the disclosed apparatus and method. The disclosed system generally includes a plurality of (at least two) individual radar sensors arranged around the contour of an object such that a majority of the respective radiation patterns overlap. In the scenario shown in fig. 1, multiple radar sensors are arranged around a car such that the patterns of the individual antennas overlap on three (or four) sides of the car. These single radar sensors have very low complexity, and their capability is therefore limited to determining the range and relative speed of a target. Due to the small number of channels or antennas, there is only a basic or even no possibility of angle estimation by a single radar sensor, in contrast to radar sensors with array antennas, whose use, according to the present disclosure, should be avoided or may not be available.

Each sensor performs radar measurements independently of the other sensors, so that no direct phase synchronization is required between the sensors. The exact time of the measurement may be determined by an external trigger signal or may otherwise be known with high accuracy. Control and configuration may be performed by a central unit 20, as shown in fig. 2, which illustrates a first embodiment of the system 1 according to the present disclosure, comprising n (here three) radar sensors 10, 11, 12 and a central unit 20. This embodiment represents or comprises the positioning apparatus disclosed herein for locating targets in a scene. The central unit 20 may be implemented in software, in hardware, or in a combination of software and hardware, for example by means of a computer program or application running on a corresponding programmable computer or processor. The central unit may not only perform the processing of the acquired radar signals, but may also perform control and configuration operations.

Fig. 3 shows a schematic diagram of a second embodiment of a system 2 according to the present disclosure, in which tasks of control and configuration operations are performed in a control unit 21 and tasks of signal processing of radar signals are performed in a processing unit 22, which embodiment represents or comprises the positioning means disclosed herein for positioning objects in a scene. Both the control unit 21 and the processing unit 22 may be implemented in software and/or hardware. The acquired raw data of each individual sensor 10, 11, 12 can thus be passed on to the central processing unit 20 or 22, respectively, directly or after preprocessing.

The signal processing utilizes a multilateration method that uses the measured distances, optionally in combination with a method that uses the measured relative velocity between sensor and target, to estimate the position of the target (angle and distance relative to the sensors). Each sensor can estimate the relative velocity from the Doppler shift of the reflected signal. For a common target, this information, as well as the distance of the target, is different for each particular sensor. Owing to the correlation of the different relative velocities and distances between the target and each particular sensor, the angle of the target relative to the sensor baseline can be deduced. Furthermore, with the aid of the possibly large spacing between the sensors observing a common target, the movement of the target can be estimated within a single measurement cycle.

Basically four different scenarios can be envisaged:

no movement in the scene;

only the sensor platform is moving;

only a single object moves within the scene; and

the sensor platform and the single target move arbitrarily.

Fig. 4 shows a flow diagram of an embodiment of a method according to the present disclosure. For this embodiment, it is assumed that there is a moving target and/or a moving sensor platform. The data stream can be divided into three layers:

1. data acquisition and preprocessing 100: data of at least three radar sensors covering a common field of view are sampled simultaneously (S100). In the case of a chirp-sequence radar, the data set includes time-domain samples from which the range and velocity information of radar reflections within the field of view can be estimated, for example, by two Fourier transforms (S101). A subsequent target extraction algorithm (CFAR, constant false alarm rate) may be used to reduce the amount of data to be transferred to a certain number of radar targets (S102).

2. The positioning algorithm 110:

a. in a first step (S110), the detected ranges of all individual sensors are linked together by means of bilateration. The range information of each radar target results in a ring of ambiguous locations around the respective sensor location, and the intersection of two rings of different sensors yields actual target location candidates with low ambiguity. Additional intersections of distance rings belonging to different targets result in intersection points at incorrect positions.

b. These pairwise intersections of all range rings are accumulated (S111) into a common grid in order to determine clusters with a high density of intersections. To this end, copies of the intersection matrix are shifted relative to each other and accumulated.

c. After the grid-based accumulation, the cell with the highest intersection density is found (S112), and all distance rings passing through a certain confidence region around the maximum density cell are selected for further processing.

d. The most likely target position is iteratively found (S113, S115) taking into account all possible combinations of range rings of the sensors involved. Accordingly, the distance information is supplemented with the speed information associated with each distance ring and the estimated most likely target position (S114).

e. After the positioning is successful, the distance ring associated with the target position is removed from the data set (S116), and the data set is fed back to step c. Here, a new density maximum of the intersection distribution is selected and further target positions are iteratively extracted.

3. Output (120): after all possible targets are found, the algorithm stops (S120). Thus, the position, moving speed, and direction of each target can be estimated.

Compared to a single radar sensor based on a phased array antenna, the advantage of using distributed sensors within the described concept is the accurate positioning enabled by the large spacing that is possible between the sensors. The actual scene directly affects the positioning accuracy. In particular, the number of targets, the relative speeds, and the directions of movement have an impact on the performance.

Failure of a single or multiple sensors does not necessarily lead to an overall failure of the system, but merely to a performance degradation with respect to positioning accuracy, detection probability or limitation of the field of view.

The measured relative velocity between the target and each sensor varies depending on the possible wide distribution of sensors. This allows improved positioning by correlation of speed and distance information and enables determination of the speed and direction of movement of the target within a single measurement cycle. Thus, in contrast to a single sensor with an array antenna, according to this concept there is no need to track the target in multiple measurement cycles.

Further details of the steps of the disclosed methods and embodiments of the disclosed devices, systems, and methods will be provided below.

According to an embodiment, a network of incoherent single channel radar sensor nodes is utilized to estimate the position and motion of multiple targets. Thus, simultaneous snapshot measurements of sensor nodes covering a common field of view are estimated for a single algorithm run. Each individual sensor performs radar measurements independently of the other sensors, so that no direct phase synchronization is required between the sensors. The exact time of the measurement is determined by an external trigger signal or otherwise known with high accuracy. The control and configuration may be performed by a central unit. The raw data obtained for each individual sensor is passed to a central processor, either directly or after pre-processing.

Automotive radar scenes show a large number of targets distributed throughout the field of view. Therefore, a positioning method based only on radial distance information may produce ambiguities. An example of this is given in fig. 5, which shows a simulated view of an extended target comprising 20 point scatterers with all possible pairwise intersection points detected by three sensors S1, S2, S3. By jointly processing the distance information and the radial velocity information, the problem of target position ambiguity in a scene with movement can be resolved and the positioning accuracy can be improved.

Moving objects in the scene cause doppler shifts in the frequency domain, which are measured by the radar. The doppler shift corresponds to the velocity relative to the radar. An automotive scene can be divided into three different cases with respect to its movement:

1. The sensor moves with velocity v_ego > 0 and the target is stationary.

2. The sensor is stationary and the target moves with velocity v_tar > 0.

3. Both the sensor and the target move, with velocities v_ego > 0 and v_tar > 0.

These cases will be considered below.

First, the first case of a moving sensor is considered. The normal movement of a vehicle equipped with sensors results in a relative velocity, which is measured by the radar sensors. The measured relative speed of a stationary target depends on the angle at which the target appears relative to the direction of motion. Due to the spatial distribution of the sensors, these relative velocities differ as a function of the angle between the common target location and each respective sensor. The relationship between target-sensor angle, relative velocity, and actual movement satisfies Thales' theorem. Thus, as shown in fig. 6, these relationships can be illustrated by a circle whose diameter is determined by the actual velocity, for the different target positions shown in figs. 6A to 6C. The velocity vector V is associated with the common motion of the sensors S1-S3 even though it is plotted at the target position T, which is the common origin of the relative velocities. Assuming that all sensors undergo movement in the same direction and that all observations are directed to a common target, the vectors V1, V2, V3 represent the actual sensor motion resolved at the respective angles between the target T and each sensor S1-S3 and must end on the circle. The plotted vectors are the inverse of the sensor measurements.

This principle is also depicted in fig. 7 for a single sensor that moves linearly along the y-axis. This may be the case for a moving vehicle equipped with a radar sensor and various stationary targets. If the own speed of the vehicle carrying the sensor is known, the measured relative speed of a stationary target can be used to determine the angle between the direction of movement and the target position. The relationship between the angle θ relative to the direction of movement, the ego velocity v_ego, and the measured relative velocity v_rel is given by

v_rel = v_ego · cos(θ).

in fig. 7A, a graph of the distance of the target angle with the direction of movement is given for different target positions, showing that different target angles can be clearly separated due to the movement. Specifically, the relationship between the normalized radial velocity measured by the radar and the distance in the direction of movement of the vehicle is depicted in fig. 7A. Fig. 7B shows a snapshot of a scene with a sensor S moving forward and the resulting velocity vectors projected to the location of possible targets. A single radar sensor allows the estimation of the ambiguity angle of the target position. This blurring occurs where it is symmetrical to the axis of movement, as the relative velocity does not allow a clear distinction here. One remedy to this limitation is to use a plurality of widely distributed radar sensors to achieve a clearer estimate of the target position.

Next, a second case as a target of the movement is considered. In contrast to the first case, it is assumed here that the sensor is fixed and the object is moving. The velocity relationship between the measured relative velocity and the movement of the target is as described in fig. 8. It can be observed that a single sensor is not sufficient to determine the speed of the target and its direction of movement, but the different relative speeds provided by the spatially distributed sensors enable instantaneous estimation of the actual target movement.

FIG. 9 depicts an exemplary scenario with three fixed sensors and three moving objects. The figure shows an ideal multi-point positioning of three moving targets T1-T3 with three sensors S1-S3. At each target position, the actual velocities indicated by arrows V1, V2, V3, and the resulting radial velocities (indicated by the other arrows) are plotted with respect to the respective sensors. In addition, the cross marks all intersections between two distance rings. There is no measurement error in the example shown in fig. 9.

Next, the third case of a moving target and a moving sensor is considered. The third case includes movement of the sensor and movement of the target, which are superimposed in the measurements. An exemplary illustration of this behavior is given in fig. 10. Here, the sensor motion v_ego is superimposed with the target velocity v_tar, resulting in the velocity v_res measured by the sensor. In this case, the velocity v_res can be determined by simultaneous measurements with at least three spatially distributed sensors, as shown in fig. 6.

The actual target movement (direction and speed) can be determined by additionally using the sensor's own motion (ego-motion). This information may be obtained from other systems or sensors (e.g., wheel speed sensors or odometers) built into the vehicle. This information can also be derived from fixed targets as they provide a collective velocity behavior reflecting the actual motion of the car. This method is related to the second case explained above with reference to fig. 8.

The disclosed concepts may utilize a plurality of spatially distributed radar sensor nodes for mid-range sensing applications in situations involving relative movement between the sensor and a target. In particular, at least two radar sensors that are spatially distributed and loosely coupled may be utilized. Each sensor performs radar measurements independently, resulting in range and velocity information for the detected target. Simultaneous measurements are assumed so that all sensors observe the target at the same time.

Multilateration achieves the localization of a target by using the distance information measured by a plurality of sensor nodes. It thereby assumes a common scattering point at which all range rings intersect. In practical situations, however, the target is likely to be extended, which results in a distribution of multiple scattering points along the contour of the target rather than a single common scattering point. As a result, typically no more than two distance circles pass through a single intersection point. This behavior changes with the spatial distance between the sensor nodes, due to the difference in the aspect angle under which the target is seen.

FIG. 11 depicts an exemplary scenario with non-ideal distance measurements around a single target T. It shows how the pairwise intersections of the distance rings R1-R4 are spread around the target location, together with the additional "ghost" intersections. Assuming a single reflection per sensor at the target, the number of points found around a single target location,

n_t = M(M − 1)/2,   (1.2)

is determined by the number of sensor nodes M. Additional intersections occur at different locations where there may be no target.

The number of targets T then determines the total number of intersections, which grows quadratically with the number of targets, since every pair of distance rings from two different sensors can intersect. In the case where there are many more targets than sensors, the number of intersections that do not represent true target positions becomes dominant, resulting in ambiguous target positions, such as clusters of intersections that do not correspond to an actual target.

Embodiments of the disclosed algorithm utilize distance and relative velocity information collected by sensor nodes to estimate position, absolute velocity of movement, and direction. The flow chart shown in fig. 4 describes the successive process steps, which will now be explained in more detail.

The sensor nodes may operate with a chirp-sequence modulation scheme that allows measurement of the range and of the Doppler shift of the RF signal. The time-domain data are processed by a two-dimensional Fourier transform to obtain range and velocity data. From these data, targets are extracted by a CFAR algorithm, so that a list of detected targets and their corresponding relative velocities is available for each sensor node.
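As an illustration of this preprocessing chain, the following sketch performs the two-dimensional FFT over one chirp-sequence frame and a simple cell-averaging CFAR along the range dimension. The window choice, the numbers of training and guard cells, and the threshold scale are illustrative assumptions, not parameters taken from the disclosure.

```python
import numpy as np

def range_doppler_map(chirp_data):
    """chirp_data: 2-D array (chirps x samples) of one chirp-sequence frame of a single sensor.
    First FFT over fast time (samples) -> range, second FFT over slow time (chirps) -> Doppler."""
    ranged = np.fft.fft(chirp_data * np.hanning(chirp_data.shape[1]), axis=1)
    rd = np.fft.fftshift(np.fft.fft(ranged, axis=0), axes=0)
    return np.abs(rd) ** 2

def ca_cfar_detect(rd_map, train=8, guard=2, scale=6.0):
    """Simple cell-averaging CFAR along the range axis; returns (doppler_bin, range_bin) indices."""
    detections = []
    n_dopp, n_rng = rd_map.shape
    for d in range(n_dopp):
        for r in range(train + guard, n_rng - train - guard):
            lead = rd_map[d, r - train - guard:r - guard]
            lag = rd_map[d, r + guard + 1:r + guard + 1 + train]
            noise = (lead.sum() + lag.sum()) / (2 * train)
            if rd_map[d, r] > scale * noise:
                detections.append((d, r))
    return detections
```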

The following description is done in two dimensions. Considering the data of a single sensor, the detected target position is ambiguous on a circle around the sensor position, with the detected range as its radius. In the first step (S110, "distance circle intersection") of the joint data processing (110), the intersection points of pairs of distance circles are calculated (bilateration).

The pairwise intersection points are calculated between two circles with different center points P_i and P_j and radii r_i and r_j. The distance between the two sensor nodes is

d_ij = |P_i P_j| = √( (P_j,x − P_i,x)² + (P_j,y − P_i,y)² ),

and the angle between the node connecting line and the line from P_i to an intersection point follows from the law of cosines as

α_1,2 = ± arccos( (d_ij² + r_i² − r_j²) / (2 · d_ij · r_i) ).

The two intersection points are then calculated as

x_1,2 = P_i,x + r_i · cos(α_1,2)   (1.9)

y_1,2 = P_i,y + r_i · sin(α_1,2)   (1.10)

Overlapping range circles yield two different intersection points, while two tangent circles result in a single intersection point.
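A minimal sketch of this pairwise circle intersection (bilateration) step is given below. It follows equations (1.9) and (1.10), with the baseline orientation added explicitly so that the result is expressed in global coordinates; the handling of degenerate cases is an implementation assumption.

```python
import numpy as np

def circle_intersections(p_i, r_i, p_j, r_j):
    """Intersection points of two range circles around sensor positions p_i and p_j
    with measured ranges r_i and r_j (cf. equations (1.9) and (1.10)).
    Returns an empty list, one point (tangent circles) or two points."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    d = np.linalg.norm(p_j - p_i)                       # baseline length |P_i P_j|
    if d == 0 or d > r_i + r_j or d < abs(r_i - r_j):
        return []                                       # the circles do not intersect
    # Angle between the baseline and the line from P_i to an intersection point (law of cosines)
    alpha = np.arccos(np.clip((d**2 + r_i**2 - r_j**2) / (2 * d * r_i), -1.0, 1.0))
    phi = np.arctan2(p_j[1] - p_i[1], p_j[0] - p_i[0])  # orientation of the baseline
    angles = [phi + alpha, phi - alpha] if alpha > 1e-12 else [phi]
    return [(p_i[0] + r_i * np.cos(a), p_i[1] + r_i * np.sin(a)) for a in angles]
```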

If the target is detected by M sensor nodes, the number of intersections per target is n_t, given by equation (1.2). Therefore, it is necessary to find the n_t intersections that most probably belong to the same target; these serve as starting points for the iterative algorithm. For this purpose, a two-dimensional spatial density distribution of the pairwise intersection points is determined.

This may be done, for example, by accumulating (step S111) the intersections into a plurality of spatially offset grids and then merging them. The size of the grid cells must be chosen to be much larger than the range resolution provided by the sensor nodes. To circumvent the limitation that the accumulation only considers points located within the boundaries of a grid cell, the accumulation may be done on grids that are spatially shifted by half the grid size in the x and y dimensions. FIG. 12 depicts the density distribution of distance ring (also referred to as ring segment) intersections for the exemplary scenario depicted in FIG. 13.

Thus, FIG. 12 depicts the distance ring intersection density overlaid onto FIG. 13. A plurality of distance rings pass through the observation area O surrounding the actual maximum density area D. FIG. 13 depicts an exemplary scenario with four radar sensors S1-S4 and the individual range rings, including range measurement errors, caused by four moving targets T1-T4. The arrows M1-M4 at the location of each target indicate the direction of movement and, by their length, the speed.
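The grid-based accumulation of step S111 could be sketched as follows; the cell size, the scene extent, and the choice to merge the base grid and the half-cell-shifted grid by summation are illustrative assumptions.

```python
import numpy as np

def intersection_density(points, cell_size=0.5, extent=((-20.0, 20.0), (-20.0, 20.0))):
    """Accumulate pairwise intersection points into a base grid and a half-cell shifted grid,
    then merge the two by summation. points: iterable of (x, y) intersection coordinates.
    Returns the merged density map and the bin edges of the base grid."""
    (xmin, xmax), (ymin, ymax) = extent
    xe = np.arange(xmin, xmax + cell_size, cell_size)
    ye = np.arange(ymin, ymax + cell_size, cell_size)
    pts = np.asarray(list(points), dtype=float)
    base, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=(xe, ye))
    # Shifting the points by half a cell is equivalent to shifting the grid,
    # which avoids splitting a cluster that falls on a cell border.
    shifted, _, _ = np.histogram2d(pts[:, 0] + cell_size / 2,
                                   pts[:, 1] + cell_size / 2, bins=(xe, ye))
    return base + shifted, (xe, ye)
```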

Target detection based on Constant False Alarm Rate (CFAR) or peak detection alone may result in a false estimation of the target location. In order to improve the robustness of the positioning of the moving object, the proposed algorithm is divided into a coarse position estimation and a subsequent iterative error minimization. The purpose of the rough estimation step is to select (S112) all range rings that may belong to a single target. This is achieved by the following steps:

a) the highest intersection density is estimated, and

b) all range rings associated with the intersections in the selected region are determined with respect to the minimum error of the mapping between the centroid of the n_t individual intersections and the associated velocity vectors.

With respect to the first step a), the highest density in the actual density map is estimated. In a single-target scene with M ≥ 3 sensors, the grid region at the target location has the highest density in any case, while the ambiguous intersections appear as lower density regions. In a multi-target scenario, a single grid cell may include a combination of intersections associated with a single target located in the grid area, multiple targets located in the grid area, or ambiguous intersections of targets located in other grid areas.

For the rough estimation, the highest density grid cell is considered and the distance is calculated for each node.

FIG. 14A depicts an exemplary accumulation grid. Using the rough estimate C_pos≈, all range rings passing nearby are considered for further estimation. To limit the number of sensor-distance pairs to be estimated, only distance rings passing within a radius B around C_pos≈ are retained, i.e. rings for which

| ‖C_pos≈ − P_i‖ − r_i | ≤ B.

For a scenario with four sensors and three targets, this behavior is shown in fig. 15. The optimal size of B (the radius of the observation area O, e.g. 0.5 m in the example) depends on the spatial arrangement of the sensor nodes, the radar parameters, and the choice of cell size. S_i Z_1 denotes the distance between a target Z_1 and a sensor node S_i, i.e. the radius of the corresponding distance ring; in other words, this is the target distance measured at that sensor. S_i Z_l denotes all sensor-target pairs.

If the observation area O (i.e. the radius B) is chosen too small, not all distance rings belonging to the same target are retained, i.e. some are omitted from the further processing stages. An excessively large radius B results in a high number of distance rings to be taken into account, thereby increasing the required computation time. B may be adjusted adaptively while the algorithm is running.
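A sketch of this selection rule of step S112, retaining per sensor all distance rings that pass within the radius B of the coarse estimate, might look like this; the data layout and the function name are assumptions.

```python
import numpy as np

def select_rings_near(c_pos, detections, sensor_positions, B=0.5):
    """Keep the range rings that pass within radius B of the coarse position c_pos.
    detections: list (per sensor) of lists of (range, relative_velocity)."""
    selected = []
    for s_idx, (pos, dets) in enumerate(zip(sensor_positions, detections)):
        dist_to_cell = np.linalg.norm(np.asarray(c_pos) - np.asarray(pos))
        for rng, vel in dets:
            if abs(dist_to_cell - rng) <= B:     # ring passes through the observation area O
                selected.append((s_idx, rng, vel))
    return selected
```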

The accurate estimation of target position and velocity is described below. After a target has been found and the corresponding set of range rings has been removed, a new rough estimation step is performed. FIG. 14 shows the four calculated intersection densities for each step of the coarse estimation for the scenario shown in FIG. 13. In particular, fig. 14 shows the accumulation grid at successive iteration steps; the true target positions are indicated by crosses (x) and the maximum chosen as C_pos≈ for the next iteration step is indicated by a rectangle (□).

In the next step (S113), the target position is estimated for all combinations of different range rings passing through a circular area with radius B. A subset of the possible combinations is shown in fig. 16.

The point with the smallest combined least-squares radial distance to the processed node-distance combination minimizes the function

f(X, Y) = Σ_i ( √( (X − P_i,x)² + (Y − P_i,y)² ) − r_i )²   (1.13)

and is therefore the most likely target position. A solution to the minimization problem can be found using a gradient method.

The least-squares radial distance is the error distance between the estimated target location and the corresponding distance measurement of each sensor. (1.13) represents the corresponding error function. The function represents the sum of the squared distances between the estimated target position P(X, Y) and the corresponding range rings of the combination. In other words, the measured distance rings would need to be adjusted by these values to cross at the common point P(X, Y).

For each set of range rings from n sensors, the function is evaluated and the set with the lowest error is used. FIG. 17 is a depiction of the measured distances of three sensors as distance ring segments and of the estimated target position as the result of the optimization. The radial deviations of the range rings from the estimated target position are denoted d1, d2, d3. The resulting target estimate is used in a further step to calculate the resulting velocity.
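The minimization of (1.13) for one combination of range rings can be sketched as follows, here using a gradient-based solver from SciPy; the data layout and the choice of the BFGS method are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_position(ring_combo, sensor_positions, x0):
    """Minimize the sum of squared radial distance errors (1.13) for one combination of rings.
    ring_combo: list of (sensor_index, measured_range); x0: initial guess, e.g. the center of
    the selected high-density cell. Returns the estimated position and the residual error."""
    def cost(p):
        return sum((np.linalg.norm(p - np.asarray(sensor_positions[i])) - r) ** 2
                   for i, r in ring_combo)
    res = minimize(cost, np.asarray(x0, float), method="BFGS")   # gradient-based solver
    return res.x, res.fun
```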

As previously mentioned, the measurements of each sensor also give distance and velocity (doppler) information including the relative direction of movement. The relative speed between the target and the distributed sensor is different (see fig. 6).

Hereinafter, the estimation of the speed and direction of movement of the target (S114) is described in detail. (1.14) represents the x and y components of these velocities used to calculate the possible target positions. (1.15) gives the expected relative velocity measured for a certain target position. The expected relative velocity is also divided into x and y components. This allows the expected values to be compared with the measured values by means of the error value calculated in (1.17).

(1.17) calculates the error between the expected values in (1.16) and the measured velocities. Here, the Euclidean distance is used, i.e. the square root of the sum of the squared differences between the expected and measured values. Finally, (1.18) represents the sum of the squares of the velocity differences over all sensor positions, forming the error function g.

With knowledge of the sensor/vehicle movement and possible target positions (e.g. angles), the expected relative velocity of the fixed target at a certain angle can be calculated.

In detail, the estimation of the speed and direction of movement of the target (S114) can be completed based on a suitable estimate of the target position. The error of the proposed velocity estimation is also used as a criterion for selecting the set of range rings for target position estimation. As described above, the target motion V_Z gives rise to the relative velocities v_rel,i measured by the spatially distributed sensors.

Using the angles θ_i between the sensor positions and the target position P, these measured velocities can be resolved into their x and y components (1.14), where a sign indicates whether the target and the sensor are moving away from or towards each other, and correspondingly for the direction of movement.

For a certain target speed, the corresponding relative velocities are directly related to the spatial arrangement of the sensors and the target. They can be expressed as a function of the direction of motion of the target (1.15), and the calculated relative velocities can likewise be represented by their x and y components (1.16).

These calculated relative velocities and the measured relative velocities can be compared in a Cartesian coordinate system by calculating the velocity deviation for each sensor,

Δv_i = √( (v_exp,i,x − v_meas,i,x)² + (v_exp,i,y − v_meas,i,y)² ).   (1.17)

The sum of the squares of all relative velocity errors results in the function

g = Σ_i Δv_i²,   (1.18)

which takes its minimum value for the target speed and direction of movement that correspond to the measured radial velocities.
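A sketch of the velocity and heading estimation by minimizing the sum of squared velocity errors is given below. Since the expected and measured relative velocities both point along the line of sight to the respective sensor, the Euclidean error of (1.17) reduces to the difference of the radial components, which is what this sketch compares; the sign convention and the solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_velocity(target_pos, velocity_obs, sensor_positions):
    """Estimate target speed and direction of movement from the radial velocities measured
    by the distributed sensors, by minimizing the sum of squared velocity errors (cf. (1.18)).
    velocity_obs: list of (sensor_index, measured_radial_velocity); positive = target receding."""
    target_pos = np.asarray(target_pos, float)

    def cost(params):
        speed, heading = params
        g = 0.0
        for i, v_meas in velocity_obs:
            los = target_pos - np.asarray(sensor_positions[i])
            theta = np.arctan2(los[1], los[0])          # angle from sensor i to the target
            v_exp = speed * np.cos(heading - theta)     # expected radial component of the target motion
            g += (v_exp - v_meas) ** 2
        return g

    res = minimize(cost, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    speed, heading = res.x
    return speed, heading
```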

Fig. 18 shows an enlarged part of fig. 11 and depicts the actual movement M of the target, which represents its speed and direction of movement. The velocities V1-V4 measured by the four sensors are depicted by additional arrows.

The method discussed above is divided into two parts, the first estimating the likely target position and the second estimating the matching target movement. In contrast to this two-stage approach, the information provided by the measured relative velocities can also be used to improve the estimation of the target position itself. This is achieved by combining the functions from equations (1.18) and (1.13) into a single function.

This combined function expresses a four-dimensional optimization problem. The weighting of the two contributions is adjusted by normalizing them to the maximum measuring distance S_R and the maximum measured velocity |S_V| as well as by the squared range resolution ΔR_min² and the squared velocity resolution Δv_r². The results from equations (1.18) and (1.13) need to be used as seeds to solve this multi-modal optimization problem.

Both the measured distance information and the measured velocity information can thus be used to resolve ambiguous target locations. Since both are coupled to the target position, combining the error functions of (1.18) and (1.13) into a single function makes it possible to minimize the errors of both the distance and the velocity measurements. This results in improved target localization.
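A combined cost in the spirit of this joint formulation could be sketched as follows. The weighting of the two terms by the squared range and velocity resolutions is only one plausible reading of the normalization described above, and all parameter values, names, and data layouts are illustrative assumptions; the separate estimates from (1.13) and (1.18) are used as the seed, as stated above.

```python
import numpy as np
from scipy.optimize import minimize

def combined_cost(params, ring_combo, velocity_obs, sensor_positions,
                  dR_min=0.15, dv_r=0.1):
    """Joint cost combining the range error (1.13) and the velocity error (1.18).
    ring_combo: [(sensor_index, measured_range)], velocity_obs: [(sensor_index, radial_velocity)].
    The weighting by the squared resolutions dR_min, dv_r is an assumed normalization."""
    x, y, speed, heading = params
    p = np.array([x, y])
    f = sum((np.linalg.norm(p - np.asarray(sensor_positions[i])) - r) ** 2
            for i, r in ring_combo)
    g = 0.0
    for i, v_meas in velocity_obs:
        los = p - np.asarray(sensor_positions[i])
        theta = np.arctan2(los[1], los[0])               # angle from sensor i to the target
        g += (speed * np.cos(heading - theta) - v_meas) ** 2
    return f / dR_min ** 2 + g / dv_r ** 2

def refine_jointly(seed_pos, seed_speed, seed_heading, ring_combo, velocity_obs, sensor_positions):
    """Four-dimensional refinement (x, y, speed, heading) seeded by the separate estimates."""
    x0 = np.array([seed_pos[0], seed_pos[1], seed_speed, seed_heading])
    res = minimize(combined_cost, x0,
                   args=(ring_combo, velocity_obs, sensor_positions),
                   method="Nelder-Mead")
    return res.x
```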

As depicted in fig. 18, in the absence of sensor movement, the estimated velocity directly represents object movement. On the other hand, if only the sensor is moving, the estimated movement represents the movement of the sensor. If there is target movement and sensor movement, the estimated velocity represents a superposition of the measurements of the two. Thus, the actual target movement can be estimated only if the sensor movement is known. This information is typically available from wheel speed sensors and inertial measurement units of modern vehicles. The self-motion can also be derived from fixed targets, or from specific fixed targets that are known to be fixed due to their location or due to the collective behavior of the estimated movement that is valid for all fixed targets.

The foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. As will be understood by those skilled in the art, the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the present disclosure as well as of the claims. The disclosure, including any readily discernible variants of the teachings herein, defines in part the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.

In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

To the extent that embodiments of the present disclosure have been described as being implemented at least in part by a software-controlled data processing device, it will be understood that a non-transitory machine-readable medium (such as an optical disk, magnetic disk, semiconductor memory, etc.) carrying such software is also considered to represent embodiments of the present disclosure. Further, the software may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.

Elements of the disclosed apparatus, devices, and systems may be implemented by corresponding hardware and/or software elements, such as application specific circuits. A circuit is a structural combination of electronic components including conventional circuit elements, integrated circuits including application specific integrated circuits, standard integrated circuits, application specific standard products, and field programmable gate arrays. Further, the circuitry includes a central processing unit, a graphics processing unit, and a microprocessor programmed or configured according to software code. Although the circuitry includes hardware to execute software as described above, the circuitry does not include pure software.

The following is a list of further embodiments of the disclosed subject matter:

1. an apparatus for locating an object in a scene, comprising circuitry configured to:

obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, the two or more radar sensors having overlapping fields of view,

deriving range information for one or more potential targets from samples of radar signal measurements of two or more radar sensors acquired at the same time or during the same time interval, the range information of a single sample representing a ring segment of potential positions of a potential target at a particular distance from the respective radar sensor in the field of view of that radar sensor,

-determining intersections of ring segments of the derived distance information,

-determining a region of the scene having one of the intersections of the highest density,

-selecting a ring segment per sensor passing through the selected area, and

-determining the most likely target position of the potential target from the distance information derived from the selected ring segments.

2. According to the apparatus as defined in embodiment 1,

wherein the circuitry is further configured to iteratively determine the most likely target location based on different combinations of ring segments, wherein a combination includes one ring segment per sensor traversing the selected area, and each combination includes one or more ring segments that are different from one or more ring segments of other combinations.

3. According to the apparatus as defined in embodiment 2,

wherein the circuitry is further configured to determine a most likely target position from the different combinations of ring segments by finding a position having a least squares radial distance that minimizes a minimization function.

4. According to the apparatus as defined in embodiment 3,

wherein the circuitry is further configured to use a sum of the squared radial distances between the estimated target position and the respective range rings of the respective combination as a minimization function.

5. The apparatus as defined in any of the preceding embodiments,

wherein the circuitry is further configured to determine a speed of the potential target.

6. The apparatus as defined in any of the preceding embodiments,

wherein the circuitry is further configured to determine a direction of movement of the potential target.

7. The apparatus as defined in any of the preceding embodiments,

wherein the circuitry is further configured to determine the speed and/or direction of movement of the potential target by using the angle between the position of the sensor and the most likely target position and/or by using the relative speed measured by the sensor.

8. The apparatus as defined in any of the preceding embodiments,

wherein the circuitry is further configured to determine the speed and/or direction of movement of the potential target by minimizing the sum of the squares of the errors in the relative speeds.

9. The apparatus as defined in any of the preceding embodiments,

wherein the circuitry is further configured to use the relative velocity measured by the sensor to improve the determination of the most likely target position.

10. A radar system, comprising:

-two or more radar sensors arranged at different locations and having overlapping fields of view of a scene, the radar sensors being configured to simultaneously acquire radar signal measurements from the scene comprising one or more targets, and

-an apparatus according to any of embodiments 1-9 for locating an object in a scene based on acquired radar signal measurements.

11. A method for locating an object in a scene, the method comprising:

obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, the two or more radar sensors having overlapping fields of view,

deriving range information for one or more potential targets from samples of radar signal measurements of two or more radar sensors acquired at the same time or during the same time interval, the range information of a single sample representing a ring segment of potential locations of potential targets at a particular distance from a respective radar sensor in a field of view of the respective radar sensor,

-determining intersections of ring segments of the derived distance information,

-determining a region of the scene having one of the intersections of the highest density,

-selecting a ring segment of each sensor passing through the selected area, and

-determining a most likely target position of the potential target from the derived distance information of the selected ring segment.

12. A non-transitory computer-readable recording medium having stored therein a computer program product, which, when executed by a processor, causes the method according to embodiment 11 to be performed.

13. A computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in embodiment 11 when said computer program is carried out on a computer.
