Radar view-limited scene recognition method, storage medium and vehicle-mounted equipment

Document No.: 1814709 · Publication date: 2021-11-09

Abstract (designed and created by 陈丽 and 罗贤平 on 2021-06-30): The invention relates to a radar view-limited scene recognition method comprising a step of recognizing a full-view-limited scene, which includes: obtaining target detection spectrum information based on the radar echo; calculating the amplitude dip point H of the detection spectrum information of the absolutely stationary target according to the speed information of the mobile device and a parameter index of the radar; calculating the amplitude distribution difference within preset distance segments before and after the amplitude dip point H; and, when the amplitude distribution difference is greater than or equal to a preset threshold, determining that the environment in which the radar is currently located is a full-view-limited scene. The invention also provides a storage medium and vehicle-mounted equipment. The radar view-limited scene recognition method is suitable for radars mounted on movable equipment, fully considers the situations in which a full-view-limited scene may occur, effectively enables the radar to self-detect view-limited conditions, improves the accuracy of the radar system's self-diagnosis function, and meets the real-time requirements of radar diagnosis applications.

1. A radar view-limited scene recognition method applied to a radar mounted on a mobile device, wherein the radar view-limited scene recognition method comprises a full-view-limited scene recognition step S10, and the step S10 comprises:

S101, obtaining target detection spectrum information based on the radar echo;

S102, extracting detection spectrum information of an absolutely stationary target from the target detection spectrum information according to the moving speed of the mobile device and a parameter index of the radar;

S103, calculating an amplitude dip point H of the detection spectrum information of the absolutely stationary target;

S104, calculating an amplitude distribution difference within preset distance segments before and after the amplitude dip point H; and

S105, when the amplitude distribution difference is greater than or equal to a preset threshold, determining that the environment in which the radar is currently located is a full-view-limited scene, and otherwise determining that it is a normal non-view-limited scene.

2. The radar view-limited scene recognition method according to claim 1, wherein after the radar is started, whether the radar is in an occluded state is determined; when the determination is yes, the step S10 is executed; when the result of the step S10 is that the environment in which the radar is currently located is a normal non-view-limited scene, an occlusion alarm signal is output; and when the result of the step S10 is that the environment is a full-view-limited scene, the radar is deemed not to be occluded and the detection is completed.

3. The radar view-limited scene recognition method according to claim 1, wherein after the radar is started, the step S10 is executed first; when the environment in which the radar is currently located is a normal non-view-limited scene, the radar occlusion detection function is executed, and otherwise the radar occlusion detection function is cancelled.

4. The radar view-limited scene recognition method according to any one of claims 1 to 3, wherein after a two-dimensional Fourier transform is performed on the echo of each channel of the radar, coherent or non-coherent accumulation is performed on the Fourier transform results of all channels to obtain the target detection spectrum information; a spectrum-information position index corresponding to absolutely stationary targets in the environment is then calculated from the moving speed and the velocity detection resolution of the radar, and the detection spectrum information of the absolutely stationary target is extracted from the target detection spectrum information according to the position index.

5. The radar view-limited scene recognition method according to any one of claims 1 to 3, wherein the maximum of the amplitude differences between all adjacent peaks and troughs in the full-distance segment of the detection spectrum information of the absolutely stationary target is found, and, according to the distance values corresponding to the peak and the trough at the maximum, the midpoint between them is taken as the amplitude dip point H.

6. The radar view-limited scene recognition method according to any one of claims 1 to 3, wherein after the amplitude dip point H is obtained, the mean amplitudes within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed and denoted AmpMean_b and AmpMean_a respectively, and the difference AmpMeanDiff between AmpMean_b and AmpMean_a is then obtained as the amplitude distribution difference.

7. The radar view-limited scene recognition method according to any one of claims 1 to 3, wherein after the amplitude dip point H is obtained, the probability density functions of the amplitude values within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed, and the difference between the two probability density functions is then obtained as the amplitude distribution difference.

8. The radar view-limited scene recognition method according to any one of claims 1 to 3, wherein after the amplitude dip point H is obtained, the sums of the amplitude values within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed and denoted AmpSum_b and AmpSum_a respectively, and the difference AmpSumDiff between AmpSum_b and AmpSum_a is then obtained as the amplitude distribution difference.

9. A storage medium characterized in that it comprises instructions for implementing the radar view-limited scene recognition method according to any one of claims 1 to 3.

10. An in-vehicle device, characterized in that the in-vehicle device comprises a processor and the storage medium of claim 9, and the in-vehicle device calls the instructions of the storage medium through the processor to realize the radar view-limited scene recognition method according to any one of claims 1 to 3.

Technical Field

The invention relates to a radar view limited scene recognition method, a storage medium and vehicle-mounted equipment.

Background

Millimeter-wave radar is widely used across many industries, and most of its application scenes are outdoor, so the radar is easily covered or shielded by sludge and similar foreign matter; the radar's Field of View (FOV) then becomes limited and normal use is affected. Occlusion self-detection is therefore one of the important radar self-test functions. In the automotive field, for example, the second surface directly in front of the radar on its electromagnetic-wave transmission path (such as the radome, bumper, or vehicle logo) may be directly covered with foreign matter such as sludge, ice, or snow. Such direct coverage occludes the radar and prevents its electromagnetic waves from propagating normally into the environment, so the radar cannot correctly perceive the environment around the vehicle body; radar performance is severely affected, up to complete functional failure. The radar therefore needs an occlusion self-detection function.

Generally, the radar's occlusion self-detection function judges whether the radar is covered by foreign matter from the time-domain or frequency-domain representation of the radar echo and of environmental targets. During actual driving, however, 'difficult' scenes that affect the accuracy of occlusion detection are easily encountered, such as a scene in which the radar's full field of view is limited (a full-FOV-limited scene). A full-FOV-limited scene is one in which the radar is surrounded at close range by objects at all angles within its full FOV. Fig. 1 and Fig. 2 show two typical full-field-of-view-limited scenes. The surrounding objects may be a combination of objects with strong electromagnetic-wave attenuation and strong reflection, or individual objects with strong reflection. The direct effect of a full-FOV-limited scene on radar detection is that electromagnetic-wave transmission is blocked and long-range targets cannot be detected. Taking the automotive field as an example, typical full-FOV-limited scenes include a parking space surrounded by walls and nearby vehicles, or congested traffic waiting at a traffic light. When the radar is in a full-FOV-limited scene, even if its second surface is not directly covered by foreign matter, electromagnetic-wave transmission is still blocked; beyond a certain distance the signal representation is almost identical to that of a radar whose second surface is directly covered, exhibiting a 'false' occlusion characteristic.

The Chinese patent application with publication number CN112485770A, published on March 12, 2021, discloses a method for identifying a millimeter-wave radar full-FOV-limited scene. It takes the one-dimensional FFT spectral line of the millimeter-wave radar echo as input, statistically computes the boundary between the spectral-line change rates of a conventional (non-full-FOV-limited) driving scene and a full-FOV-limited scene, and from this determines a decision threshold for full-FOV-limited scene identification. However, this method is somewhat computationally complex and leaves room for improvement.

Disclosure of Invention

The invention aims to provide an effective and easily implemented radar view-limited scene recognition method.

A radar view-limited scene recognition method applied to a radar mounted on a mobile device, the radar view-limited scene recognition method comprising a full-view-limited scene recognition step S10, wherein the step S10 comprises:

S101, obtaining target detection spectrum information based on the radar echo;

S102, extracting detection spectrum information of an absolutely stationary target from the target detection spectrum information according to the moving speed of the mobile device and a parameter index of the radar;

S103, calculating an amplitude dip point H of the detection spectrum information of the absolutely stationary target;

S104, calculating an amplitude distribution difference within preset distance segments before and after the amplitude dip point H; and

S105, when the amplitude distribution difference is greater than or equal to a preset threshold, determining that the environment in which the radar is currently located is a full-view-limited scene, and otherwise determining that it is a normal non-view-limited scene.

As an embodiment, after the radar is started, whether the radar is in an occluded state is determined; when the determination is yes, the step S10 is executed; when the result of the step S10 is that the environment in which the radar is currently located is a normal non-view-limited scene, an occlusion alarm signal is output; and when the result of the step S10 is that the environment is a full-view-limited scene, the radar is deemed not to be occluded and the detection is completed.

As another embodiment, after the radar is started, the step S10 is executed first; when the environment in which the radar is currently located is a normal non-view-limited scene, the radar occlusion detection function is executed, and otherwise the radar occlusion detection function is cancelled.

As an embodiment, after a two-dimensional Fourier transform is performed on the echo of each channel of the radar, coherent or non-coherent accumulation is performed on the Fourier transform results of all channels to obtain the target detection spectrum information; a spectrum-information position index corresponding to absolutely stationary targets in the environment is then calculated from the moving speed and the velocity detection resolution of the radar, and the detection spectrum information of the absolutely stationary target is extracted from the target detection spectrum information according to the position index.

As an implementation, the maximum of the amplitude differences between all adjacent peaks and troughs in the full-distance segment of the detection spectrum information of the absolutely stationary target is found, and, according to the distance values corresponding to the peak and the trough at the maximum, the midpoint between them is taken as the amplitude dip point H.

As an embodiment, after the amplitude dip point H is obtained, the mean amplitudes within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed and denoted AmpMean_b and AmpMean_a respectively, and the difference AmpMeanDiff between AmpMean_b and AmpMean_a is then obtained as the amplitude distribution difference.

As another embodiment, after the amplitude dip point H is obtained, the probability density functions of the amplitude values within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed, and the difference between the two probability density functions is then obtained as the amplitude distribution difference.

As another embodiment, after the amplitude dip point H is obtained, the sums of the amplitude values within the preset distance segments before and after the amplitude dip point H in the detection spectrum information of the absolutely stationary target are computed and denoted AmpSum_b and AmpSum_a respectively, and the difference AmpSumDiff between AmpSum_b and AmpSum_a is then obtained as the amplitude distribution difference.

The invention also provides a storage medium which comprises instructions for implementing the radar view limited scene identification method.

The invention also provides a vehicle-mounted device comprising a processor and the above storage medium, wherein the vehicle-mounted device calls the instructions of the storage medium through the processor to realize the radar view-limited scene recognition method.

The radar view-limited scene recognition method is suitable for radars mounted on movable equipment, in particular millimeter-wave radars. The method fully considers the situations in which a full-view-limited scene may occur, effectively enables the radar to self-detect view-limited conditions, has strong robustness and low computational cost, improves the accuracy of the radar system's self-diagnosis function, and meets the real-time requirements of radar diagnosis applications.

Drawings

Fig. 1 is a typical radar full field of view restricted scenario.

Fig. 2 is another typical radar full field of view restricted scenario.

Fig. 3 is a flowchart of a radar view-limited scene recognition method in an embodiment.

Fig. 4 is a comparison of the detected spectrum information distribution of the absolute stationary target in the full view limited scene and the normal view limited scene.

Fig. 5 is a schematic diagram of a detection result of a radar full-view limited scene.

Fig. 6 is a radar occlusion detection process based on the radar view-limited scene recognition method in an extended embodiment.

FIG. 7 is a radar occlusion detection process based on the radar view-limited scene recognition method in another extended embodiment.

Detailed Description

The radar view-limited scene recognition method, the storage medium, and the vehicle-mounted device according to the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.

The radar view-limited scene recognition method mainly aims to identify whether a condition in which the radar's field of view is limited belongs to a full-field-of-view (FOV) limited scene or is ordinary occlusion caused by foreign matter such as sludge, ice, or snow directly covering the radar's second surface, so that an alarm signal is issued correctly and the accuracy of the radar system's self-diagnosis function is improved.

The full-view-limited scene recognition method can be applied to radars on movable equipment, such as vehicle-mounted radars, in particular millimeter-wave radars; for example, it can be applied to occlusion detection for a rear-corner radar or a forward-looking radar, and also to trunk-door-opening early warning. For instance, when the radar detects a full-FOV-limited scene, i.e. large obstacles surround the radar (the vehicle body) at close range, the driver needs to be warned that opening the trunk poses a collision risk. The general idea of the method can further be applied to other sensors such as ultrasonic radar and lidar.

Recognition of the full-view-limited scene is based on the characteristics of the electromagnetic-wave transmission path in such a scene. In a full-FOV-limited scene, after propagating a short distance the radar's electromagnetic waves across the full FOV are attenuated by surrounding walls and objects or reflected by vehicles into multipath signals, so they cannot travel farther; long-range targets cannot be detected, and at long range no clutter enters the receiver. The radar therefore observes strong echoes and clutter at short range and, beyond that, almost only system noise; visually, an amplitude dip appears at a certain distance point in the frequency domain. In a non-full-FOV-limited scene (a normal view-limited scene), by contrast, targets, clutter, or system noise are distributed over the full distance segment regardless of whether the radar's second surface is covered by foreign matter, and no obvious amplitude dip point appears in the frequency domain. In other words, the electromagnetic-wave signal representation in a full-FOV-limited scene differs from that in a conventional (non-full-FOV-limited) scene.

Based on this difference in electromagnetic-wave representation between the full-FOV-limited scene and the conventional radar scene, the present embodiment provides a method for identifying the full-FOV-limited scene (defined as step S10, see Fig. 3), taking a millimeter-wave radar as an example, which comprises the following steps S101 to S105.

S101, obtaining target Detection Spectrum information based on the radar echo. Specifically, after a two-dimensional Fourier transform is performed on the echo of each channel of the radar, non-coherent accumulation is performed on the Fourier transform results of all channels to obtain the target detection spectrum information (this spectrum information simultaneously represents the velocity and distance values of targets). The target detection spectrum information contains the spectrum information of both targets that are absolutely stationary relative to the geodetic coordinate system and targets that are not. The two-dimensional Fourier transform improves the target Signal-to-Noise Ratio (SNR), which in turn improves the performance of the algorithm.
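The accumulation described above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the array shapes and the function name are assumptions.

```python
import numpy as np

def detection_spectrum(echoes):
    """Target detection spectrum: 2-D FFT per channel followed by
    non-coherent accumulation across channels.

    echoes: complex array of shape (n_channels, n_chirps, n_samples),
            i.e. (channel, slow time, fast time)
    returns: real array of shape (n_chirps, n_samples) whose axes
             represent Doppler (velocity) and range (distance)
    """
    rng = np.fft.fft(echoes, axis=2)                        # range FFT (fast time)
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)  # Doppler FFT (slow time)
    return np.abs(dop).sum(axis=0)                          # non-coherent accumulation
```

Coherent accumulation would instead sum the complex channel results before taking the magnitude, trading robustness to channel phase errors for a higher SNR gain.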

S102, according to the moving-speed information of the mobile device carrying the radar (in this embodiment a motor vehicle, whose speed equals the radar's own moving speed) and a parameter index of the radar system (in this embodiment the velocity detection resolution), calculating the Doppler-dimension index of the spectrum information corresponding to targets that are absolutely stationary relative to the geodetic coordinate system (for example buildings, railings, and parked vehicles), and extracting their spectrum information from the target detection spectrum information according to this Doppler-dimension index, thereby obtaining the detection spectrum information of the absolutely stationary target.
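A minimal sketch of this index computation, assuming the Doppler axis has been fftshift-ed so that zero velocity sits at the centre bin; the function name and sign convention are illustrative assumptions.

```python
def stationary_doppler_index(ego_speed, velocity_resolution, n_doppler):
    """Doppler-bin index of absolutely stationary targets.

    A target at rest in the geodetic frame moves at -ego_speed relative
    to the radar, so its Doppler bin is offset from the centre bin (zero
    velocity) by -ego_speed / velocity_resolution bins.
    """
    offset = int(round(-ego_speed / velocity_resolution))
    return (n_doppler // 2 + offset) % n_doppler

# The stationary-target range profile is then one Doppler row of the
# detection spectrum:  profile = spectrum[idx, :]
```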

S103, calculating the amplitude dip point H of the detection spectrum information of the absolutely stationary target. The method for calculating the amplitude dip point H is flexible; for example, the maximum of the amplitude differences between all adjacent peaks and troughs in the full-distance segment of the detection spectrum information of the absolutely stationary target may be found, and the midpoint between the peak and the trough at the maximum taken as the amplitude dip point H. Note that in a full-FOV-limited scene the dip point is essentially deterministic, whereas in a normal view-limited scene that is not full-FOV-limited, distant targets can still be detected and no amplitude dip occurs, so the point found by the dip-point algorithm is a pseudo dip point that jumps randomly. Fig. 4 compares the distribution of the detection spectrum information of the absolutely stationary target in a full-view-limited scene and a normal view-limited scene.
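The peak/trough search described above can be sketched with NumPy as follows; the fallback for profiles without extrema is an illustrative assumption.

```python
import numpy as np

def amplitude_dip_point(profile):
    """Amplitude dip point H of a range-amplitude profile.

    Finds all local peaks and troughs, takes the adjacent pair with the
    largest amplitude difference, and returns the midpoint of their
    range indices as H.
    """
    slope = np.sign(np.diff(profile))
    # A slope sign change marks a local peak (+ to -) or trough (- to +)
    extrema = np.where(np.diff(slope) != 0)[0] + 1
    if len(extrema) < 2:
        return len(profile) // 2        # degenerate profile: assumed fallback
    diffs = np.abs(np.diff(profile[extrema]))
    k = int(np.argmax(diffs))
    return int((extrema[k] + extrema[k + 1]) // 2)
```

On a step-like profile (strong echoes at short range, noise floor beyond), the returned H lands near the step, which is the deterministic dip point described in the text.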

S104, calculating the amplitude distribution difference within the preset distance segments before and after the amplitude dip point H, the aim being to extract the statistical difference in the amplitude distributions of targets, clutter, or system noise between the radar's short-distance and long-distance segments, i.e. before and after the amplitude dip point. The amplitude distribution difference may be computed by one of the following three methods: (1) compute the mean amplitudes within the preset distance segments before and after the amplitude dip point H, denote them AmpMean_b and AmpMean_a respectively, and take their difference AmpMeanDiff as the amplitude distribution difference; (2) compute the sums of the amplitude values within the preset distance segments before and after the amplitude dip point H, denote them AmpSum_b and AmpSum_a respectively, and take their difference AmpSumDiff as the amplitude distribution difference; (3) compute the probability density functions of the amplitude values within the preset distance segments before and after the amplitude dip point H, and take the difference between the two probability density functions as the amplitude distribution difference.
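The three statistics can be sketched in one function. The histogram bin count for the PDF variant and the use of an L1 distance between the two densities are illustrative assumptions; the patent does not specify how the two probability density functions are compared.

```python
import numpy as np

def amplitude_distribution_difference(profile, H, span, method="mean"):
    """Amplitude distribution difference across the dip point H,
    comparing `span` range bins before H with `span` bins after H."""
    before = profile[max(H - span, 0):H]
    after = profile[H:H + span]
    if method == "mean":   # (1) AmpMeanDiff = AmpMean_b - AmpMean_a
        return float(before.mean() - after.mean())
    if method == "sum":    # (2) AmpSumDiff = AmpSum_b - AmpSum_a
        return float(before.sum() - after.sum())
    if method == "pdf":    # (3) difference of empirical probability densities
        lo, hi = float(profile.min()), float(profile.max())
        pdf_b, _ = np.histogram(before, bins=16, range=(lo, hi), density=True)
        pdf_a, _ = np.histogram(after, bins=16, range=(lo, hi), density=True)
        return float(np.abs(pdf_b - pdf_a).sum())
    raise ValueError(f"unknown method: {method}")
```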

S105, when the amplitude distribution difference is greater than or equal to a preset threshold, determining that the environment in which the radar is currently located is a full-view-limited scene, and otherwise a normal non-view-limited scene. The main idea for obtaining the preset threshold is as follows. For normal view-limited scenes, with the second surface on the millimeter-wave radar's electromagnetic-wave propagation path not covered by foreign matter, the difference of the mean amplitudes of the designated distance segments before and after the amplitude dip point is computed statistically. In the same way, for full-FOV-limited scenes at various limiting distances within the radar's detectable range, again with the second surface not covered, the same statistic is computed. This yields the statistical boundary between the before/after-dip-point distribution differences of normal view-limited scenes and full-view-limited scenes, from which the decision threshold for full-view-limited scene identification (denoted AmpMeanDiff_Thrd) is determined. This threshold is finally used as the input of the full-view-limited scene recognition algorithm, realizing real-time self-detection of full-view-limited scenes by the millimeter-wave radar and enabling it to adaptively distinguish true occlusion caused by direct foreign-matter coverage from the 'false' occlusion of a full-view-limited scene (non-direct coverage). Fig. 5 shows radar full-view-limited scene detection results.
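Steps S103 to S105 combine into the following self-contained end-to-end sketch using the mean-difference statistic; the threshold value, window length, and degenerate-profile fallback are illustrative assumptions, not values from the patent.

```python
import numpy as np

def recognize_full_fov_limited(profile, threshold, span=20):
    """Return True if a stationary-target range profile indicates a
    full-view-limited scene: locate the amplitude dip point, compute
    AmpMeanDiff across it, and compare against the calibrated threshold
    (AmpMeanDiff_Thrd in the text)."""
    slope = np.sign(np.diff(profile))
    extrema = np.where(np.diff(slope) != 0)[0] + 1    # local peaks and troughs
    if len(extrema) < 2:
        return False                                   # no dip: treat as normal scene
    diffs = np.abs(np.diff(profile[extrema]))
    k = int(np.argmax(diffs))
    H = (extrema[k] + extrema[k + 1]) // 2             # amplitude dip point
    before = profile[max(H - span, 0):H]
    after = profile[H:H + span]
    amp_mean_diff = before.mean() - after.mean()       # AmpMeanDiff
    return bool(amp_mean_diff >= threshold)
```

A profile with a sharp amplitude step returns True, while a profile whose amplitude is distributed over the full distance segment (only a pseudo dip point exists) returns False.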

In an extended embodiment, please refer to fig. 6, when the method for identifying a full-view limited scene in the foregoing embodiment is applied to a radar occlusion detection function for performing occlusion detection, the method may include the following steps:

After the radar is started, radar occlusion detection is executed. If the radar is judged to be in an occluded state, the step S10 is executed; when the result of the step S10 is that the environment in which the radar is located is a normal non-view-limited scene, an alarm signal indicating that the radar is occluded is output, and when the result of the step S10 is that the environment is a full-view-limited scene, the radar is deemed not to be occluded and the detection is completed.

In another extended embodiment, please refer to fig. 7, when the method for identifying a full-view limited scene in the foregoing embodiment is applied to a radar occlusion detection function, the method may include the following steps:

After the radar is started, the result of the step S10 is output to the radar occlusion detection process; when the result of the step S10 indicates that the environment in which the radar is currently located is a normal non-view-limited scene, the occlusion detection function is executed, and otherwise the occlusion detection function is cancelled.

In summary, the radar view-limited scene recognition method takes the target Detection Spectrum corresponding to the radar echo as input; calculates, from the vehicle-speed information and the relevant parameter indexes of the radar system, the Doppler-dimension index corresponding to absolutely stationary targets in the environment (buildings, railings, parked vehicles, etc.) on the target detection spectrum; extracts the full-distance-segment data at that index, recorded as the 'detection spectrum information of the absolutely stationary target'; extracts from these data the distribution difference of targets and clutter between the short-distance segment and the long-distance segment (up to the radar's range); and judges from the degree of this difference whether the radar is currently in a full-FOV-limited scene.

The radar view-limited scene recognition method is suitable for radars mounted on movable equipment. The method fully considers the situations in which a full-view-limited scene may occur, effectively enables the radar to self-detect view-limited conditions, has strong robustness and low computational cost, improves the accuracy of the radar system's self-diagnosis function, and meets the real-time requirements of radar diagnosis applications.

In other embodiments, the amplitude distribution difference AmpMeanDiff, obtained above by subtracting the mean amplitudes of the designated distance segments before and after the amplitude dip point, may also be obtained by other calculations on those means; for example, taking the system noise as a reference value, computing the ratio of each segment's mean amplitude to the reference value, and then taking the difference of the two ratios.

In other embodiments of step S101, the target detection spectrum information, obtained above by non-coherent accumulation of the Fourier transform results of the receiving-channel signals, may alternatively be obtained by coherent accumulation of those results.

In practical applications, the method is stored in a storage medium in the form of instructions, and the storage medium is loaded into a vehicle-mounted device or other electronic device provided with a processor; the device can then call the instructions in the storage medium through the processor to realize the radar view-limited scene recognition method.

While the invention has been described in conjunction with the specific embodiments set forth above, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the appended claims.
