Road environment comprehensive cognition method based on multi-sensor information fusion

Document No.: 1963156 · Published: 2021-12-14

Abstract: This invention, "Road environment comprehensive cognition method based on multi-sensor information fusion", was created by 金连成, 汪瑾, 王爱伶 and 尹成俊 on 2021-08-20. It discloses a road environment comprehensive cognition method based on multi-sensor information fusion, comprising: S1, configuring millimeter-wave radar parameters, completing distance and angle measurement from the echo data, displaying moving targets with a frame-difference method, and, after preliminarily separating the moving vehicles on the road, performing clutter suppression with constant false-alarm detection to determine each vehicle's position and acquire the traffic-flow information in each detection frame; S2, based on the range-gate positions acquired in S1, associating a speed one to one with the vehicle at each position; S3, calibrating the functional relationship between the millimeter-wave radar measurements and visibility; S4, performing vehicle identification, counting and situation display on the monitoring data, and judging in real time whether congestion, an accident or road icing has occurred; S5, configuring a visibility sensor; S6, configuring a six-element meteorological sensor; and S7, performing multi-sensor data fusion with a convolutional neural network to realize comprehensive cognitive judgment of the road environment.

1. A road environment comprehensive cognition method based on multi-sensor information fusion is characterized by comprising the following steps:

S1, configuring millimeter-wave radar parameters, completing distance and angle measurement from the echo data, displaying moving targets with a frame-difference method, preliminarily separating each moving vehicle on the road, performing clutter suppression with constant false-alarm detection, determining the position of each vehicle, and acquiring the traffic-flow information in each detection frame; performing track association on multiple targets across multiple frames to obtain the traveling track of each vehicle;

S2, based on the range-gate position of each vehicle acquired in S1, acquiring real-time traveling-speed information of the vehicles through Fourier transforms in the range dimension and the Doppler dimension of the raw radar echo, and associating the speed one to one with the vehicle at each position;

S3, calibrating the functional relationship between the millimeter-wave radar measurement data and visibility;

S4, configuring a digital camera and, in combination with a digital image processing algorithm, carrying out vehicle identification, counting and situation display on the monitoring data, and judging in real time whether congestion, an accident or road icing has occurred;

S5, configuring a visibility sensor, transmitting visibility data to an upper computer in real time, and displaying abnormal visibility data in real time;

S6, configuring a six-element meteorological sensor, and uploading wind-speed, temperature, humidity, air-pressure, rainfall and illumination data in real time;

and S7, constructing a data set from the multi-sensor data in S1 to S6, and performing multi-sensor data fusion with a convolutional neural network to realize comprehensive cognitive judgment of the road environment.

2. The method for comprehensively recognizing the road environment based on the multi-sensor information fusion as claimed in claim 1, wherein the millimeter wave radar parameters are configured in the step S1, the distance measurement and the angle measurement are completed according to the echo data, the moving target is displayed by using a frame difference method, after each moving vehicle on the road is preliminarily separated, clutter suppression is performed by using constant false alarm detection, the position of the vehicle is determined, and the traffic flow information in each detection frame is obtained; performing track association on multiple targets in multiple frames to obtain a vehicle travelling track, specifically comprising the following steps:

configuring millimeter wave radar parameters:

R_res = c / (2B)

where R_res is the range resolution, c is the speed of light in vacuum (3×10⁸ m/s), and B is the radar bandwidth;

obtaining the difference-frequency (beat) signal S_b of the echo signal and the transmitted signal after a low-pass filter; a Fourier transform along the range direction performs de-chirp processing to obtain the beat frequency f_IF, and the target distance is calculated through the frequency-to-distance conversion:

d = c·f_IF / (2k)

where d is the target distance and k is the frequency-modulation slope;

calculating the angle measurement with the Capon algorithm, which solves the optimization problem:

min_w P(w) = w^H·R·w,  subject to  w^H·a(θ) = 1

where w is the weight vector, R is the covariance matrix of the signals received by the radar antenna, P(w) = w^H·R·w is the average output power, and a(θ) is the steering (direction) vector of the source at direction of arrival θ;

the Capon algorithm minimizes the power contributed by noise and any interference from non-θ directions while keeping the signal power in the observation direction θ constant. The optimal weight vector w_CAP is solved by the Lagrange-multiplier method:

w_CAP = R⁻¹·a(θ) / (a^H(θ)·R⁻¹·a(θ))

where a^H(θ) is the conjugate transpose of a(θ); combining the constraint yields the spatial spectrum containing each target:

P_CAP(θ) = 1 / (a^H(θ)·R⁻¹·a(θ))

After each frame of data is subjected to spatial spectrum calculation, a frame difference method is adopted for displaying moving targets, after each moving vehicle on a road is preliminarily separated, a unit average constant false alarm detector is adopted for carrying out clutter suppression, the position of the vehicle is determined, and traffic flow information in each detection frame is obtained;

CA-CFAR is used to detect each point in the spatial spectrum with a fixed false-alarm probability P_fa. Taking the L cells around the cell under test as reference cells, recorded as c(l), the threshold factor α is:

α = L·(P_fa^(−1/L) − 1)

The detection threshold TH is then obtained as:

TH = α · (1/L)·Σ_{l=1}^{L} c(l)

and obtaining multi-target positions in multiple frames through threshold division, and performing track association according to extended Kalman filtering to obtain the advancing track of the vehicle.

3. The method for comprehensively recognizing the road environment based on the multi-sensor information fusion as claimed in claim 1, wherein in step S2, based on the position of the range gate where each vehicle is located, which is obtained in S1, the real-time traveling speed information of the vehicle is obtained in the original radar echo through fourier transform in the range dimension and the doppler dimension, and the speed is in one-to-one correspondence with the vehicle at each position, specifically comprising:

through Fourier transforms in the range dimension and the Doppler dimension of the raw radar echo, the frequency of the Doppler-dimension maximum at each range gate, i.e. the Doppler frequency f_d of each vehicle, is found; the real-time traveling speed of the vehicle is then obtained as v = λ·f_d/2, and the speed is associated one to one with the vehicle at each position.

4. The method for comprehensively recognizing the road environment based on the multi-sensor information fusion as claimed in claim 1, wherein the step S3 of calibrating the functional relationship between the millimeter wave radar measurement data and the visibility specifically comprises:

σ = Σ_i k_i·π·r_i²·n_i

where k_i is the scattering extinction coefficient, r_i is the particle radius, and n_i is the number of particles of radius r_i;

the radar reflectivity factor Z is related to r_i and n_i by:

Z = Σ_i n_i·(2r_i)⁶

5. the method for comprehensively recognizing the road environment based on the multi-sensor information fusion as claimed in claim 1, wherein the image processing algorithm in the step S4 includes:

known objects are searched for with the OpenCV (Open Source Computer Vision) library through its Features2D and Homography modules, moving targets in the video images are identified, vehicle and pedestrian targets in the video are classified, vehicle identification, counting and situation display of the monitoring data are completed, and whether congestion or an accident has occurred is judged in real time.

6. The method for comprehensively recognizing the road environment based on the multi-sensor information fusion as claimed in claim 1, wherein in the step S7, a data set is constructed according to the multi-sensor data in S1 to S6, and the multi-sensor data fusion is performed in combination with a convolutional neural network, so as to realize comprehensive recognition judgment of the road environment, which specifically comprises:

inputting the data set into a convolutional neural network for multi-sensor information fusion;

the convolutional neural network comprises 2 convolutional layers and 2 pooling layers, and a ReLU activation function is adopted;

the training labels comprise traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

7. The road environment comprehensive cognition method based on multi-sensor information fusion of claim 6, characterized in that: the convolutional-neural-network input data comprise the traffic flow, vehicle speed, round-the-clock visibility, and road water-accumulation, snow and icing conditions detected by the millimeter-wave radar; the video images from the video sensor; the visibility data from the visibility sensor; and the real-time six-element meteorological data (wind speed, temperature, humidity, air pressure, rainfall and illumination) from the six-element meteorological sensor;

the output data comprises traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

Technical Field

The invention belongs to the technical field of millimeter wave radar signal processing, and particularly relates to a road environment comprehensive cognition method based on multi-sensor information fusion.

Background

Road traffic accidents endanger personal safety and cause huge economic losses, and have long drawn attention from all sectors and departments of society.

At present, the measurement and cognition of the road environment mainly cover traffic flow, weather and meteorological information, traffic events, traffic-control information, construction information and congestion conditions. Different sensors measure and monitor each type of information, such as cameras, visibility meters and weather sensors. While the growing number of sensors yields richer and more accurate road-environment data, it also brings overlapping sensor functions, scattered equipment installation, diverse and inconsistent standards, asynchronous data transmission, and insufficient utilization of sensor data; in the development of smart roads there is a tendency to "emphasize construction while neglecting operation, maintenance and overall management". Moreover, no single sensor can achieve accurate cognition of road-environment information across varied practical situations and large-scale deployment.

Disclosure of Invention

The present invention aims to solve or improve the above-mentioned problems by providing a road environment comprehensive cognition method based on multi-sensor information fusion.

In order to achieve the purpose, the invention adopts the technical scheme that:

a road environment comprehensive cognition method based on multi-sensor information fusion comprises the following steps:

S1, configuring millimeter-wave radar parameters, completing distance and angle measurement from the echo data, displaying moving targets with a frame-difference method, preliminarily separating each moving vehicle on the road, performing clutter suppression with constant false-alarm detection, determining the position of each vehicle, and acquiring the traffic-flow information in each detection frame; performing track association on multiple targets across multiple frames to obtain the traveling track of each vehicle;

S2, based on the range-gate position of each vehicle acquired in S1, acquiring real-time traveling-speed information of the vehicles through Fourier transforms in the range dimension and the Doppler dimension of the raw radar echo, and associating the speed one to one with the vehicle at each position;

S3, calibrating the functional relationship between the millimeter-wave radar measurement data and visibility;

S4, configuring a digital camera and, in combination with a digital image processing algorithm, carrying out vehicle identification, counting and situation display on the monitoring data, and judging in real time whether congestion, an accident or road icing has occurred;

S5, configuring a visibility sensor, transmitting visibility data to an upper computer in real time, and displaying abnormal visibility data in real time;

S6, configuring a six-element meteorological sensor, and uploading wind-speed, temperature, humidity, air-pressure, rainfall and illumination data in real time;

and S7, constructing a data set from the multi-sensor data in S1 to S6, and performing multi-sensor data fusion with a convolutional neural network to realize comprehensive cognitive judgment of the road environment.

Further, configuring millimeter wave radar parameters in step S1, completing ranging and angle measurement according to echo data, displaying moving targets by using a frame difference method, preliminarily separating each moving vehicle on the road, performing clutter suppression by using constant false alarm detection, determining the position of the vehicle, and acquiring traffic flow information in each detection frame; performing track association on multiple targets in multiple frames to obtain a vehicle travelling track, specifically comprising the following steps:

configuring millimeter wave radar parameters:

R_res = c / (2B)

where R_res is the range resolution, c is the speed of light in vacuum (3×10⁸ m/s), and B is the radar bandwidth;

obtaining the difference-frequency (beat) signal S_b of the echo signal and the transmitted signal after a low-pass filter; a Fourier transform along the range direction performs de-chirp processing to obtain the beat frequency f_IF, and the target distance is calculated through the frequency-to-distance conversion:

d = c·f_IF / (2k)

where d is the target distance and k is the frequency-modulation slope;

calculating the angle measurement with the Capon algorithm, which solves the optimization problem:

min_w P(w) = w^H·R·w,  subject to  w^H·a(θ) = 1

where w is the weight vector, R is the covariance matrix of the signals received by the radar antenna, P(w) = w^H·R·w is the average output power, and a(θ) is the steering (direction) vector of the source at direction of arrival θ;

the Capon algorithm minimizes the power contributed by noise and any interference from non-θ directions while keeping the signal power in the observation direction θ constant. The optimal weight vector w_CAP is solved by the Lagrange-multiplier method:

w_CAP = R⁻¹·a(θ) / (a^H(θ)·R⁻¹·a(θ))

where a^H(θ) is the conjugate transpose of a(θ); combining the constraint yields the spatial spectrum containing each target:

P_CAP(θ) = 1 / (a^H(θ)·R⁻¹·a(θ))

After each frame of data is subjected to spatial spectrum calculation, a frame difference method is adopted for displaying moving targets, after each moving vehicle on a road is preliminarily separated, a unit average constant false alarm detector is adopted for carrying out clutter suppression, the position of the vehicle is determined, and traffic flow information in each detection frame is obtained;

CA-CFAR is used to detect each point in the spatial spectrum with a fixed false-alarm probability P_fa. Taking the L cells around the cell under test as reference cells, recorded as c(l), the threshold factor α is:

α = L·(P_fa^(−1/L) − 1)

The detection threshold TH is then obtained as:

TH = α · (1/L)·Σ_{l=1}^{L} c(l)

and obtaining multi-target positions in multiple frames through threshold division, and performing track association according to extended Kalman filtering to obtain the advancing track of the vehicle.

Further, in step S2, based on the position of the range gate where each vehicle is located, which is obtained in S1, the real-time traveling speed information of the vehicle is obtained through fourier transform in the range dimension and the doppler dimension in the original radar echo, and the speed is in one-to-one correspondence with the vehicle at each position, which specifically includes:

through Fourier transforms in the range dimension and the Doppler dimension of the raw radar echo, the frequency of the Doppler-dimension maximum at each range gate, i.e. the Doppler frequency f_d of each vehicle, is found; the real-time traveling speed of the vehicle is then obtained as v = λ·f_d/2, and the speed is associated one to one with the vehicle at each position.

Further, in step S3, the step of calibrating the functional relationship between the millimeter wave radar measurement data and the visibility specifically includes:

σ = Σ_i k_i·π·r_i²·n_i

where k_i is the scattering extinction coefficient, r_i is the particle radius, and n_i is the number of particles of radius r_i;

the radar reflectivity factor Z is related to r_i and n_i by:

Z = Σ_i n_i·(2r_i)⁶

further, the image processing algorithm in step S4 includes:

known objects are searched for with the OpenCV (Open Source Computer Vision) library through its Features2D and Homography modules, moving targets in the video images are identified, vehicle and pedestrian targets in the video are classified, vehicle identification, counting and situation display of the monitoring data are completed, and whether congestion or an accident has occurred is judged in real time.

Further, in step S7, a data set is constructed according to the multi-sensor data in S1 to S6, and multi-sensor data fusion is performed in combination with the convolutional neural network, so as to realize comprehensive cognitive judgment on the road environment, specifically including:

inputting the data set into a convolutional neural network for multi-sensor information fusion;

the convolutional neural network comprises 2 convolutional layers and 2 pooling layers, and a ReLU activation function is adopted;

the training labels comprise traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

Further, the convolutional-neural-network input data comprise the traffic flow, vehicle speed, round-the-clock visibility, and road water-accumulation, snow and icing conditions detected by the millimeter-wave radar; the video images from the video sensor; the visibility data from the visibility sensor; and the real-time six-element meteorological data (wind speed, temperature, humidity, air pressure, rainfall and illumination) from the six-element meteorological sensor;

the output data comprises traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

The road environment comprehensive cognition method based on multi-sensor information fusion has the following beneficial effects:

the invention designs a road environment comprehensive cognition device which comprehensively utilizes various sensors in a changeable external environment, comprehensively utilizes various environmental information based on the information fusion of the various sensors, can work all day long and all weather, and has low cost and high accuracy.

Drawings

FIG. 1 is a schematic block diagram of a road environment comprehensive cognition method based on multi-sensor information fusion.

Detailed Description

The following description of embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.

Referring to fig. 1, the road environment comprehensive cognition method based on multi-sensor information fusion in the scheme includes the following steps:

step S1, configuring millimeter wave radar parameters, completing distance measurement and angle measurement according to echo data, displaying moving targets by adopting a frame difference method, preliminarily separating each moving vehicle on the road, then performing clutter suppression by adopting constant false alarm detection, determining the position of the vehicle, and acquiring traffic flow information in each detection frame; performing track association on multiple targets in multiple frames to obtain a vehicle travelling track, wherein the track association specifically comprises the following steps:

determining radar configuration parameters, wherein the radar bandwidth B and the number N of sampling points in a single pulse repetition period are calculated according to the following formula:

B = c / (2·R_res)

where R_res is the range resolution and c is the speed of light in vacuum (3×10⁸ m/s);

The beat signal S_b of the echo signal and the transmitted signal is obtained after a low-pass filter; a Fourier transform (FFT) along the range direction over fast time realizes de-chirp processing and yields the beat frequency f_IF, and high-precision ranging is completed through the frequency-to-distance conversion:

d = c·f_IF / (2k)

where d is the target distance and k is the frequency-modulation slope;
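As a rough numerical illustration of the de-chirp ranging step (all parameter values below are assumptions for the sketch, not taken from the patent):

```python
import numpy as np

c = 3e8            # speed of light (m/s)
B = 150e6          # radar bandwidth (Hz) -> R_res = c/(2B) = 1 m
T = 50e-6          # chirp duration (s)
k = B / T          # frequency-modulation slope (Hz/s)
fs = 10e6          # ADC sampling rate (Hz)
N = 512            # samples per chirp

d_true = 120.0                        # simulated target range (m)
f_if = 2 * k * d_true / c             # beat frequency produced by that range
t = np.arange(N) / fs
s_b = np.exp(2j * np.pi * f_if * t)   # ideal beat signal after mixing + LPF

# Range-direction FFT (de-chirp): the peak bin gives the beat frequency
spec = np.abs(np.fft.fft(s_b))
f_est = np.argmax(spec[:N // 2]) * fs / N
d_est = c * f_est / (2 * k)           # d = c * f_IF / (2k)
print(round(d_est, 1))
```

The estimate lands within one range bin (c/(2B) = 1 m here) of the simulated target.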

High-precision angle measurement is then completed with the Capon algorithm, which solves the optimization problem:

min_w P(w) = w^H·R·w,  subject to  w^H·a(θ) = 1

where w is the weight vector, R is the covariance matrix of the signals received by the radar antenna, and a(θ) is the steering vector of the source at direction of arrival θ.

The optimal weight vector is solved by the Lagrange-multiplier method:

w_CAP = R⁻¹·a(θ) / (a^H(θ)·R⁻¹·a(θ))

Substituting this into the constraint yields the spatial spectrum containing each target:

P_CAP(θ) = 1 / (a^H(θ)·R⁻¹·a(θ))
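A minimal sketch of the Capon spatial-spectrum search for a uniform linear array; the array size, element spacing, snapshot count and noise level are illustrative assumptions, not patent parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
M, snaps = 8, 200              # antennas, snapshots
d_spacing = 0.5                # element spacing in wavelengths
theta_true = 20.0              # simulated source direction (degrees)

def steering(theta_deg):
    """a(theta): ULA steering vector toward direction theta."""
    phase = 2j * np.pi * d_spacing * np.arange(M) * np.sin(np.radians(theta_deg))
    return np.exp(phase)

# Simulated snapshots: one source plus white noise
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(steering(theta_true), s) + 0.1 * (
    rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
R = X @ X.conj().T / snaps               # sample covariance matrix
R_inv = np.linalg.inv(R)

# P_CAP(theta) = 1 / (a^H(theta) R^-1 a(theta)), scanned over a grid
grid = np.arange(-90, 90.5, 0.5)
P = np.array([1.0 / np.real(steering(th).conj() @ R_inv @ steering(th))
              for th in grid])
theta_est = grid[np.argmax(P)]
print(theta_est)
```

The spectrum peaks at the simulated direction of arrival, matching the constrained-minimization interpretation above.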

After each frame of data is subjected to the operation, moving target display is realized by using a frame difference method, after each moving vehicle on a road is preliminarily separated, clutter suppression is realized by using a unit average constant false alarm rate detector (CA-CFAR), the position of the vehicle is determined, and traffic flow information in each detection frame is obtained.

CA-CFAR is used to detect each point in the spatial spectrum with a fixed false-alarm probability P_fa. Taking the L cells around the cell under test as reference cells, recorded as c(l), the threshold factor is:

α = L·(P_fa^(−1/L) − 1)

The detection threshold is then found to be:

TH = α · (1/L)·Σ_{l=1}^{L} c(l)

and obtaining multi-target positions in multiple frames through threshold division, and then performing track association through extended Kalman filtering to obtain the advancing track of the vehicle.
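The cell-averaging CFAR detection described above can be sketched over a one-dimensional power profile; the window length, guard-cell count and P_fa are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 16                                 # number of reference cells
P_fa = 1e-4                            # fixed false-alarm probability
alpha = L * (P_fa ** (-1.0 / L) - 1)   # threshold factor

x = rng.exponential(1.0, 300)          # exponential clutter power profile
x[150] = 60.0                          # injected target

half, guard = L // 2, 2
hits = []
for i in range(half + guard, len(x) - half - guard):
    # L reference cells split around the cell under test, excluding guard cells
    ref = np.r_[x[i - guard - half:i - guard], x[i + guard + 1:i + guard + 1 + half]]
    TH = alpha * ref.mean()            # TH = alpha * mean of c(l)
    if x[i] > TH:
        hits.append(i)
print(hits)
```

The injected target is detected while the clutter-only cells stay below the adaptive threshold at the chosen P_fa.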

Step S2, based on the position of the range gate where each vehicle is located obtained in S1, obtaining real-time traveling speed information of the vehicle through fourier transform in the range dimension and the doppler dimension in the original radar echo, and corresponding the speed to the vehicle at each position one to one, which specifically includes:

in the raw radar echo, Fourier transforms in the range dimension and the Doppler dimension locate the frequency of the Doppler-dimension maximum at each range, i.e. the Doppler frequency f_d of each vehicle; the real-time traveling speed v = λ·f_d/2 is thereby acquired, realizing high-precision real-time speed measurement, and the speed is associated one to one with the vehicle at each position.
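A sketch of the slow-time (Doppler-dimension) FFT speed estimate at a single range gate; the 77 GHz carrier, chirp interval and frame length are assumptions for the illustration:

```python
import numpy as np

c = 3e8
f0 = 77e9                  # assumed carrier frequency (Hz)
lam = c / f0               # wavelength
Tc = 100e-6                # chirp repetition interval (s)
N_chirps = 128

v_true = 15.0                              # simulated vehicle speed (m/s)
f_d = 2 * v_true / lam                     # Doppler frequency f_d = 2v/lambda
m = np.arange(N_chirps)
# Slow-time phase progression at the vehicle's range gate
sig = np.exp(2j * np.pi * f_d * Tc * m)

# Doppler-dimension FFT: peak bin -> Doppler frequency -> speed
spec = np.abs(np.fft.fft(sig))
f_est = np.argmax(spec) / (N_chirps * Tc)
v_est = f_est * lam / 2                    # v = lambda * f_d / 2
print(round(v_est, 1))
```

The velocity resolution here is λ/(2·N_chirps·Tc), so the estimate falls within one Doppler bin of the simulated speed.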

Step S3, calibrating the functional relationship between the millimeter wave radar measurement data and the visibility, which specifically comprises:

In order to obtain visibility parameters, the relationship between the millimeter-wave radar measurement data and visibility is first calibrated. Visibility is uniquely determined by the extinction coefficient σ, whose value depends on the number and size of the particles in the air and is expressed as:

σ = Σ_i k_i·π·r_i²·n_i

where k_i denotes the scattering extinction coefficient, r_i the particle radius, and n_i the number of particles of radius r_i. The radar reflectivity factor Z is related to r_i and n_i by:

Z = Σ_i n_i·(2r_i)⁶
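A small numerical sketch of these relations with an invented droplet-size distribution (the k_i, r_i, n_i values are purely illustrative); the standard Koschmieder relation V ≈ 3.912/σ, an assumption not stated in the text, converts the extinction coefficient to a visibility distance:

```python
import numpy as np

r = np.array([2e-6, 5e-6, 10e-6])      # particle radii r_i (m), invented
n = np.array([4e8, 1e8, 2e7])          # number concentrations n_i (1/m^3), invented
k = np.array([2.0, 2.0, 2.0])          # scattering extinction coefficients k_i, invented

# Extinction coefficient: sigma = sum_i k_i * pi * r_i^2 * n_i
sigma = np.sum(k * np.pi * r**2 * n)

# Koschmieder relation (2% contrast threshold): V ~ 3.912 / sigma
V = 3.912 / sigma

# Radar reflectivity factor: Z = sum_i n_i * (2 r_i)^6
Z = np.sum(n * (2 * r) ** 6)
print(sigma, V, Z)
```

Denser or larger droplets raise both σ (shrinking V) and Z, which is what makes Z usable as a visibility proxy after calibration.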

The radar reflectivity factor can therefore serve as a basis for characterizing visibility. The relationship between the radar reflectivity factor and visibility is calibrated indoors, or outdoors in an artificially simulated environment: the millimeter-wave radar is placed at a fixed position, a corner reflector is placed at a distance from the radar, the transmitted radar waves pass through the simulated air environment to illuminate the corner reflector, the reflected echoes return through the simulated air environment to the radar, and the radar reflectivity factor is recorded for each simulated air condition. Road water-accumulation, snow and icing conditions can further be judged from how the road surface reflects the echoes.

Step S4, configuring a digital camera and, in combination with a digital image processing algorithm, carrying out vehicle identification, counting and situation display on the monitoring data, and judging in real time whether congestion, an accident or road icing has occurred, which specifically includes:

A digital camera is installed and its angle adjusted; it is connected to and configured on a computer port, the monitoring data are transmitted with both local and remote transceiving, and the monitoring video is displayed in real time on the upper computer.

In combination with a digital image processing algorithm, known objects are searched for with the OpenCV (Open Source Computer Vision) library through its Features2D and Homography modules, moving targets in the video images are identified, vehicle and pedestrian targets in the video are classified, vehicle identification, counting and situation display of the monitoring data are completed, and congestion, accident and road-icing conditions are judged in real time.

When visibility is extremely poor and illumination is severely insufficient, the millimeter-wave radar data from steps S1 to S3 are relied on instead.

Step S5, installing and configuring the visibility sensor, connecting and configuring the computer port to realize local and remote data transceiving, transmitting the data in real time to the upper-computer software to provide accurate visibility data, and displaying the data promptly when visibility is abnormal.

Step S6, installing and configuring the six-element meteorological sensor, connecting and configuring the computer port to realize local and remote data transceiving, and transmitting the data to the upper computer in real time to provide real-time wind-speed, temperature, humidity, air-pressure, rainfall and illumination data.

Step S7, the data set is constructed from the multi-sensor data obtained in steps S1 to S6.

The parameters in the data set comprise the traffic flow, vehicle speed, round-the-clock visibility, and road water-accumulation, snow and icing conditions detected by the millimeter-wave radar; the video images from the video sensor; the visibility data from the visibility meter; and the real-time six-element meteorological data (wind speed, temperature, humidity, air pressure, rainfall and illumination) from the six-element meteorological sensor. These are input into a convolutional neural network to realize multi-sensor information fusion.

The convolutional neural network includes 2 convolutional layers, 2 pooling layers, using the ReLU activation function.
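A minimal NumPy forward-pass sketch of the stated architecture (two convolution layers, two pooling layers, ReLU); the input size, channel counts, random weights and the final three-class readout are illustrative assumptions, not the patent's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D convolution of x (H, W, Cin) with kernels w (kh, kw, Cin, Cout)."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling (H and W assumed even)."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

x = rng.standard_normal((18, 18, 4))            # fused multi-sensor feature "image"
w1 = rng.standard_normal((3, 3, 4, 8)) * 0.1    # conv1 kernels
w2 = rng.standard_normal((3, 3, 8, 16)) * 0.1   # conv2 kernels
h = maxpool2(relu(conv2d(x, w1)))               # conv1 + ReLU + pool1
h = maxpool2(relu(conv2d(h, w2)))               # conv2 + ReLU + pool2
logits = h.reshape(-1) @ rng.standard_normal((h.size, 3))  # 3 assumed event classes
print(h.shape, logits.shape)
```

In practice such a network would be trained against the traffic-event and meteorological labels described below rather than run with random weights.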

The training labels comprise traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

The convolutional-neural-network input data comprise the traffic flow, vehicle speed, round-the-clock visibility, and road water-accumulation, snow and icing conditions detected by the millimeter-wave radar; the video images from the video sensor; the visibility data from the visibility sensor; and the real-time six-element meteorological data (wind speed, temperature, humidity, air pressure, rainfall and illumination) from the six-element meteorological sensor.

The output data comprises traffic events and meteorological information, and the comprehensive cognitive judgment on the road environment is completed by combining the directly acquired road environment data.

The cross-layer connectivity of an artificial neural network closely resembles a data-fusion model, and the network is a fully parallel structure: a very-large-scale parallel information-fusion processing system that can fuse multiple input signals. After training, the model can quickly compute the corresponding visibility data from the multi-sensor data obtained in subsequent measurements.

The method is based on the visibility sensor, the meteorological 6 element sensor, the millimeter wave radar and the video sensor, and a neural network model for road environment comprehensive cognition is constructed so as to realize comprehensive cognition judgment on the road environment.

The millimeter-wave radar has good environmental adaptability, wide coverage, high resolution and strong penetration; it works around the clock in all weather, is well suited to road-environment cognition, provides long-range, high-precision traffic-flow, driving-speed and road water/icing detection, and can also provide visibility information.

The video sensor, based mainly on digital images and video streams, is strongly affected by illumination and weather conditions, but its view is closest to the human eye; its recognition precision and accuracy are generally higher than those of the millimeter-wave radar sensor, it can intuitively present traffic flow, weather conditions and road water/icing conditions, and it is used together with the millimeter-wave radar sensor to complete comprehensive detection of the current road environment.

Meanwhile, the visibility meter and the six-element meteorological sensor together provide accurate meteorological measurements, improving detection accuracy, final-recognition accuracy and the overall cognitive effect.

According to the invention, comprehensive cognitive judgment of the road environment is finally completed through this sensor data-fusion method, providing traffic-management departments with information such as speed limits, environmental risks, traffic events and their possible causes, and thereby realizing automatic, intelligent and highly accurate road management.

While the embodiments of the invention have been described in detail in connection with the accompanying drawings, it is not intended to limit the scope of the invention. Various modifications and changes may be made by those skilled in the art without inventive step within the scope of the appended claims.
