Data processing method and device, electronic equipment and readable storage medium

Document No.: 1875563  Publication date: 2021-11-23

Reading note: this technique, "Data processing method and device, electronic equipment and readable storage medium", was designed and created by 封巍 on 2021-07-23. Its main content: the embodiments of the disclosure provide a data processing method and device, an electronic device, and a readable storage medium, wherein the method includes: receiving first point cloud data from a lidar; determining relative position information of the lidar and the vehicle on which the lidar is mounted; determining target point cloud data according to the first point cloud data and the relative position information of the lidar and the vehicle, the target point cloud data comprising first position information of each point cloud point in the first point cloud data relative to the vehicle; and performing data rendering processing on the target point cloud data to obtain a target video slice and storing the target video slice. The embodiments of the disclosure can improve the efficiency of processing the point cloud data collected by a lidar mounted on an autonomous vehicle, reduce the playback delay of the point cloud data and the performance requirements on the client, improve the flexibility and real-time performance of downloading and playing the point cloud data at the client, and facilitate real-time perception of the environment around the autonomous vehicle.

1. A data processing method, applied to a cloud server, the method comprising the following steps:

receiving first point cloud data from a laser radar;

determining relative position information of the laser radar and a vehicle on which the laser radar is mounted;

determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, wherein the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle;

and performing data rendering processing on the target point cloud data to obtain a target video slice and storing the target video slice.

2. The method of claim 1, wherein the data rendering processing of the target point cloud data to obtain and store a target video slice comprises:

respectively performing data rendering processing on the target point cloud data based on a first preset visual angle and a second preset visual angle to obtain a first target video corresponding to the first preset visual angle and a second target video corresponding to the second preset visual angle;

and respectively carrying out slicing processing on the first target video and the second target video to obtain a first target video slice corresponding to the first preset visual angle and a second target video slice corresponding to the second preset visual angle, and storing the first target video slice and the second target video slice.

3. The method of claim 1, wherein determining target point cloud data from the first point cloud data and relative position information of the lidar and the vehicle comprises:

parsing the first point cloud data to obtain second point cloud data, wherein the second point cloud data comprises second position information of each point cloud point in the first point cloud data relative to the laser radar;

and determining target point cloud data according to the relative position information of the laser radar and the vehicle and the second point cloud data.

4. The method of claim 1, wherein after determining target point cloud data from the first point cloud data and relative position information of the lidar and the vehicle, the method further comprises:

determining motion information of the vehicle;

performing motion compensation processing on the target point cloud data according to the motion information of the vehicle to obtain the target point cloud data after motion compensation;

the data rendering processing is carried out on the target point cloud data to obtain and store a target video slice, and the method comprises the following steps:

and performing data rendering processing on the target point cloud data after motion compensation to obtain a target video slice and storing the target video slice.

5. The method of claim 3, wherein the parsing the first point cloud data to obtain second point cloud data comprises:

parsing the first point cloud data to obtain target azimuth data and target unit data of each point cloud point in the first point cloud data;

determining the horizontal angle offset and the vertical angle of the laser radar;

determining the horizontal angle of each point cloud point relative to the laser radar according to the horizontal angle offset of the laser radar and the target azimuth data of each point cloud point;

calculating the product of target unit data of each point cloud point and a preset constant to obtain the ranging distance between each point cloud point and the laser radar;

and determining second point cloud data according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar and the ranging distance between each point cloud point and the laser radar.

6. The method of claim 5, wherein determining second point cloud data from the vertical angle of the lidar, the horizontal angle of each point cloud point relative to the lidar, and the ranging distance between each point cloud point and the lidar comprises:

determining a polar coordinate value of each point cloud point in a polar coordinate system with the laser radar as an origin according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar and the ranging distance between each point cloud point and the laser radar;

and converting the polar coordinate value of each point cloud point into a rectangular coordinate value to obtain second point cloud data.

7. The method of claim 3, wherein determining target point cloud data from the relative position information of the lidar and the vehicle and the second point cloud data comprises:

and carrying out translation and rotation processing on the second position information of each point cloud point in the second point cloud data based on the relative position information of the laser radar and the vehicle to obtain target point cloud data.

8. A data processing apparatus, characterized in that the apparatus comprises:

the first point cloud data receiving module is used for receiving first point cloud data from the laser radar;

a position determination module for determining relative position information of the lidar and a vehicle on which the lidar is mounted;

the target point cloud data determining module is used for determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, and the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle;

and the data rendering module is used for performing data rendering processing on the target point cloud data to obtain and store a target video slice.

9. An electronic device, comprising:

a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the data processing method according to any one of claims 1 to 7 when executing the program.

10. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the data processing method according to any one of claims 1 to 7.

Technical Field

Embodiments of the present disclosure relate to the field of computer processing technologies, and in particular, to a data processing method and apparatus, an electronic device, and a readable storage medium.

Background

Lidar is one of the sensors commonly used in autonomous driving and plays an important role in perception, localization, planning, and decision-making. Lidars can be divided into single-line and multi-line lidars according to the number of laser beams; common multi-line lidars include 4-line, 16-line, 32-line, and 64-line models. The size of one frame of point cloud data generated by the lidar (all the data collected during one 360-degree rotation) varies with the number of beams.

Taking the field of unmanned driving as an example, an unmanned device is usually equipped with 32-line or 64-line lidars at the front and with lower-line-count lidars at the oblique front and oblique rear, and the point cloud data generated by these lidars accounts for about 40% of the device's total data volume.

For the point cloud data generated by a lidar, a client typically has to download the data from the lidar, perform a series of processing steps on it, generate a video, and finally play the lidar data on a display in video form. When the lidar data is played on a different client, the downloading, processing, and display steps must all be executed again. Because the lidar data volume is large, downloading is time-consuming, and playback can begin only after the download completes, which increases the playback delay of the lidar data. In addition, because the processing pipeline involves heavy computation, it places high performance requirements on the client; an ordinary client often cannot meet the lidar's data processing demands, and the data processing efficiency is low.

Disclosure of Invention

Embodiments of the present disclosure provide a data processing method and apparatus, an electronic device, and a readable storage medium, which can improve processing efficiency of point cloud data acquired by a laser radar, reduce playing delay of the point cloud data and performance requirements on a client, and improve flexibility and real-time performance of downloading and playing the point cloud data by the client.

According to a first aspect of embodiments of the present disclosure, there is provided a data processing method, the method including:

receiving first point cloud data from a laser radar;

determining relative position information of the laser radar and a vehicle on which the laser radar is mounted;

determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, wherein the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle;

and performing data rendering processing on the target point cloud data to obtain a target video slice and storing the target video slice.

According to a second aspect of embodiments of the present disclosure, there is provided a data processing apparatus comprising:

the first point cloud data receiving module is used for receiving first point cloud data from the laser radar;

a position determination module for determining relative position information of the lidar and a vehicle on which the lidar is mounted;

the target point cloud data determining module is used for determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, and the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle;

and the data rendering module is used for performing data rendering processing on the target point cloud data to obtain and store a target video slice.

According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:

a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the aforementioned data processing method when executing the program.

According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned data processing method.

The embodiment of the disclosure provides a data processing method, a data processing device, an electronic device and a readable storage medium, wherein the method comprises the following steps: receiving first point cloud data from a laser radar; determining relative position information of the laser radar and a vehicle on which the laser radar is mounted; determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, wherein the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle; and performing data rendering processing on the target point cloud data to obtain a target video slice and storing the target video slice.

According to the embodiments of the disclosure, the point cloud data collected by the lidar can be processed on the cloud server. Compared with having the client execute the complex processing of the point cloud data, this saves the time cost of the client downloading the point cloud data, reduces the data processing delay, and improves data processing efficiency. In addition, the data processing method provided by the disclosure lowers the performance requirements on the client: the client no longer performs complex processing on the point cloud data and only needs to download video slices directly from the cloud server for playback. Compared with generating a complete video that the client must download in full before playing, providing video slices reduces the playback delay of the point cloud data. The video slices also support the playback needs of multiple scenarios; for example, a client can download and play slices in real time, or skip some slices and download only the ones it needs, which improves the flexibility and real-time performance of downloading and playing the point cloud data and facilitates real-time perception of the environment around the vehicle.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments of the present disclosure will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.

FIG. 1 shows a flow diagram of data processing method steps in one embodiment of the present disclosure;

FIG. 2 shows a schematic rectangular coordinate system with lidar as an origin in an embodiment of the disclosure;

FIG. 3a shows a schematic diagram of a lidar coordinate system and a vehicle coordinate system in one embodiment of the disclosure;

FIG. 3b is a schematic diagram illustrating the result of coordinate system translation in one embodiment of the present disclosure;

FIG. 3c is a schematic diagram illustrating the coordinate system rotation results in one embodiment of the present disclosure;

FIG. 4 shows a block diagram of a data processing apparatus in an embodiment of the present disclosure;

FIG. 5 shows a block diagram of an electronic device in an embodiment of the disclosure.

Detailed Description

Technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present disclosure, belong to the protection scope of the embodiments of the present disclosure.

Example one

Referring to fig. 1, a flowchart illustrating steps of a data processing method in an embodiment of the present disclosure is shown, specifically as follows:

step 101, receiving first point cloud data from a laser radar.

Step 102, determining relative position information of the laser radar and a vehicle on which the laser radar is installed.

Step 103, determining target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, wherein the target point cloud data comprises first position information of each point cloud point in the first point cloud data relative to the vehicle.

And 104, performing data rendering processing on the target point cloud data to obtain a target video slice and storing the target video slice.

The data processing method provided by the embodiments of the disclosure is applied to a cloud server. The cloud server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, cloud communication, network services, middleware services, content delivery networks (CDN), big data, and artificial intelligence platforms.

The lidar may be a mechanical lidar, a solid-state lidar, or the like, and multiple lidars can be installed on the same vehicle; the embodiments of the disclosure do not limit the type or number of the lidars or their mounting positions on the vehicle. In the embodiments provided by the disclosure, the lidar collects point cloud data corresponding to the vehicle's surroundings, which may include static obstacles, dynamic obstacles, lanes, and so on. A mechanical lidar is taken as an example here: its laser beams are arranged vertically to form a scanning plane, and a mechanical rotating part rotates this plane to scan the surroundings as the vehicle travels. Each full rotation produces a three-dimensional image, which may be referred to as one frame of point cloud data.

It should be noted that, in the embodiments of the disclosure, the vehicle on which the lidar is installed may be an ordinary vehicle or an unmanned device; unmanned devices mainly include intelligent unmanned equipment such as unmanned vehicles (autonomous vehicles) and unmanned aerial vehicles. A positioning module for determining the vehicle's current position, such as a Global Positioning System (GPS) receiver, may be mounted on the vehicle. Alternatively, the vehicle can be provided with an image sensor and determine its current position by means of computer vision. The embodiments of the disclosure do not specifically limit how the current position of the vehicle is determined.

In the embodiments of the disclosure, the cloud server (hereinafter referred to as the server) acquires the point cloud data generated by the lidar at the current moment, i.e. the first point cloud data in the disclosure, and then, for each point cloud point included in the first point cloud data, determines the point's first position information relative to the vehicle based on the relative position between the lidar and the vehicle on which the lidar is mounted.

The relative position between the lidar and the vehicle on which it is mounted may be determined based on the lidar's mounting position on the vehicle and the vehicle's current position, which can be obtained by the positioning module or image sensor described above.

In addition, the vehicle may further include a communication device, for example a Wireless Fidelity (WiFi) module or a Bluetooth module, configured to send information such as the first point cloud data acquired by the lidar, the installation information of the lidar, and the current position of the vehicle to the server. From this received information, the server determines the relative position information between the lidar and the vehicle on which it is mounted, further determines the first position information of each point cloud point in the first point cloud data relative to the vehicle, and obtains the target point cloud data.

Of course, a communication device can also be installed in the lidar itself, in which case the lidar sends the first point cloud data it collects to the server directly, and the vehicle only needs to send the server the installation information of the lidar and information such as the vehicle's current position. The embodiments of the disclosure do not specifically limit how the server obtains each item of data.

Point cloud data is three-dimensional, while a client screen usually displays two-dimensional data. Data rendering therefore converts the three-dimensional target point cloud data into two-dimensional data, from which target video slices are generated and stored for clients to download and play. The target video slices can be stored at any quality tier, such as standard definition, high definition, ultra-high definition, or Blu-ray. Whichever tier is used, the data volume of the target video slices is far smaller than that of the original point cloud data collected by the lidar (i.e. the first point cloud data in the disclosure). Compared with the prior art, in which a client directly downloads the original point cloud data and runs a series of processing steps to obtain a point cloud video, a client in the embodiments of the disclosure only needs to download the target video slices stored by the server, which greatly reduces the client's download volume, lowers the data processing delay, and improves data processing efficiency. Moreover, since the client no longer processes the point cloud data and only needs video playback capability, the performance requirements on the client are reduced.

In addition, compared with generating a complete video that the client must download in full before playing, providing video slices reduces the playback delay of the point cloud data. The video slices also support the playback needs of multiple scenarios: a client can download and play slices in real time, or skip some slices and download only the ones it needs, which improves the flexibility and real-time performance of downloading and playing the point cloud data.

In an optional embodiment of the present disclosure, the performing data rendering processing on the target point cloud data in step 104 to obtain and store a target video slice includes:

step S11, respectively performing data rendering processing on the target point cloud data based on a first preset visual angle and a second preset visual angle to obtain a first target video corresponding to the first preset visual angle and a second target video corresponding to the second preset visual angle;

step S12, respectively slicing the first target video and the second target video to obtain a first target video slice corresponding to the first preset view angle and a second target video slice corresponding to the second preset view angle, and storing the first target video slice and the second target video slice.

The first preset view angle and the second preset view angle are the angles from which the point cloud points in the data collected by the lidar are observed. Together, the two view angles cover most usage scenarios of the lidar and provide relatively complete views of each point cloud point. For example, the first preset view angle may be a top-down view or a vehicle-following view: if the first preset view angle is the top-down view, the second preset view angle is the vehicle-following view, and vice versa.

If each client downloaded the point cloud data collected by the lidar and performed its own data processing, such as rendering, different clients could produce different rendering results, and the video generated for the same point cloud point at the same view angle might differ, causing data deviations when the same point cloud data is displayed on different clients and hindering data analysis. Therefore, in the embodiments of the disclosure, the server performs uniform rendering of the point cloud data, which guarantees that the video slices obtained by different clients for the same point cloud point at the same view angle are consistent and facilitates further analysis of the point cloud data.
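As a concrete illustration of the slicing in step S12: the disclosure does not name a container format or streaming protocol, so re-muxing each rendered target video into HLS segments with ffmpeg is only one plausible sketch of how the stored slices could be produced.

import subprocess

def slice_video(video_path: str, out_dir: str) -> None:
    # Re-mux a rendered point cloud video into ~2-second HLS slices plus
    # a playlist that a client can fetch slice by slice.
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-c", "copy",             # copy the encoded stream, no re-encoding
        "-f", "hls",              # HLS muxer: .m3u8 playlist + .ts segments
        "-hls_time", "2",         # target slice duration in seconds
        "-hls_list_size", "0",    # keep every slice in the playlist
        out_dir + "/index.m3u8",
    ], check=True)

A client can then download the playlist and fetch slices in real time, or skip ahead and fetch only the slices it needs, matching the playback modes described above.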

In an optional embodiment of the present disclosure, the determining, according to the first point cloud data and the relative position information of the laser radar and the vehicle, target point cloud data in step 103 includes:

step S21, parsing the first point cloud data to obtain second point cloud data, wherein the second point cloud data comprises second position information of each point cloud point in the first point cloud data relative to the laser radar;

and step S22, determining target point cloud data according to the relative position information of the laser radar and the vehicle and the second point cloud data.

The first point cloud data collected by the lidar includes information such as time, coordinates, the color of the pixel points scanned by the camera, classification values, and intensity values. Since lidars from different manufacturers may use different data formats, the first point cloud data is parsed according to the specific lidar's data format.

Taking the Pandar64 lidar as an example, its data output uses the UDP/IP communication protocol (User Datagram Protocol/Internet Protocol). The generated packets can be divided into point cloud data UDP packets and GPS data UDP packets, each consisting of an Ethernet header and UDP data. A valid point cloud payload is 1194 bytes in total and consists of a Header, a Body, and a Tail; the embodiments of the disclosure mainly parse the Body data. According to the format of the Body data, the raw bytes are parsed into corresponding field values, from which the second position information of each point cloud point relative to the lidar is calculated.
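A rough sketch of this parsing step is shown below; the exact byte offsets and field sizes come from the vendor's protocol manual, so the constants and field positions here are hypothetical placeholders rather than the Pandar64's real layout.

import struct

HEADER_LEN = 6    # hypothetical Header length in bytes
BODY_LEN = 1100   # hypothetical Body length in bytes

def split_payload(payload: bytes):
    # Split one 1194-byte point cloud UDP payload into Header, Body and Tail.
    if len(payload) != 1194:
        raise ValueError("unexpected point cloud packet size")
    header = payload[:HEADER_LEN]
    body = payload[HEADER_LEN:HEADER_LEN + BODY_LEN]
    tail = payload[HEADER_LEN + BODY_LEN:]
    return header, body, tail

def read_block_fields(body: bytes, offset: int):
    # Read one Block's Azimuth and one Unit's raw distance value as
    # little-endian uint16s; the field positions are placeholders.
    azimuth_raw, = struct.unpack_from("<H", body, offset)
    unit_raw, = struct.unpack_from("<H", body, offset + 2)
    return azimuth_raw, unit_raw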

In an optional embodiment of the present disclosure, the parsing the first point cloud data in step S21 to obtain second point cloud data includes:

substep S211, parsing the first point cloud data to obtain target azimuth data and target unit data of each point cloud point in the first point cloud data;

substep S212, determining the horizontal angle offset and the vertical angle of the laser radar;

substep S213, determining the horizontal angle of each point cloud point relative to the laser radar according to the horizontal angle offset of the laser radar and the target azimuth data of each point cloud point;

substep S214, calculating the product of the target unit data of each point cloud point and a preset constant to obtain the ranging distance between each point cloud point and the laser radar;

and a substep S215 of determining second point cloud data according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar, and the ranging distance between each point cloud point and the laser radar.

After the first point cloud data collected by the lidar is parsed, the resulting field values generally include the target azimuth data and target unit data of each point cloud point in the first point cloud data. Still taking the Pandar64 lidar as an example, in the Body data of the Pandar64, the Azimuth field of a Block is the target azimuth data in the disclosure, and the Unit value is the target unit data. Adding the Azimuth value to the horizontal angle offset of the lidar gives the horizontal angle of the point cloud point relative to the lidar; multiplying the Unit value by a preset constant gives the ranging distance between the point cloud point and the lidar. The horizontal angle offset, the vertical angle, and the preset constant are determined by the lidar's product specification.

According to the vertical angle of the laser radar, the horizontal angle of the point cloud point relative to the laser radar and the ranging distance between the point cloud point and the laser radar, second position information of the point cloud point relative to the laser radar can be determined, and second point cloud data can also be determined.
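A minimal sketch of sub-steps S213 and S214, assuming, as on many mechanical lidars, that the raw Azimuth value is reported in hundredths of a degree and that the preset constant is 4 mm per raw unit; both calibration values are illustrative, not taken from the disclosure.

def point_angle_and_range(azimuth_raw: int, unit_raw: int,
                          horiz_offset_deg: float,
                          unit_to_meters: float = 0.004):
    # Horizontal angle = Azimuth + horizontal angle offset (sub-step S213);
    # ranging distance = Unit value x preset constant (sub-step S214).
    horizontal_deg = azimuth_raw / 100.0 + horiz_offset_deg
    range_m = unit_raw * unit_to_meters
    return horizontal_deg, range_m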

In an optional embodiment of the present disclosure, the determining the second point cloud data according to the vertical angle of the lidar, the horizontal angle of each point cloud point relative to the lidar, and the ranging distance between each point cloud point and the lidar in sub-step S215 includes:

p11, determining a polar coordinate value of each point cloud point in a polar coordinate system with the laser radar as an origin according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar and the ranging distance between each point cloud point and the laser radar;

and P12, converting the polar coordinate value of each point cloud point into a rectangular coordinate value to obtain second point cloud data.

Assume the vertical angle of the lidar is θ, the horizontal angle of the point cloud point M relative to the lidar is φ, and the ranging distance between the point cloud point M and the lidar is r. From the vertical angle θ, the horizontal angle φ, and the distance r, the polar coordinate values (r, θ, φ) of the point cloud point M in a polar coordinate system with the lidar as the origin are obtained.

Referring to fig. 2, which shows a schematic rectangular coordinate system with the lidar as the origin, a trigonometric operation is applied to the polar coordinate values of the point cloud point M according to the angular relationship between M and the origin (the lidar), giving the rectangular coordinate values (x1, y1, z1) of M in the rectangular coordinate system with the lidar as the origin. The specific calculation is as follows:

x1 = r·sinθ·cosφ (1)

y1 = r·sinθ·sinφ (2)

z1 = r·cosθ (3)

where x1, y1 and z1 are the projections of the point cloud point M onto the x-axis, y-axis and z-axis, respectively, of the rectangular coordinate system shown in fig. 2.

It should be noted that the above equations (1) to (3) are merely an exemplary illustration of the disclosure and do not limit how the rectangular coordinate values of the point cloud points are calculated; the specific calculation varies with the coordinate systems constructed and the positional relationships involved, and must be determined case by case.

According to the method of steps P11 to P12, the polar coordinate values of the point cloud points in the first point cloud data are converted one by one into rectangular coordinate values, yielding the second point cloud data.
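A small sketch of steps P11 and P12, following the reading of equations (1) to (3) above in which the vertical angle θ is measured from the z-axis:

import math

def polar_to_rect(r: float, theta_deg: float, phi_deg: float):
    # (r, theta, phi): ranging distance, vertical angle and horizontal
    # angle of a point cloud point, with the lidar at the origin.
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    x1 = r * math.sin(theta) * math.cos(phi)  # equation (1)
    y1 = r * math.sin(theta) * math.sin(phi)  # equation (2)
    z1 = r * math.cos(theta)                  # equation (3)
    return x1, y1, z1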

In an optional embodiment of the present disclosure, the determining, in step S22, target point cloud data according to the relative position information of the laser radar and the vehicle and the second point cloud data includes:

and carrying out translation and rotation processing on the second position information of each point cloud point in the second point cloud data based on the relative position information of the laser radar and the vehicle to obtain target point cloud data.

In order to acquire panoramic information around the vehicle, multiple lidars are generally installed on the vehicle to collect point cloud data in different directions, so multiple pieces of point cloud data may be obtained for the same point cloud point. Since each lidar has its own coordinate system, the second position information of each point cloud point must be unified into the vehicle coordinate system through translation and rotation.

Referring to fig. 3a, a schematic diagram of a lidar coordinate system and a vehicle coordinate system provided by the embodiments of the disclosure is shown. The bold black coordinate system is the vehicle coordinate system (with the vehicle as the coordinate origin), and the other coordinate system, located at the head of the vehicle, is the lidar coordinate system (with the lidar mounted at the vehicle head as the coordinate origin). First, the lidar coordinate system is translated so that the origins of the two coordinate systems coincide, giving the translation result shown in fig. 3b. The x-axis or y-axis coordinate of the point cloud point M in the lidar coordinate system is mapped into the vehicle coordinate system to obtain the y-axis or x-axis coordinate of M in the vehicle coordinate system; the lidar coordinate system is then rotated, giving the rotation result shown in fig. 3c, and the coordinate mapping is performed again to obtain the coordinate of M on the remaining axis of the vehicle coordinate system. The coordinates (x0, y0, z0) of the point cloud point M in the vehicle coordinate system can be expressed as:

x0 = x1·cosα - y1·sinα (4)

y0 = x1·sinα + y1·cosα (5)

where α is the angle between the x-axes of the lidar coordinate system and the vehicle coordinate system in figs. 3b and 3c.

The coordinate transformation of figs. 3a to 3c rotates the lidar coordinate system around the z-axis, so the z-axis coordinate of the point cloud point M remains unchanged before and after the rotation, that is:

z0 = z1 (6)

also, the above equations (4) to (6) are only an exemplary illustration of the coordinate system conversion process shown in fig. 3a to 3c, and do not constitute a limitation on the embodiments of the present disclosure. In practical application, the specific calculation process of the first position information of the point cloud point relative to the vehicle is determined according to the second position information of the point cloud point, namely the translation and rotation processing of the laser radar coordinate system.

In an optional embodiment of the present disclosure, after determining target point cloud data according to the first point cloud data and the relative position information of the lidar and the vehicle in step 103, the method further comprises:

step S31, determining the motion information of the vehicle;

step S32, performing motion compensation processing on the target point cloud data according to the motion information of the vehicle to obtain the target point cloud data after motion compensation;

step 104, performing data rendering processing on the target point cloud data to obtain and store a target video slice, including:

and step S33, performing data rendering processing on the target point cloud data after motion compensation to obtain a target video slice and storing the target video slice.

While collecting point cloud data, the lidar usually rotates continuously through 360 degrees to capture the vehicle's surroundings, and during this process the vehicle itself is usually moving, so the point cloud data collected over one 360-degree rotation contains certain distance and angle deviations. To ensure accurate processing results, the data deviation caused by vehicle motion must be eliminated, i.e. motion compensation must be applied to the computed target point cloud data. Specifically, because the vehicle's motion continuously displaces the origin of the vehicle coordinate system, a relative origin is first determined for the lidar's rotation; the vehicle coordinate system corresponding to each rotation angle of the lidar is then translated and rotated so that its origin coincides with the relative origin; finally, the coordinate values of each point cloud point in the coordinate system of the relative origin are calculated, yielding the motion-compensated target point cloud data. The translation and rotation of each vehicle coordinate system with respect to the relative origin is similar to the process shown in figs. 3a to 3c and is not repeated here.
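One common way to realize such motion compensation, sketched below, is to look up the vehicle pose at each point's capture time and re-express every point in the frame of a single reference pose (the relative origin). The pose interface is an assumption; the disclosure only requires that the vehicle's motion information is available.

import numpy as np

def motion_compensate(points_v: np.ndarray, stamps: np.ndarray, pose_at):
    # pose_at(t) -> (R, t): assumed interface returning the vehicle's
    # rotation matrix and translation in a fixed world frame at time t.
    ref_r, ref_t = pose_at(stamps[0])          # reference (relative origin) pose
    out = np.empty_like(points_v)
    for i, (p, ts) in enumerate(zip(points_v, stamps)):
        r, t = pose_at(ts)                     # pose when this point was captured
        p_world = r @ p + t                    # vehicle frame -> world frame
        out[i] = ref_r.T @ (p_world - ref_t)   # world frame -> reference frame
    return out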

The data rendering processing generally includes steps such as vertex processing, rasterization, fragment processing, and output merging, and may be implemented with existing data rendering techniques, which the embodiments of the disclosure do not limit.
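As a stand-in for those stages, a top-down view of one motion-compensated point cloud frame can be rasterized with a simple orthographic projection; the image size, viewing extent, and height-to-intensity mapping below are illustrative choices, not values from the disclosure. Rasterized frames of this kind would then be encoded into the target videos before slicing.

import numpy as np

def render_top_view(points: np.ndarray, size: int = 512,
                    extent: float = 50.0) -> np.ndarray:
    # Project vehicle-frame (x, y) onto a size x size grayscale image and
    # encode point height as pixel intensity.
    img = np.zeros((size, size), dtype=np.uint8)
    keep = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    pts = points[keep]
    u = ((pts[:, 0] + extent) / (2 * extent) * (size - 1)).astype(int)
    v = ((pts[:, 1] + extent) / (2 * extent) * (size - 1)).astype(int)
    height = np.clip((pts[:, 2] + 2.0) / 6.0, 0.0, 1.0)  # assumed -2 m..4 m range
    img[v, u] = (height * 255).astype(np.uint8)
    return img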

In summary, in the embodiments of the disclosure, the point cloud data collected by the lidar can be processed on the cloud server. Compared with having the client execute the complex processing of the point cloud data, this saves the time cost of the client downloading the point cloud data, reduces the data processing delay, and improves data processing efficiency. In addition, the data processing method provided by the disclosure lowers the performance requirements on the client: the client no longer performs complex processing on the point cloud data and only needs to download video slices directly from the cloud server for playback. Compared with generating a complete video that the client must download in full before playing, providing video slices reduces the playback delay of the point cloud data. The video slices also support the playback needs of multiple scenarios; for example, a client can download and play slices in real time, or skip some slices and download only the ones it needs, which improves the flexibility and real-time performance of downloading and playing the point cloud data and facilitates real-time perception of the environment around the vehicle.

Example two

Referring to fig. 4, a block diagram of a data processing apparatus in an embodiment of the present disclosure is shown, which is as follows:

a first point cloud data receiving module 401, configured to receive first point cloud data from a laser radar;

a position determination module 402 for determining relative position information of the lidar and a vehicle on which the lidar is mounted;

a target point cloud data determining module 403, configured to determine target point cloud data according to the first point cloud data and the relative position information of the laser radar and the vehicle, where the target point cloud data includes first position information of each point cloud point in the first point cloud data relative to the vehicle;

and a data rendering module 404, configured to perform data rendering processing on the target point cloud data, obtain a target video slice, and store the target video slice.

Optionally, the data rendering module includes:

the first rendering sub-module is used for respectively performing data rendering processing on the target point cloud data based on a first preset visual angle and a second preset visual angle to obtain a first target video corresponding to the first preset visual angle and a second target video corresponding to the second preset visual angle;

and the slicing processing submodule is used for respectively carrying out slicing processing on the first target video and the second target video to obtain and store a first target video slice corresponding to the first preset visual angle and a second target video slice corresponding to the second preset visual angle.

Optionally, the target point cloud data determining module includes:

the analysis processing sub-module is used for analyzing the first point cloud data to obtain second point cloud data, and the second point cloud data comprises second position information of each point cloud point in the first point cloud data relative to the laser radar;

and the target point cloud data determining submodule is used for determining the target point cloud data according to the relative position information of the laser radar and the vehicle and the second point cloud data.

Optionally, the apparatus further comprises:

a motion information determination module for determining motion information of the vehicle;

the motion compensation module is used for carrying out motion compensation processing on the target point cloud data according to the motion information of the vehicle to obtain the target point cloud data after motion compensation;

the data rendering module includes:

and the second rendering submodule is used for performing data rendering processing on the target point cloud data after motion compensation to obtain a target video slice and storing the target video slice.

Optionally, the parsing processing sub-module includes:

the analysis processing unit is used for analyzing the first point cloud data to obtain target azimuth data and target unit data of each point cloud point in the first point cloud data;

a first angle determination unit for determining a horizontal angle offset and a vertical angle of the laser radar;

the second angle determining unit is used for determining the horizontal angle of each point cloud point relative to the laser radar according to the horizontal angle offset of the laser radar and the target azimuth data of each point cloud point;

the distance measurement distance determining unit is used for calculating the product of target unit data of each point cloud point and a preset constant to obtain the distance measurement distance between each point cloud point and the laser radar;

and the second point cloud data determining unit is used for determining second point cloud data according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar and the ranging distance between each point cloud point and the laser radar.

Optionally, the second point cloud data determining unit includes:

the polar coordinate determining subunit is used for determining the polar coordinate value of each point cloud point in a polar coordinate system taking the laser radar as an origin according to the vertical angle of the laser radar, the horizontal angle of each point cloud point relative to the laser radar and the ranging distance between each point cloud point and the laser radar;

and the coordinate conversion subunit is used for converting the polar coordinate value of each point cloud point into a rectangular coordinate value to obtain second point cloud data.

Optionally, the target point cloud data determining sub-module includes:

and the target point cloud data determining unit is used for carrying out translation and rotation processing on the second position information of each point cloud point in the second point cloud data based on the relative position information of the laser radar and the vehicle to obtain the target point cloud data.

In summary, in the embodiments of the disclosure, the point cloud data collected by the lidar can be processed on the cloud server. Compared with having the client execute the complex processing of the point cloud data, this saves the time cost of the client downloading the point cloud data, reduces the data processing delay, and improves data processing efficiency. In addition, the data processing method provided by the disclosure lowers the performance requirements on the client: the client no longer performs complex processing on the point cloud data and only needs to download video slices directly from the cloud server for playback. Compared with generating a complete video that the client must download in full before playing, providing video slices reduces the playback delay of the point cloud data. The video slices also support the playback needs of multiple scenarios; for example, a client can download and play slices in real time, or skip some slices and download only the ones it needs, which improves the flexibility and real-time performance of downloading and playing the point cloud data and facilitates real-time perception of the environment around the vehicle.

The second embodiment is an embodiment of the apparatus corresponding to the first embodiment, and the detailed description may refer to the first embodiment, which is not repeated herein.

An embodiment of the present disclosure also provides an electronic device, referring to fig. 5, including: a processor 501, a memory 502 and a computer program 5021 stored on the memory 502 and executable on the processor, the processor 501 implementing the data processing method of the foregoing embodiments when executing the program.

Embodiments of the present disclosure also provide a readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the data processing method of the foregoing embodiments.

For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.

The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present disclosure are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the present disclosure as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the embodiments of the present disclosure.

In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the embodiments of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, claimed embodiments of the disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of an embodiment of this disclosure.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

The various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a document processing apparatus according to embodiments of the present disclosure. Embodiments of the present disclosure may also be implemented as an apparatus or device program for performing a portion or all of the methods described herein. Such programs implementing embodiments of the present disclosure may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the embodiments of the disclosure, and those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the disclosure may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure and is not to be construed as limiting the embodiments of the present disclosure, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the embodiments of the present disclosure are intended to be included within the scope of the embodiments of the present disclosure.

The above description is only a specific implementation of the embodiments of the present disclosure, but the scope of the embodiments of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present disclosure, and all the changes or substitutions should be covered by the scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.
