Method and device for processing point cloud


Reading note: this technique, Method and Device for Processing Point Cloud, was designed and created by 徐彬 on 2019-07-30. Its main content: a method of processing a point cloud is provided, comprising: acquiring a first point cloud; acquiring an image corresponding to the same environment as the first point cloud; resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud; registering the second point cloud and the image; and processing the second point cloud according to the image to generate a target point cloud containing colors. Because the colors of the image captured by the camera reflect the real driving environment, this scheme can provide the driver with a more intuitive visualized point cloud, enabling the driver to better perform fine maneuvers such as reversing, changing lanes and overtaking based on the target point cloud.

1. A method of processing a point cloud, applied to a movable platform comprising a point cloud sensor and a vision sensor, the method comprising:

acquiring a first point cloud;

acquiring an image corresponding to the same environment as the first point cloud;

resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud;

registering the second point cloud and the image;

and processing the second point cloud according to the image to generate a target point cloud containing colors.

2. The method of claim 1, wherein the resampling the first point cloud comprises:

determining a point cloud block in the first point cloud whose point cloud density is greater than a density threshold;

and resampling the point cloud block.

3. The method of claim 1 or 2, wherein processing the second point cloud according to the image comprises:

determining edges of objects in the second point cloud according to the image;

filtering the second point cloud according to the edges to obtain a second point cloud with increased density, wherein the second point cloud with increased density comprises the edges;

and determining a color of the second point cloud with increased density according to the image.

4. The method of any of claims 1 to 3, further comprising:

when the environment area corresponding to the target point cloud is larger than a preset environment area, deleting redundant point cloud blocks in the target point cloud, wherein the environment area corresponding to the redundant point cloud blocks is located outside the preset environment area.

5. The method of any of claims 1 to 4, further comprising:

identifying an object in the target point cloud;

replacing the object with a three-dimensional model.

6. The method of any of claims 1 to 4, further comprising:

processing the target point cloud to generate a patch.

7. The method of claim 6, wherein the patch consists of three adjacent points in the target point cloud.

8. The method of any one of claims 1 to 7, further comprising:

rendering the target point cloud.

9. The method of any one of claims 1 to 8, wherein acquiring the first point cloud comprises:

acquiring a plurality of point clouds from a plurality of lidars;

and stitching the plurality of point clouds into the first point cloud.

10. The method of claim 9, wherein stitching the plurality of point clouds into the first point cloud comprises:

stitching the plurality of point clouds into the first point cloud according to extrinsic parameters of the plurality of lidars.

11. The method of claim 10, wherein the extrinsic parameters indicate a positional relationship of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform.

12. The method of claim 11, wherein the extrinsic parameters comprise:

rotation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform; and/or

translation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform.

13. The method of any of claims 9 to 12, wherein stitching the plurality of point clouds into the first point cloud comprises:

stitching the plurality of point clouds through an iterative closest point (ICP) algorithm to obtain the first point cloud.

14. An apparatus for processing a point cloud, applied to a movable platform comprising a point cloud sensor and a vision sensor, the apparatus comprising a communication unit, a resampling unit, a registration unit and a processing unit, wherein

the communication unit is configured to: acquire a first point cloud; and acquire an image corresponding to the same environment as the first point cloud;

the resampling unit is configured to: resample the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud;

the registration unit is configured to: register the second point cloud and the image;

the processing unit is configured to: process the second point cloud according to the image to generate a target point cloud containing colors.

15. The apparatus of claim 14, wherein the resampling unit is specifically configured to:

determine a point cloud block in the first point cloud whose point cloud density is greater than a density threshold;

and resample the point cloud block.

16. The apparatus according to claim 14 or 15, wherein the processing unit is specifically configured to:

determine edges of objects in the second point cloud according to the image;

filter the second point cloud according to the edges to obtain a second point cloud with increased density, wherein the second point cloud with increased density comprises the edges;

and determine a color of the second point cloud with increased density according to the image.

17. The apparatus according to any one of claims 14 to 16, wherein the processing unit is further configured to:

when the environment area corresponding to the target point cloud is larger than a preset environment area, delete redundant point cloud blocks in the target point cloud, wherein the environment area corresponding to the redundant point cloud blocks is located outside the preset environment area.

18. The apparatus according to any one of claims 14 to 17, wherein the processing unit is further configured to:

identify an object in the target point cloud;

and replace the object with a three-dimensional model.

19. The apparatus according to any one of claims 14 to 17, wherein the processing unit is further configured to:

process the target point cloud to generate a patch.

20. The apparatus of claim 19, wherein the patch consists of three adjacent points in the target point cloud.

21. The apparatus according to any one of claims 14 to 20, wherein the processing unit is further configured to:

render the target point cloud.

22. The apparatus of any one of claims 14 to 21, wherein

the communication unit is specifically configured to: acquire a plurality of point clouds from a plurality of lidars; and

the processing unit is further configured to: stitch the plurality of point clouds into the first point cloud.

23. The apparatus according to claim 22, wherein the processing unit is specifically configured to:

stitch the plurality of point clouds into the first point cloud according to extrinsic parameters of the plurality of lidars.

24. The apparatus of claim 23, wherein the extrinsic parameters indicate a positional relationship of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform.

25. The apparatus of claim 24, wherein the extrinsic parameters comprise:

rotation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform; and/or

translation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform.

26. The apparatus according to any one of claims 22 to 25, wherein the processing unit is specifically configured to:

stitch the plurality of point clouds through an iterative closest point (ICP) algorithm to obtain the first point cloud.

27. A movable platform, comprising: a memory, a processor, a point cloud sensor and a vision sensor, wherein the memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory and, according to the instructions, control the point cloud sensor to acquire a first point cloud and control the vision sensor to acquire an image corresponding to the same environment as the first point cloud;

the processor is further configured to perform, in accordance with the instructions:

resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud;

registering the second point cloud and the image;

and processing the second point cloud according to the image to generate a target point cloud containing colors.

28. The movable platform of claim 27, wherein the processor is configured to perform, in accordance with the instructions:

determining a point cloud block in the first point cloud whose point cloud density is greater than a density threshold;

and resampling the point cloud block.

29. The movable platform of claim 27 or 28, wherein the processor is configured to perform, in accordance with the instructions:

determining edges of objects in the second point cloud according to the image;

filtering the second point cloud according to the edges to obtain a second point cloud with increased density, wherein the second point cloud with increased density comprises the edges;

and determining a color of the second point cloud with increased density according to the image.

30. The movable platform of any one of claims 27-29, wherein the processor is configured to perform, in accordance with the instructions:

when the environment area corresponding to the target point cloud is larger than a preset environment area, deleting redundant point cloud blocks in the target point cloud, wherein the environment area corresponding to the redundant point cloud blocks is located outside the preset environment area.

31. The movable platform of any one of claims 27-30, wherein the processor is configured to perform, in accordance with the instructions:

identifying an object in the target point cloud;

replacing the object with a three-dimensional model.

32. The movable platform of any one of claims 27-30, wherein the processor is configured to perform, in accordance with the instructions:

processing the target point cloud to generate a patch.

33. The movable platform of claim 32, wherein the patch consists of three adjacent points in the target point cloud.

34. The movable platform of any one of claims 27-33, wherein the processor is configured to perform, in accordance with the instructions:

rendering the target point cloud.

35. The movable platform of any one of claims 27-34, wherein the processor is configured to perform, in accordance with the instructions:

acquiring a plurality of point clouds from a plurality of point cloud sensors;

stitching the plurality of point clouds into the first point cloud.

36. The movable platform of claim 35, wherein the processor is configured to perform, in accordance with the instructions:

stitching the plurality of point clouds into the first point cloud according to extrinsic parameters of the plurality of lidars.

37. The movable platform of claim 36, wherein the extrinsic parameters indicate a positional relationship of a coordinate system of each of the plurality of lidars with respect to a coordinate system of the movable platform.

38. The movable platform of claim 37, wherein the extrinsic parameters comprise:

rotation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform; and/or

translation parameters of a coordinate system of each of the plurality of lidars relative to a coordinate system of the movable platform.

39. The movable platform of any one of claims 35-38, wherein the processor is configured to perform, in accordance with the instructions:

stitching the plurality of point clouds through an iterative closest point (ICP) algorithm to obtain the first point cloud.

40. The movable platform of any one of claims 27-39, wherein the point cloud sensor is a lidar and the vision sensor is a camera.

41. A computer storage medium, having stored thereon a computer program which, when executed by a computer, causes the computer to perform the method of any one of claims 1 to 13.

42. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 13.

Technical Field

The present application relates to the field of autonomous driving and, more particularly, to a method and apparatus for processing a point cloud.

Background

A point cloud is a representation of a three-dimensional object or three-dimensional scene. It consists of a set of discrete points randomly distributed in space; these discrete points constitute the point cloud data, which represent the spatial structure of the three-dimensional object or scene, and each point generally consists of position information.

To help the driver perceive the surroundings, an autonomous vehicle may present a point cloud to the driver, for example, by displaying the point cloud on a screen or projecting it onto a window. To give the driver a more intuitive understanding of the surrounding environment, one approach is to color the point cloud before presenting it. However, the colors of a point cloud obtained by coloring it directly deviate considerably from the colors of the real environment.

Disclosure of Invention

The present application provides a method for processing a point cloud that can provide the driver with a more intuitive visualized point cloud.

In a first aspect, a method of processing a point cloud is provided, comprising: acquiring a first point cloud; acquiring an image corresponding to the same environment as the first point cloud; resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud; registering the second point cloud and the image; and processing the second point cloud according to the image to generate a target point cloud containing colors.

Because the colors of the image captured by the camera reflect the real driving environment, this scheme can provide the driver with a more intuitive visualized point cloud, enabling the driver to better perform fine maneuvers such as reversing, changing lanes and overtaking based on the target point cloud.

In a second aspect, an apparatus is provided for performing the method of the first aspect.

In a third aspect, an apparatus is provided that includes a memory to store instructions and a processor to execute the instructions stored in the memory, and execution of the instructions stored in the memory causes the processor to perform the method of the first aspect.

In a fourth aspect, a chip is provided, where the chip includes a processing module and a communication interface, the processing module is configured to control the communication interface to communicate with the outside, and the processing module is further configured to implement the method of the first aspect.

In a fifth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to carry out the method of the first aspect. In particular, the computer may be the apparatus described above.

In a sixth aspect, there is provided a computer program product containing instructions which, when executed by a computer, cause the computer to carry out the method of the first aspect. In particular, the computer may be the apparatus described above.

Drawings

FIG. 1 is a schematic diagram of a method of processing a point cloud provided herein;

FIG. 2 is a schematic diagram of another method of processing a point cloud provided herein;

FIG. 3 is a schematic diagram of an apparatus for processing a point cloud provided in the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.

The point cloud processing method provided by the present application can be applied to movable platforms, including but not limited to autonomous vehicles. An autonomous vehicle, also called a driverless vehicle, can acquire environmental information through sensors such as radar and control the vehicle's travel according to that information, reducing the time the driver spends controlling the vehicle and thereby offering advantages such as fewer traffic violations.

However, as autonomous driving technology evolves, drivers still need to participate in driving to varying degrees. The driving environment is therefore presented to the driver through visualization, so that the driver can make better decisions about reversing, changing lanes, overtaking and the like.

It should be noted that, in the present application, the driver may drive the movable platform from inside it, for example, controlling an autonomous vehicle from within the vehicle; the driver may also drive the movable platform from outside it, for example, controlling a drone or an unmanned vehicle via a remote control.

To reduce the cost of visualization, the driving environment can be presented using the existing sensors of the autonomous vehicle. For example, the point cloud generated by the lidar may be processed and presented to the driver. FIG. 1 shows a method for processing a point cloud provided in the present application. The method 100 comprises:

s110, acquiring a first point cloud.

S110 may be executed by a processor, such as a central processing unit (CPU). The CPU may acquire, through a communication interface, the first point cloud captured by the lidar, or the CPU may process the point cloud captured by the lidar to generate the first point cloud.

S110 may also be performed by an autonomous vehicle comprising a processor and a lidar. After the lidar captures the point cloud, the point cloud is transmitted to the processor in real time so that the processor can execute the subsequent steps.

Generating the first point cloud with a lidar is only an example; the present application does not limit the device that generates the point cloud. For example, the first point cloud may be generated by a millimeter-wave radar.

The processor may obtain the first point cloud from one lidar or from a plurality of lidars. For example, the processor may acquire a plurality of point clouds from a plurality of lidars and stitch them into the first point cloud; alternatively, the processor may register the plurality of point clouds (i.e., stitch them) using an iterative closest point (ICP) algorithm to obtain the first point cloud.
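As a sketch only (the source does not specify an implementation), pairwise ICP registration and merging could be done with an off-the-shelf library such as open3d; the library choice, function names, and the correspondence-distance value are assumptions, not the patent's method:

```python
import numpy as np
import open3d as o3d  # assumed dependency, not named in the source

def icp_stitch(source_pts, target_pts, max_corr_dist=0.5):
    """Register source onto target with point-to-point ICP, then merge."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(result.transformation)  # move source into target's frame
    return src + tgt                      # merged "first point cloud"
```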

The processor may stitch the plurality of point clouds into the first point cloud according to the extrinsic parameters of the plurality of lidars. The extrinsic parameters indicate the positional relationship of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform. For example, the extrinsic parameters may include the following (a stitching sketch follows the list):

the rotation parameters of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform; and/or

the translation parameters of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform.
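A minimal stitching sketch under these extrinsics, assuming each lidar's rotation matrix and translation vector into the platform frame are known (all names and shapes are illustrative):

```python
import numpy as np

def stitch_with_extrinsics(clouds, rotations, translations):
    """Map each lidar's points into the movable platform's coordinate
    system with that lidar's extrinsics, then concatenate the results.

    clouds:       list of (N_i, 3) arrays, one per lidar, in lidar frames
    rotations:    list of (3, 3) rotation matrices (lidar -> platform)
    translations: list of (3,)  translation vectors (lidar -> platform)
    """
    stitched = [pts @ R.T + t  # x_platform = R @ x_lidar + t, row-wise
                for pts, R, t in zip(clouds, rotations, translations)]
    return np.vstack(stitched)
```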

The processor may perform S120 after performing S110, or may perform S110 and S120 concurrently.

S120, acquiring an image corresponding to the same environment as the first point cloud.

For example, the CPU acquires, through the communication interface, an image captured by a camera. S120 may also be performed by an autonomous vehicle that comprises the processor and a camera. The camera may be a color camera or a grayscale camera; accordingly, the image acquired by the processor may be a color image or a grayscale image.

The image corresponding to the same environment as the first point cloud can be interpreted as: the camera and the lidar capture the same scene at the same time; or the camera and the lidar capture approximately the same scene at the same time; or the camera and the lidar capture the same scene at adjacent times; or the camera and the lidar capture approximately the same scene at adjacent times.

For example, the camera captures the lane directly in front of the vehicle at time A, generating an image; the lidar captures the lane directly in front of the vehicle at time A, generating the first point cloud; since the scene corresponding to the image is identical to the scene corresponding to the first point cloud, the image corresponds to the same environment as the first point cloud.

For another example, the camera captures the lane directly in front of the vehicle at time A, generating an image; the lidar captures the lane to the front left of the vehicle at time A, generating the first point cloud; since the scene corresponding to the image partially overlaps the scene corresponding to the first point cloud, the image corresponds to the same environment as the first point cloud.

For another example, the camera captures the lane directly in front of the vehicle at time A, generating an image; the lidar captures the lane directly in front of the vehicle at time B, generating the first point cloud; since time A is adjacent to time B and the scene corresponding to the image partially overlaps the scene corresponding to the first point cloud, the image corresponds to the same environment as the first point cloud.

For another example, the camera captures the lane directly in front of the vehicle at time A, generating an image; the lidar captures the lane to the front left of the vehicle at time B, generating the first point cloud; since time A is adjacent to time B and the scene corresponding to the image partially overlaps the scene corresponding to the first point cloud, the image corresponds to the same environment as the first point cloud.

To obtain an image whose scene closely matches that of the first point cloud, the processor may control the camera and the lidar to capture at times as close as possible and from angles as similar as possible.

S130, resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than that of the first point cloud.

To reduce the load on the processor, the processor may resample the first point cloud to reduce the density of the higher-density point cloud blocks in it. The first point cloud and the second point cloud are both colorless point clouds; a colorless point cloud is a point cloud without grayscale or RGB information.

When resampling the first point cloud, the processor may proceed according to the following steps:

determining a point cloud block in the first point cloud whose point cloud density is greater than a density threshold;

and resampling the point cloud block.

Some high-density point cloud blocks may exist in the first point cloud. The processor may first identify these blocks and resample only them, without resampling the entire first point cloud, which reduces the processor's load.
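The source does not detail the resampling itself; below is one plausible, hedged reading in Python, treating "point cloud blocks" as voxel cells and thinning only the cells whose point count exceeds the threshold (voxel size and threshold are illustrative):

```python
import numpy as np

def resample_dense_blocks(points, voxel_size=1.0, density_threshold=100):
    """Thin only the voxel blocks whose point count exceeds the density
    threshold; blocks at or below the threshold are kept unchanged."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True)
    kept = []
    for block_id, count in enumerate(counts):
        block = points[inverse == block_id]
        if count > density_threshold:
            stride = int(np.ceil(count / density_threshold))
            block = block[::stride]  # crude decimation of the dense block
        kept.append(block)
    return np.vstack(kept)  # the sparser "second point cloud"
```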

S140, registering the second point cloud and the image.

The processor may register the second point cloud and the image using the calibration results of the camera and the lidar, which may be preconfigured information, e.g., preconfigured extrinsic parameters between the camera and the lidar.

The processor may also register the second point cloud and the image in other ways; for example, the processor may determine the same object in the second point cloud and in the image, and overlap the two instances of that object to register the second point cloud and the image.
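In practice, registration via preconfigured calibration typically amounts to projecting each 3-D point into the image plane. The following sketch is illustrative only; the function name, the pinhole model, and the conventions (R, t mapping lidar to camera coordinates, K the 3x3 intrinsic matrix) are assumptions, not the patent's specification:

```python
import numpy as np

def project_to_image(points, R, t, K):
    """Project lidar-frame points into pixel coordinates.

    R, t: lidar-to-camera extrinsics; K: 3x3 camera intrinsic matrix.
    Returns (uv, mask): pixel coordinates of the points in front of the
    camera, and the boolean mask selecting those points.
    """
    cam = points @ R.T + t            # lidar frame -> camera frame
    mask = cam[:, 2] > 0              # keep points in front of the camera
    uvw = cam[mask] @ K.T             # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]     # divide by depth
    return uv, mask
```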

S150, processing the second point cloud according to the image to generate a target point cloud containing colors.

After registering the second point cloud and the image, the processor may process the second point cloud according to the image, i.e., color the second point cloud based on the colors of the image, to generate a target point cloud containing colors.

For example, the processor may determine the overlapping region of the image and the second point cloud, and copy the colors of that region in the image to the corresponding region of the second point cloud. Alternatively, the processor may determine an object that appears in both the image and the second point cloud, and assign the color of the object in the image to the corresponding object in the second point cloud.
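Continuing the illustrative projection sketch above (and reusing its hypothetical project_to_image helper), copying the overlapping image colors onto the points might look like this; the row/column indexing convention for image is an assumption:

```python
import numpy as np

def colorize(points, image, R, t, K):
    """Give each projectable point the color of the pixel it lands on."""
    uv, mask = project_to_image(points, R, t, K)
    px = np.round(uv).astype(int)
    h, w = image.shape[:2]
    inside = ((px[:, 0] >= 0) & (px[:, 0] < w) &
              (px[:, 1] >= 0) & (px[:, 1] < h))
    colored_pts = points[mask][inside]
    colors = image[px[inside, 1], px[inside, 0]]  # row = v, column = u
    return colored_pts, colors  # points and their per-point colors
```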

Through the above steps, the processor finally generates a point cloud containing colors, i.e., the target point cloud. Because the colors of the image captured by the camera reflect the real driving environment, this scheme can provide the driver with a more intuitive visualized point cloud, enabling the driver to better perform fine maneuvers such as reversing, changing lanes and overtaking based on the target point cloud.

Because the second point cloud obtained after resampling is sparse, to improve its display effect after visualization, the processor may process the second point cloud according to the image as follows:

determining edges of objects in the second point cloud according to the image;

filtering the second point cloud according to the edges to obtain a second point cloud with increased density, wherein the second point cloud with increased density comprises the edges;

determining the colors of the second point cloud with increased density according to the image.

The processor may process the second point cloud using an edge-preserving filtering algorithm to determine the edges of objects in the second point cloud. For example, the processor may use a guided filter algorithm to determine the point cloud blocks in the second point cloud that correspond to objects in the image, thereby determining the edges of those objects in the second point cloud; filter the point cloud within the edges to increase its density, obtaining the second point cloud with increased density; and then, based on the correspondence between the image and the second point cloud, determine the colors of the second point cloud with increased density. The result is a target point cloud (i.e., the colored second point cloud with increased density) with a better visualization effect.
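One hedged way to realize such guided, edge-preserving densification is depth completion in the image plane: rasterize the sparse depths, filter both the depth map and a validity mask with a guided filter so the image's edges steer the fill-in, then normalize. The sketch assumes opencv-contrib's cv2.ximgproc.guidedFilter; the radius and eps values are illustrative:

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np

def densify_depth(uv, depth_values, image, radius=8, eps=1e-4):
    """Fill in depth around sparse points, letting image edges guide it."""
    h, w = image.shape[:2]
    depth = np.zeros((h, w), np.float32)
    mask = np.zeros((h, w), np.float32)
    px = np.round(uv).astype(int)
    depth[px[:, 1], px[:, 0]] = depth_values
    mask[px[:, 1], px[:, 0]] = 1.0
    guide = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255
    num = cv2.ximgproc.guidedFilter(guide, depth, radius, eps)
    den = cv2.ximgproc.guidedFilter(guide, mask, radius, eps)
    dense = np.where(den > 1e-6, num / np.maximum(den, 1e-6), 0.0)
    return dense  # back-project valid pixels with K^-1 for new 3-D points
```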

An object here is, for example, an object in the driving environment, such as a neighboring vehicle or an obstacle. A better visualization effect means richer object detail.

The processor may present the target point cloud directly to the driver, or may present the target point cloud to the driver after processing based on the method described below.

For example, when the environment area corresponding to the target point cloud is larger than a preset environment area, the processor may further delete redundant point cloud blocks in the target point cloud, where the environment area corresponding to the redundant point cloud blocks lies outside the preset environment area.

The preset environment area is, for example, the region within 200 m in front of the movable platform and within 5 m in height. An oversized area is of little use to the driver and may scatter the driver's attention, so this scheme concentrates the driver's attention on the driving environment near the movable platform and improves the platform's safety. In addition, displaying only part of the target point cloud also reduces the processor's load.
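Deleting the redundant blocks can be a simple spatial crop. A sketch using the 200 m / 5 m figures from this example, assuming a platform frame with x pointing forward and z up (the frame convention is an assumption):

```python
import numpy as np

def crop_to_preset_area(points, colors, forward_m=200.0, height_m=5.0):
    """Keep only points within the preset environment area in front of
    the movable platform (x forward, z up; frame is an assumption)."""
    keep = ((points[:, 0] >= 0.0) & (points[:, 0] <= forward_m) &
            (points[:, 2] <= height_m))
    return points[keep], colors[keep]
```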

In addition, to further improve the visualization effect of the target point cloud, the processor may also identify objects in the target point cloud and replace them with three-dimensional models. For example, the processor may identify people, vehicles and traffic lights in the target point cloud, determine their locations, and replace these point-cloud-form objects with more detailed 3D models.

The processor may also process the target point cloud to generate one or more patches (a mesh), where each patch may be composed of the lines connecting three adjacent points in the target point cloud.
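A patch over three adjacent points is a triangle. One illustrative way to generate such triangles, assuming scipy is available, is a 2-D Delaunay triangulation over the cloud's ground-plane footprint (the triangulation choice is an assumption, not the patent's method):

```python
import numpy as np
from scipy.spatial import Delaunay

def make_patches(points):
    """Return (M, 3) triangles, each joining three neighboring points,
    by triangulating the cloud's x-y footprint."""
    triangulation = Delaunay(points[:, :2])  # 2-D triangulation in x, y
    return triangulation.simplices           # index triples into points
```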

After obtaining the target point cloud, the processor may render it directly, or render it after further processing according to the methods above. For example, the processor may render the target point cloud according to a viewing angle set by the driver or preset by a program, and display the rendered target point cloud on a screen, improving the user experience.
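Rendering from a configurable viewpoint could be sketched with open3d's visualizer; the library choice and the view parameters below are assumptions, not the patent's method:

```python
import numpy as np
import open3d as o3d

def render_target_cloud(points, colors):
    """Render the colored target point cloud from a preset viewpoint."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors / 255.0)  # RGB in [0,1]
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(pcd)
    view = vis.get_view_control()
    view.set_front([0.0, -1.0, 0.3])   # hypothetical driver's viewpoint
    view.set_lookat([0.0, 0.0, 0.0])
    view.set_up([0.0, 0.0, 1.0])
    vis.run()                          # blocks until the window is closed
    vis.destroy_window()
```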

Fig. 2 shows another schematic diagram of the method for processing a point cloud provided by the present application.

As shown in FIG. 2, after acquiring a plurality of point cloud data sets from the lidars, the processor merges them into one point cloud (i.e., the first point cloud in the method 100) and resamples that point cloud to obtain a sparse point cloud.

Then, the processor acquires an image from the camera and projects the sparse point cloud onto the image using the calibration results of the camera and the lidar, obtaining a sparse point cloud overlaid on the image.

Because the visualization effect of the image-overlaid sparse point cloud is poor, it can be processed with an edge-preserving filtering algorithm: the consistency between the image and the sparse point cloud is used to infer the 3D coordinates of the pixels around the sparse points, and those pixels are used to fill in the sparse point cloud, yielding a colored dense point cloud. For each point in the dense point cloud, its color information is the pixel value of the pixel corresponding to that point.

To achieve a better visualization effect, the processor may detect objects in the dense point cloud, for example people, vehicles and traffic lights together with their location information, and replace these objects with 3D models.

Finally, the processor may render the point cloud according to a viewing angle set by the driver or automatically set by a program, and display it on the screen.

Examples of the methods of processing point clouds provided herein are described in detail above. It is understood that the point cloud processing apparatus includes corresponding hardware structures and/or software modules for performing the above functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

The point cloud processing apparatus can be divided into functional units according to the above method example; for example, each function may be assigned its own functional unit, or two or more functions may be integrated into one unit. The functional units can be implemented in hardware or in software. It should be noted that the division of units in the present application is schematic and is only a division by logical function; other divisions are possible in actual implementation.

FIG. 3 shows a schematic structural diagram of the point cloud processing apparatus provided by the present application. The dashed lines in FIG. 3 indicate that a unit is optional. The apparatus 300 may be used to implement the methods described in the method embodiments above. The apparatus 300 may be a software module, a chip, a terminal device, or another electronic device.

The apparatus 300 comprises one or more processing units 301, which may support the apparatus 300 in implementing the method in the method embodiment corresponding to FIG. 1. The processing unit 301 may be a software processing unit, a general-purpose processor, or a special-purpose processor. The processing unit 301 may be used to control the apparatus 300, execute a software program (e.g., a software program comprising the method 100), and process data (e.g., process the first point cloud). The apparatus 300 may further include a communication unit 305 to enable input (reception) and output (transmission) of signals.

For example, the apparatus 300 may be a software module, and the communication unit 305 may be an interface function of the software module. The software modules may run on a processor or control circuitry.

As another example, the apparatus 300 may be a chip; the communication unit 305 may be an input and/or output circuit of the chip, or a communication interface of the chip, and the chip may be a component of a terminal device or another electronic device.

In the apparatus 300, the communication unit 305 may perform: acquiring a first point cloud; and acquiring an image corresponding to the same environment as the first point cloud;

the processing unit 301 may perform: resampling the first point cloud to obtain a second point cloud, wherein the density of the second point cloud is lower than the density of the first point cloud; registering the second point cloud and the image; and processing the second point cloud according to the image to generate a target point cloud containing colors.

It should be noted that the processing unit 301 may also include a resampling unit and a registration unit, where the resampling unit is configured to resample the first point cloud and the registration unit is configured to register the second point cloud and the image.

Optionally, the processing unit 301 is specifically configured to: determine a point cloud block in the first point cloud whose point cloud density is greater than a density threshold, and resample the point cloud block.

Optionally, the processing unit 301 is specifically configured to: determine edges of objects in the second point cloud according to the image; filter the second point cloud according to the edges to obtain a second point cloud with increased density, wherein the second point cloud with increased density comprises the edges; and determine a color of the second point cloud with increased density according to the image.

Optionally, the processing unit 301 is specifically configured to: when the environment area corresponding to the target point cloud is larger than a preset environment area, delete redundant point cloud blocks in the target point cloud, wherein the environment area corresponding to the redundant point cloud blocks is located outside the preset environment area.

Optionally, the processing unit 301 is specifically configured to: identify an object in the target point cloud, and replace the object with a three-dimensional model.

Optionally, the processing unit 301 is further configured to: process the target point cloud to generate a patch.

Optionally, the patch consists of three adjacent points in the target point cloud.

Optionally, the processing unit 301 is further configured to: render the target point cloud.

Optionally, the communication unit 305 is specifically configured to: acquire a plurality of point clouds from a plurality of lidars; and the processing unit 301 is further configured to: stitch the plurality of point clouds into the first point cloud.

Optionally, the processing unit 301 is specifically configured to: stitch the plurality of point clouds into the first point cloud according to the extrinsic parameters of the plurality of lidars.

Optionally, the extrinsic parameters indicate the positional relationship of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform.

Optionally, the extrinsic parameters include:

the rotation parameters of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform; and/or

the translation parameters of the coordinate system of each of the plurality of lidars relative to the coordinate system of the movable platform.

Optionally, the processing unit 301 is specifically configured to: stitch the plurality of point clouds through an ICP algorithm to obtain the first point cloud.

It will be clear to those skilled in the art that, for convenience and brevity of description, the specific operation and resulting effects of the above-described apparatus and units can be found in the related description of the embodiment of FIG. 1; for brevity, they are not repeated here.

Alternatively, the above steps may be performed by logic circuits in hardware form or by instructions in software form. For example, the processing unit 301 may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.

The apparatus 300 may comprise one or more storage units 302 in which a program 304 (e.g., a software program including the method 100) is stored; the program 304 can be executed by the processing unit 301 to generate instructions 303, so that the processing unit 301 performs the method described in the above method embodiments according to the instructions 303. Optionally, the storage unit 302 may also store data (e.g., the first point cloud). The processing unit 301 may also read data stored in the storage unit 302; the data may be stored at the same memory address as the program 304 or at a different one.

The processing unit 301 and the storage unit 302 may be separately disposed or integrated together, for example, on a single board or a system on chip (SOC).

The present application also provides a computer program product which, when executed by the processing unit 301, implements the method according to any of the embodiments of the present application.

The computer program product may be stored in the storage unit 302, for example as the program 304, which is eventually converted, through preprocessing, compiling, assembling, linking and other steps, into an executable object file that the processing unit 301 can execute.

The computer program product may be transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.

The present application also provides a computer-readable storage medium (e.g., storage unit 302) having a computer program stored thereon, which when executed by a computer implements the method of any of the embodiments of the present application. The computer program may be a high-level language program or an executable object program.

The computer-readable storage medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others. For example, the computer-readable storage medium may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).

It should be understood that, in the embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.

The term "and/or" herein is merely an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.

The systems, apparatuses and methods disclosed in the embodiments of the present application can be implemented in other ways. For example, some features of the method embodiments described above may be omitted or not performed. The apparatus embodiments described above are merely exemplary; the division of units is only a division by logical function, other divisions are possible in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between units or components may be direct or indirect, including electrical, mechanical or other forms of connection.

In short, the above is only a part of the embodiments of the present application and is not intended to limit its protection scope. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within that protection scope.
