3D structured light module and depth map point cloud image acquisition method based on same

Document No.: 1888263 · Publication date: 2021-11-26

Note: This invention, "3D structured light module and depth map point cloud image acquisition method based on same", was designed and created by 陶松 on 2021-08-11. The invention discloses a 3D structured light module and a depth map and point cloud image acquisition method based on it. The 3D structured light module comprises a red light projector, an infrared camera, a color camera, an image processing unit and an encoding and decoding processing unit. The method comprises the following steps: the image sensor transmits the acquired image data to the image processing unit; the image processing unit optimizes the image data and sends it to the encoding and decoding processing unit; the encoding and decoding processing unit encodes and decodes the image data to generate a depth map and a point cloud map. The image processing unit is arranged on the 3D structured light module side and controls the exposure parameters of the automatic exposure algorithm module, making the acquired images clearer. Meanwhile, the image data transmitted from the image processing unit to the encoding and decoding processing unit is normalized, which reduces the data processing load of the image processing unit, lowers its product cost, and increases the operation speed of the encoding and decoding processing unit.

1. A 3D structured light module, comprising:

the red light projector projects a plurality of invisible light spots on the shot object to draw a 3D dot matrix image of the shot object;

the infrared camera is used for reading the 3D dot matrix image and shooting a structured light image reflected by the surface of the shot object;

the image processing unit is used for processing the structured light image output by the infrared camera, optimizing the image, and encoding it in YUV format to form encoded image data;

and the encoding and decoding processing unit is used for encoding the encoded image data into a depth map and a point cloud map.

2. The 3D structured light module of claim 1, wherein the infrared camera comprises an image sensor, the image sensor transmits the collected image signal to the image processing unit, and the image processing unit comprises an automatic exposure algorithm module, and the automatic exposure algorithm module automatically adjusts an exposure amount according to the intensity of the light collected by the image sensor, so that an exposure brightness value approaches a target brightness value set by the image processing unit.

3. The 3D structured light module of claim 1, further comprising an infrared fill light for enhancing recognition in low-light environments, identifying the photographed object in dim light by means of invisible infrared light.

4. The 3D structured light module of claim 1, further comprising a color camera and a display module, the color camera for taking 2D color pictures and outputting regular color images; the display module is used for parameter regulation and control in the shooting process and display of the shot images.

5. A depth map and point cloud map acquisition method based on a 3D structured light module, characterized by comprising the following steps:

the infrared camera transmits the acquired image data to the image processing unit;

the image processing unit optimizes the image data and then sends it to the encoding and decoding processing unit;

and the coding and decoding processing unit codes and decodes the image data to generate a depth map and a point cloud map.

6. The image acquisition method of claim 5, wherein the infrared camera transmits the acquired image data to an image processing unit, comprising the steps of:

the automatic exposure algorithm module automatically adjusts the exposure according to the intensity of light collected by an image sensor in the infrared camera, so that the exposure brightness value is close to the target brightness value set by the image processing unit;

and the image processing unit calibrates the internal and external parameters.

7. The image acquisition method according to claim 5, wherein the encoding and decoding processing unit encodes and decodes the image data, comprising the following steps:

initializing a decoding data interface;

acquiring a data value of depth data and a data value of point cloud data;

invoking the decoding data interface.

8. The image acquisition method of claim 7, wherein the decode data interface is initialized, comprising the steps of:

initializing a decoding library;

releasing the resources of the decoding request.

9. The image acquisition method according to claim 7, wherein, when the decoding data interface is invoked, different input parameters correspond to different functions, specifically including:

when only infrared camera data is requested, the corresponding return value is the infrared camera data;

when infrared camera data and depth map data are requested, the corresponding return values are the infrared camera data and the depth map data;

when infrared camera data, depth map data and point cloud map data are requested, the corresponding return values are the infrared camera data, the depth map data and the point cloud map data.

10. The image acquisition method of claim 7, wherein, after the step of invoking the decoding data interface, the method further comprises:

carrying out normalization processing on the decoded data, and sending the data to a display module for display;

and the normalized data is output alternately in Raw10 format and YUV format, so as to distinguish the depth map data from the infrared camera data.

Technical Field

The invention relates to the technical field of image processing, in particular to a 3D structured light module and a depth map point cloud map acquisition method based on the same.

Background

The hardware of structured-light three-dimensional imaging consists mainly of a camera and a projector. The structured light is actively structured information projected onto the surface of the measured object, such as laser stripes, Gray codes or sinusoidal fringes. The measured surface is then photographed by one or more cameras to obtain a structured light image. Finally, the image is analyzed three-dimensionally based on the triangulation principle to achieve three-dimensional reconstruction.
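The triangulation principle mentioned above can be sketched with the classic depth-from-disparity relation z = f·b/d. The focal length, baseline and disparity values below are illustrative assumptions, not parameters from this patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation: depth = f * b / d.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between projector and camera, in metres
    disparity_px -- horizontal shift of a projected feature between the
                    reference pattern and the captured image, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 px, with a 600 px focal length and an 8 cm
# baseline, lies 600 * 0.08 / 40 = 1.2 m from the camera.
print(depth_from_disparity(600.0, 0.08, 40.0))  # → 1.2
```

The closer the object, the larger the disparity, which is why near surfaces are measured more precisely than far ones.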

The existing structured-light three-dimensional imaging system has the following problems: (1) image flicker can occur when the shooting scene is switched; (2) depth map data and infrared camera data are distinguished by adding flag bits, which requires the image processing unit to re-encode the data; this restricts the choice of main control chips usable as the 3D module's image processing unit, and such chips are relatively expensive.

Disclosure of Invention

In view of this, a low-cost 3D structured light module that produces clear depth maps and point cloud maps, and a depth map and point cloud map acquisition method based on it, are provided.

A 3D structured light module, comprising:

the red light projector projects a plurality of invisible light spots on the shot object to draw a 3D dot matrix image of the shot object;

the infrared camera is used for reading the 3D dot matrix image and shooting a structured light image reflected by the surface of the shot object;

the image processing unit is used for processing the structured light image output by the infrared camera, optimizing the image, and encoding it in YUV format to form encoded image data;

and the encoding and decoding processing unit is used for encoding the encoded image data into a depth map and a point cloud map.

Further, the infrared camera comprises an image Sensor (Sensor), the image Sensor (Sensor) transmits collected image signals to the image processing unit, the image processing unit comprises an Automatic Exposure (AE) algorithm module, and the automatic exposure algorithm module automatically adjusts exposure according to the intensity of light collected by the image Sensor (Sensor) to enable an exposure brightness value to be close to a target brightness value set by the image processing unit.

Further, the module also includes an infrared fill light, which enhances recognition in low-light environments, identifying the photographed object in dim light by means of invisible infrared light.

The system further comprises a color camera and a display module, wherein the color camera is used for shooting 2D color pictures and outputting conventional color pictures; the display module is used for parameter regulation and control in the shooting process and display of the shot images.

Also provided is an image acquisition method for depth maps and point cloud maps based on the 3D structured light module, comprising the following steps:

the infrared camera transmits the acquired image data to the image processing unit;

the image processing unit optimizes the image data and then sends it to the encoding and decoding processing unit;

and the coding and decoding processing unit codes and decodes the image data to generate a depth map and a point cloud map.

Further, the infrared camera transmits the acquired image data to the image processing unit, and the method comprises the following steps:

the automatic exposure algorithm module automatically adjusts the exposure according to the intensity of light collected by an image Sensor (Sensor) in the infrared camera to enable the exposure brightness value to be close to a target brightness value set by the image processing unit;

and the image processing unit calibrates the internal and external parameters.

Further, the encoding and decoding processing unit encodes and decodes the image data, and includes the following steps:

initializing a decoding data interface;

acquiring a data value of depth data and a data value of point cloud data;

invoking the decoding data interface.

Further, the decoding data interface is initialized, and the method comprises the following steps:

initializing a decoding library;

releasing the resources of the decoding request.

Further, when the decoding data interface is invoked, different input parameters correspond to different functions, specifically including:

when only infrared camera data is requested, the corresponding return value is the infrared camera data;

when infrared camera data and depth map data are requested, the corresponding return values are the infrared camera data and the depth map data;

when infrared camera data, depth map data and point cloud map data are requested, the corresponding return values are the infrared camera data, the depth map data and the point cloud map data.

Further, after the step of calling the decoding data interface, the method further includes:

carrying out normalization processing on the decoded data, and sending the data to a display module for display;

and the normalized data is output alternately in Raw10 format and YUV format, so as to distinguish the depth map data from the infrared camera data.

In the 3D structured light module and the depth map and point cloud image acquisition method based on it, the image processing unit (ISP) is disposed on the 3D structured light module side, and the image processing unit controls the exposure parameters of the automatic exposure algorithm module so that the acquired image is clearer. Meanwhile, the image processing unit normalizes the image data transmitted to the encoding and decoding processing unit, and data in Raw10 format and YUV format are output alternately, which reduces the data processing load of the image processing unit, lowers its product cost, and increases the operation speed of the encoding and decoding processing unit. The method is simple, easy to implement, low in cost and convenient to popularize.

Drawings

Fig. 1 is a block diagram of a 3D structured light module according to an embodiment of the present invention.

Fig. 2 is a flowchart of a depth map point cloud image acquisition method according to an embodiment of the present invention.

Fig. 3 is a flowchart of the codec processing unit according to the embodiment of the present invention for encoding and decoding image data.

FIG. 4 is a diagram illustrating the effect of point clouds according to an embodiment of the invention.

FIG. 5 is a depth effect map of an embodiment of the present invention.

Detailed Description

In this embodiment, taking a 3D structured light module and a depth map point cloud image obtaining method based on the same as examples, the present invention will be described in detail below with reference to specific embodiments and accompanying drawings.

Referring to fig. 1, a 3D structured light module 100 is shown, including:

the red light projector 13 projects a plurality of invisible light spots on the shot object to draw a 3D dot matrix image of the shot object;

the infrared camera 12 is used for reading a 3D dot matrix image and shooting a structured light image reflected by the surface of a shot object;

the color camera 11 is used for shooting 2D color pictures and outputting conventional color images;

the image processing unit 20 is configured to process the structured light image output by the infrared camera 12, perform image optimization, and encode the image into a Yuv format to form encoded image data;

and the encoding and decoding processing unit 30 is configured to encode the encoded image data into a depth map and a point cloud map.

Specifically, the working principle of the 3D structured light module 100 is as follows: a regular geometrically coded pattern is projected onto the surface of the object by the infrared light emitter; the infrared camera photographs the structured light image reflected from the object's surface; and the depth information of the object's surface is calculated and output according to the deformation of the image.

Specifically, during face recognition, the 3D structured light module 100 implements the following functions: recognizing the human face, extracting the human face characteristics and comparing the information.

Further, the infrared camera 12 includes an image Sensor (Sensor), the image Sensor (Sensor) transmits the collected image signal to the image processing unit 20, the image processing unit 20 includes an Automatic Exposure (AE) algorithm module, and the automatic exposure algorithm module automatically adjusts the exposure amount according to the intensity of the light collected by the image Sensor (Sensor), so that the exposure brightness value approaches the target brightness value set by the image processing unit 20.

Specifically, the image processing unit 20 controls the exposure parameters of the automatic exposure algorithm module to make the obtained image clearer.

In particular, automatic exposure refers to automatically adjusting the exposure amount according to the intensity of light, preventing overexposure or underexposure, and reaching an appropriate brightness level, the so-called target brightness, under different lighting conditions and scenes, so that the captured video or image is neither too dark nor too bright.

In particular, the quality of the depth map and point cloud map depends on the control of the red light projector 13 and on post-processing. In the 3D structured light module 100, the red light projector 13 is controlled by weighing depth map and point cloud map quality against the overall module temperature, and the brightness is adjusted to a reasonable level in post-processing.

Specifically, the power of the red light projector 13 affects the quality of the depth map and the point cloud map: the higher the power of the red light projector 13, the better the resulting image, but also the higher the temperature of the 3D structured light module 100. In the present technical solution, the 3D structured light module 100 collects image data in normal indoor, outdoor front-lit and backlit scenes, and controls the working current of the red light projector 13 according to the actual application range.

Further, the 3D structured light module 100 also includes an infrared fill light 14 and a display module 40. The infrared fill light 14 enhances recognition in low-light environments, identifying the photographed object in dim light by means of invisible infrared light. The display module 40 is used for parameter adjustment during shooting and for displaying the captured images.

Specifically, the depth map is an image in which the distance (depth) from the infrared camera 12 to each point of the object is taken as a pixel value, which directly reflects the geometry of the visible surface of the object.

Specifically, the point data set of the object appearance surface obtained by the infrared camera 12 is referred to as a "point cloud". Each point comprises a three-dimensional coordinate, and after the spatial coordinate of each sampling point on the surface of the object is obtained, a point set called point cloud is obtained.

Specifically, the depth image may be subjected to coordinate transformation to calculate point cloud data, and similarly, the point cloud data may also be subjected to back calculation of the depth image data.
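The coordinate transformation from a depth image to point cloud data can be sketched with the standard pinhole back-projection X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy, Z = depth. The intrinsic values used below are illustrative assumptions; the patent does not give calibration numbers:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of rows, metres) into a
    list of (X, Y, Z) points using the pinhole camera model.
    fx, fy are focal lengths in pixels; cx, cy is the principal point."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # zero marks "no measurement" at this pixel
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 depth map; the pixel with depth 0 produces no point
cloud = depth_to_point_cloud([[1.0, 1.0], [0.0, 2.0]],
                             fx=500, fy=500, cx=1.0, cy=1.0)
print(cloud)
```

Running the inverse of this mapping (projecting each point back through the intrinsics) recovers the depth image, which is the back-calculation mentioned above.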

Specifically, the color image shot by the color camera 11 is sent to the image processing unit 20, and the image processing unit 20 encodes the color image into image data in MJPEG format, and sends the image data to the android terminal or the remote platform for display.

Referring to fig. 2 and 3, the present embodiment provides an image obtaining method of a depth map and a cloud map based on a 3D structured light module 100, including the following steps:

in step S100, the infrared camera 12 transmits the acquired image data to the image processing unit 20.

Specifically, an optical image generated by the Lens is projected onto an image Sensor (Sensor) in the infrared camera 12, the image Sensor (Sensor) converts an optical signal into an electrical signal, and then converts the electrical signal into a digital signal through an internal analog-to-digital conversion circuit, and then transmits the digital signal to an image processing unit (ISP) 20 for processing and converting the digital signal into RGB and YUV formats for output.

Specifically, the image Sensor (Sensor) is the core of the camera: it converts the optical signal passing through the Lens into an electrical signal, which is then converted into a digital signal by an internal analog-to-digital conversion circuit. Since each pixel can only sense one of R, G and B, the data stored in each pixel is monochromatic; this most primitive sensed data is called raw data (Raw Data).

Specifically, the image processing unit (ISP) 20 functions to post-process a signal output from a front-end image Sensor (Sensor). The image processing unit (ISP) 20 quickly transfers data obtained by the image Sensor (Sensor) to the codec processing unit 30 in time and refreshes the image Sensor (Sensor), so the quality of the image processing unit (ISP) 20 chip directly affects the picture quality.

The step S100 further includes:

in step S110, the automatic exposure algorithm module automatically adjusts the exposure amount according to the intensity of the light collected by the image Sensor (Sensor) in the infrared camera 12, so that the exposure brightness value approaches the target brightness value set by the image processing unit 20.

Specifically, the image processing unit (ISP) 20 obtains the brightness of the current image output from the image Sensor (Sensor) and then gradually approximates the target brightness value set by the image processing unit (ISP) 20 by using the set exposure value.
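The gradual approach to the target brightness described above can be sketched as a simple proportional control loop. The gain, exposure limits and the toy "luma proportional to exposure" sensor model below are illustrative assumptions, not details from the patent:

```python
def auto_exposure_step(current_luma, target_luma, exposure,
                       kp=0.5, exp_min=1.0, exp_max=10000.0):
    """One iteration of a simple AE control loop: nudge the exposure
    value toward the target brightness in proportion to the error,
    clamped to the sensor's valid exposure range."""
    if current_luma <= 0:
        return exp_max  # fully dark frame: open the exposure up
    ratio = target_luma / current_luma
    new_exposure = exposure * (1.0 + kp * (ratio - 1.0))
    return max(exp_min, min(exp_max, new_exposure))

# Scene too dark (luma 40 at exposure 100, target 120): exposure rises
# and settles where the measured luma matches the target.
exp = 100.0
for _ in range(20):
    luma = 40.0 * exp / 100.0  # toy sensor model: luma ∝ exposure
    exp = auto_exposure_step(luma, 120.0, exp)
print(round(exp))  # → 300
```

With kp below 1 the loop converges smoothly instead of oscillating, which is what prevents the visible flicker mentioned in the background section.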

In step S120, the image processing unit 20 performs calibration of internal parameters and external parameters.

Specifically, the internal parameters of the camera include focal length, principal point coordinates and distortion parameters, and the external parameters include rotation and translation.

In step S200, the image processing unit 20 performs optimization processing on the image data and then sends the image data to the encoding and decoding processing unit 30.

Specifically, the image processing unit (ISP) 20 performs post-processing on the signal output by the image Sensor (Sensor), and the main functions include linear correction, noise removal, dead pixel removal, interpolation, white balance, automatic exposure control, and the like, so that the on-site details can be better restored under different optical conditions. The image processing unit (ISP) 20 directly affects the picture quality, such as: color saturation, sharpness, fluency, etc.
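Among the ISP functions listed, white balance is a representative example. The gray-world algorithm below is one common technique, assumed here purely for illustration; the patent names "white balance" without specifying the algorithm:

```python
def gray_world_white_balance(pixels):
    """Gray-world white balance: scale each channel so the mean values
    of R, G and B become equal (the 'scene averages to gray' assumption).
    pixels is a list of (R, G, B) tuples with 8-bit values."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    gains = [gray / a if a else 1.0 for a in avg]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A bluish cast (B channel uniformly high) is pulled back toward neutral.
balanced = gray_world_white_balance([(100, 100, 200), (50, 50, 100)])
print(balanced)  # → [(133, 133, 133), (67, 67, 67)]
```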

In step S300, the encoding/decoding processing unit 30 encodes/decodes the image data to generate a depth map and a point cloud map.

The step S300 further includes:

in step S310, the decoding data interface is initialized.

Specifically, the encoding and decoding processing unit 30 decodes the encoded image data by calling a data interface API (Application Programming Interface). An API is a set of predefined functions that gives applications and developers access to a set of routines of a given piece of software or hardware, without having to access the source code or understand the details of its internal working mechanism.

The step S310 further includes:

in step S311, a decoding library is initialized.

Step S312, the resource of the decoding request is released.

Specifically, the API for initializing the decoding data interface is shown in the following table:

in step S320, a data value of the depth data and a data value of the point cloud data are acquired.

Specifically, the API called before decoding is specifically shown in the following table:

step S330, calling a decoding data interface.

Specifically, when the decoding data interface is invoked, different input parameters correspond to different functions; there are three cases:

Case one: only infrared camera 12 data is requested; the corresponding return value is the infrared camera 12 data.

Case two: infrared camera 12 data and depth map data are requested; the corresponding return values are the infrared camera 12 data and the depth map data.

Case three: infrared camera 12 data, depth map data and point cloud map data are requested; the corresponding return values are the infrared camera 12 data, the depth map data and the point cloud map data.

Specifically, the decoding API is specified in the following table:

step S340, performing normalization processing on the decoded data, and sending the normalized data to the display module 40 for display.

The normalized data is output alternately in Raw10 format and YUV format, so as to distinguish the depth map data from the infrared camera 12 data.

Specifically, Raw-format data is the raw data output by the image Sensor (Sensor), generally Raw8, Raw10 or Raw12, indicating that each pixel carries 8, 10 or 12 bits of data, respectively. This is the most primitive data output by the image Sensor (Sensor); whatever format the video is finally converted into, Raw-format data must be generated first.
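As a concrete illustration of 10-bit raw data, the MIPI CSI-2 convention packs four Raw10 pixels into five bytes. That specific packing is an assumption for this sketch; the patent only states that each Raw10 pixel carries 10 bits:

```python
def unpack_raw10(packed: bytes):
    """Unpack MIPI CSI-2 style RAW10: every 5 bytes hold 4 pixels.
    Bytes 0-3 are the high 8 bits of pixels 0-3; byte 4 packs the four
    2-bit low parts (pixel 0 in bits 0-1, ..., pixel 3 in bits 6-7).
    This packing is a common convention, assumed here for illustration."""
    if len(packed) % 5:
        raise ValueError("RAW10 data length must be a multiple of 5")
    pixels = []
    for i in range(0, len(packed), 5):
        low = packed[i + 4]
        for j in range(4):
            pixels.append((packed[i + j] << 2) | ((low >> (2 * j)) & 0x3))
    return pixels

# Four pixels with high bytes 0..3 and low bits 0b11, 0b10, 0b01, 0b00
print(unpack_raw10(bytes([0, 1, 2, 3, 0b00011011])))  # → [3, 6, 9, 12]
```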

Specifically, YUV-format data is obtained by converting Raw data. The most common camera format is YUV422 with Y-U-Y-V byte ordering. Taking 8-bit YUV422 as an example, each pixel contains a luminance component Y (8 bits) and one of the two chroma components U and V (8 bits), so each pixel requires 16 bits of data.
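A minimal sketch of reading such YUV422 data, assuming the YUYV byte ordering just described, where every even-indexed byte is a luminance sample:

```python
def yuyv_to_luma(buf: bytes):
    """Extract the luminance samples from an 8-bit YUV422 buffer with
    YUYV ordering: bytes alternate Y0 U Y1 V, so every even byte is a
    Y sample and the odd bytes carry the shared U/V chroma."""
    if len(buf) % 4:
        raise ValueError("YUYV data length must be a multiple of 4")
    return list(buf[0::2])

# Two pixels sharing one U/V pair: Y0=16, U=128, Y1=235, V=128
print(yuyv_to_luma(bytes([16, 128, 235, 128])))  # → [16, 235]
```

Because two horizontally adjacent pixels share one U and one V sample, the buffer averages exactly 16 bits per pixel, matching the figure above.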

Specifically, the encoding and decoding processing unit 30 distinguishes Raw10-format data from YUV-format data by the 9th bit of each pixel: in YUV-format data, the 9th bit of every pixel is 1.
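That discrimination scheme can be sketched as follows. Interpreting "the 9th bit" as bit index 9 of each 10-bit pixel word, and checking only a small sample of pixels per frame, are both assumptions made for this illustration:

```python
def is_yuv_frame(pixel_words, sample=16):
    """Classify a frame in the alternating Raw10/YUV stream: YUV-format
    frames have bit 9 set in every pixel word, while genuine Raw10
    sensor data leaves that bit free to vary. Only the first few words
    are inspected, as an implementation shortcut."""
    BIT9 = 1 << 9
    return all(w & BIT9 for w in pixel_words[:sample])

raw_frame = [0x005, 0x1FF, 0x120]             # bit 9 not set everywhere
yuv_frame = [w | (1 << 9) for w in raw_frame]  # tagged as YUV
print(is_yuv_frame(raw_frame), is_yuv_frame(yuv_frame))  # → False True
```

Tagging frames in-band this way avoids the secondary encoding pass criticized in the background section, since no extra flag words need to be inserted into the stream.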

Referring to fig. 4 and 5, a point cloud effect map and a depth effect map of the present invention are shown.

In the 3D structured light module 100 and the depth map point cloud image obtaining method based on the same, the image processing unit (ISP) 20 is disposed at the end of the 3D structured light module 100, and the image processing unit 20 controls the exposure parameters of the automatic exposure algorithm module, so that the obtained image is clearer; meanwhile, the image processing unit 20 performs normalization processing on the image data transmitted to the encoding and decoding processing unit 30, and data in the Raw10 format and the Yuv format are alternately output, so that the data processing amount of the image processing unit 20 is reduced, the product cost of the image processing unit 20 is reduced, and the operation speed of the encoding and decoding processing unit 30 is increased. The method is simple, easy to realize, low in cost and convenient to popularize.

The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.
