Three-dimensional scene modeling method and device, electronic device, readable storage medium and computer equipment
Reading note: This technology, "Three-dimensional scene modeling method and device, electronic device, readable storage medium and computer equipment," was designed and created by Cheng Jie and Chen Yan on 2018-08-01. Its main content is as follows: The invention discloses a three-dimensional scene modeling method and device, an electronic device, a readable storage medium and a computer device. The three-dimensional scene modeling method includes: acquiring a depth image of a scene; acquiring a visible light image of the scene; processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object; calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information. The three-dimensional scene modeling method of the embodiments of the invention estimates the depth information and the color information of the occluded part of an object based on the measured depth information, the measured color information and the identified category of the occluded object, thereby supplementing the missing depth information and color information and improving the integrity of the three-dimensional modeling of the scene.
1. A three-dimensional scene modeling method, characterized by comprising:
acquiring a depth image of a scene;
acquiring a visible light image of the scene;
processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
2. The method of claim 1, wherein the depth image comprises a plurality of depth images having different capture angles, and wherein the method further comprises, prior to the step of processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object:
stitching the plurality of depth images to obtain a wide-angle depth image of the scene.
3. The method of claim 2, wherein said stitching the plurality of depth images to obtain the wide-angle depth image of the scene comprises:
determining a reference coordinate system;
converting the measured depth information into unified depth information under the reference coordinate system; and
stitching the depth images according to the unified depth information to obtain the wide-angle depth image.
4. The method of claim 2, wherein the visible light image comprises a plurality of visible light images having different capture angles, the plurality of visible light images corresponding to the plurality of depth images one-to-one, and wherein the method further comprises, before the step of processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object:
stitching the plurality of visible light images to obtain a wide-angle visible light image of the scene.
5. The method of claim 4, wherein the step of processing the depth image and the visible light image to identify the occluded object in the scene and the category of the occluded object comprises:
processing the wide-angle depth image and the wide-angle visible light image to identify the occluded object in the scene and the category of the occluded object.
6. The method of claim 5, wherein the step of processing the wide-angle depth image and the wide-angle visible light image to identify the occluded object in the scene and the category of the occluded object comprises:
processing the wide-angle depth image and the wide-angle visible light image to extract the occluded object; and
searching a two-dimensional object model library comprising two-dimensional object models of a plurality of categories for a two-dimensional object model corresponding to the occluded object, wherein the category of the two-dimensional object model is the category of the occluded object.
7. The method of claim 6, wherein the step of calculating estimated depth information and estimated color information of the occluded object according to the measured depth information indicated by the depth image, the measured color information indicated by the visible light image, and the category comprises:
acquiring size information of the occluded object according to the unified depth information and the category;
searching a three-dimensional object modeling method library including a plurality of three-dimensional object modeling methods for a three-dimensional object modeling method corresponding to the category of the occluded object, the plurality of three-dimensional object modeling methods corresponding to the plurality of two-dimensional object models one-to-one;
calculating estimated depth information of the occluded object according to the size information, coordinate information corresponding to the unified depth information, and the three-dimensional object modeling method; and
calculating estimated color information of the occluded object according to the measured color information of the occluded object and the two-dimensional object model corresponding to the occluded object.
8. The method of claim 3, wherein the step of constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information comprises:
constructing a three-dimensional model of the scene according to the unified depth information and the estimated depth information; and
mapping the three-dimensional model according to the measured color information and the estimated color information to obtain the three-dimensional color model.
9. A three-dimensional scene modeling apparatus, characterized in that the three-dimensional scene modeling apparatus comprises:
a first acquisition module for acquiring a depth image of a scene;
a second acquisition module for acquiring a visible light image of the scene;
a processing module for processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
a calculation module for calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
a construction module for constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
10. An electronic device, comprising:
a depth camera to acquire a depth image of a scene;
a visible light camera to acquire a visible light image of the scene; and
a processor to:
process the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
calculate estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
construct a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
11. The electronic device of claim 10, wherein the depth image comprises a plurality of depth images, the plurality of depth images having different capture angles, and wherein the processor is further configured to:
stitch the plurality of depth images to obtain a wide-angle depth image of the scene.
12. The electronic device of claim 11, wherein the processor is further configured to:
determine a reference coordinate system;
convert the measured depth information into unified depth information under the reference coordinate system; and
stitch the depth images according to the unified depth information to obtain the wide-angle depth image.
13. The electronic device of claim 11, wherein the visible light image comprises a plurality of visible light images having different capture angles, the plurality of visible light images corresponding to the plurality of depth images one-to-one, and the processor is further configured to:
stitch the plurality of visible light images to obtain a wide-angle visible light image of the scene.
14. The electronic device of claim 13, wherein the processor is further configured to:
process the wide-angle depth image and the wide-angle visible light image to identify the occluded object in the scene and the category of the occluded object.
15. The electronic device of claim 14, wherein the processor is further configured to:
process the wide-angle depth image and the wide-angle visible light image to extract the occluded object; and
search a two-dimensional object model library comprising two-dimensional object models of a plurality of categories for a two-dimensional object model corresponding to the occluded object, wherein the category of the two-dimensional object model is the category of the occluded object.
16. The electronic device of claim 15, wherein the processor is further configured to:
acquire size information of the occluded object according to the unified depth information and the category;
search a three-dimensional object modeling method library including a plurality of three-dimensional object modeling methods for a three-dimensional object modeling method corresponding to the category of the occluded object, the plurality of three-dimensional object modeling methods corresponding to the plurality of two-dimensional object models one-to-one;
calculate estimated depth information of the occluded object according to the size information, coordinate information corresponding to the unified depth information, and the three-dimensional object modeling method; and
calculate estimated color information of the occluded object according to the measured color information of the occluded object and the two-dimensional object model corresponding to the occluded object.
17. The electronic device of claim 12, wherein the processor is further configured to:
construct a three-dimensional model of the scene according to the unified depth information and the estimated depth information; and
map the three-dimensional model according to the measured color information and the estimated color information to obtain the three-dimensional color model.
18. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the three-dimensional scene modeling method of any one of claims 1 to 8.
19. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the three-dimensional scene modeling method of any one of claims 1 to 8.
Technical Field
The present invention relates to the field of three-dimensional modeling technologies, and in particular, to a three-dimensional scene modeling method, a three-dimensional scene modeling apparatus, an electronic apparatus, a non-volatile computer-readable storage medium, and a computer device.
Background
In existing three-dimensional scene modeling, a depth image of a scene is usually captured by a depth camera, a two-dimensional visible light image is captured by a visible light camera, and the scene is modeled three-dimensionally by combining the depth information of the depth image and the color information of the two-dimensional visible light image. However, partially occluded objects inevitably exist in the scene, and the depth camera cannot acquire the depth information of the occluded part of such an object, so the occluded part cannot be modeled during three-dimensional scene modeling, which reduces the integrity of the three-dimensional model of the scene.
Disclosure of Invention
The embodiments of the invention provide a three-dimensional scene modeling method, a three-dimensional scene modeling apparatus, an electronic device, a non-volatile computer-readable storage medium and a computer device.
The three-dimensional scene modeling method of the embodiment of the invention comprises the following steps:
acquiring a depth image of a scene;
acquiring a visible light image of the scene;
processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
The three-dimensional scene modeling apparatus of the embodiment of the invention includes a first acquisition module, a second acquisition module, a processing module, a calculation module and a construction module. The first acquisition module is used to acquire a depth image of the scene. The second acquisition module is used to acquire a visible light image of the scene. The processing module is used to process the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object. The calculation module is used to calculate estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category. The construction module is used to construct a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
An electronic device of an embodiment of the invention includes a depth camera, a visible light camera and a processor. The depth camera is used to acquire a depth image of the scene. The visible light camera is used to acquire a visible light image of the scene. The processor is configured to process the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object, calculate estimated depth information and estimated color information of the occluded object according to the measured depth information indicated by the depth image, the measured color information indicated by the visible light image and the category, and construct a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
One or more non-transitory computer-readable storage media embodying computer-executable instructions that, when executed by one or more processors, cause the processors to perform the three-dimensional scene modeling method described above.
The computer device of the embodiment of the invention comprises a memory and a processor, wherein the memory stores computer readable instructions, and the instructions, when executed by the processor, enable the processor to execute the three-dimensional scene modeling method.
The three-dimensional scene modeling method, the three-dimensional scene modeling apparatus, the electronic device, the non-volatile computer-readable storage medium and the computer device of the embodiments of the invention estimate the depth information and the color information of the occluded part of the occluded object based on three quantities: the measured depth information of the scene, the measured color information of the scene and the identified category of the occluded object. The missing depth information and color information of the occluded part are thereby supplemented, and the integrity of the three-dimensional modeling of the scene is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 2 is a block schematic diagram of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the present invention.
FIG. 3 is a schematic structural diagram of an electronic device according to some embodiments of the invention.
FIG. 4 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 5 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 6 is a block schematic diagram of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the invention.
FIG. 7 is a block diagram of a stitching module of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the present invention.
FIG. 8 is a scene schematic of a three-dimensional scene modeling method of some embodiments of the invention.
FIG. 9 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 10 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 11 is a block schematic diagram of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the invention.
FIG. 12 is a block schematic diagram of a processing unit of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the invention.
FIG. 13 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 14 is a block diagram of a calculation module of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the present invention.
FIG. 15 is a flow diagram of a method for modeling a three-dimensional scene in accordance with certain embodiments of the invention.
FIG. 16 is a block diagram of a stitching module of a three-dimensional scene modeling apparatus in accordance with certain embodiments of the present invention.
FIG. 17 is a block diagram of a computer device according to some embodiments of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1, the present invention provides a three-dimensional scene modeling method. The three-dimensional scene modeling method comprises the following steps:
S1: acquiring a depth image of a scene;
S3: acquiring a visible light image of the scene;
S5: processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
S7: calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
S9: constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
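For illustration only, the following minimal Python sketch outlines how steps S1 to S9 could be wired together in software. Every function name and signature here is an assumption introduced for readability, not part of the claimed method; the camera drivers, recognition and modeling routines are assumed to be supplied by the surrounding system.

```python
def model_scene(capture_depth, capture_visible, identify_occluded,
                estimate_occluded, build_model):
    """Outline of steps S1-S9; all callables are assumed to be provided
    by the surrounding system (camera drivers, recognition and modeling)."""
    depth_img = capture_depth()                          # S1: acquire depth image
    color_img = capture_visible()                        # S3: acquire visible light image
    obj_mask, category = identify_occluded(depth_img, color_img)        # S5
    est_depth, est_color = estimate_occluded(depth_img, color_img,
                                             obj_mask, category)        # S7
    return build_model(depth_img, color_img, est_depth, est_color)      # S9
```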
Referring to fig. 2, the present invention further provides a three-dimensional scene modeling apparatus. The three-dimensional scene modeling apparatus includes a first acquisition module, a second acquisition module, a processing module, a calculation module and a construction module.
That is, the three-dimensional scene modeling method according to the embodiment of the present invention may be implemented by the three-dimensional scene modeling apparatus: the first acquisition module may be used to perform step S1, the second acquisition module may be used to perform step S3, the processing module may be used to perform step S5, the calculation module may be used to perform step S7, and the construction module may be used to perform step S9.
Referring to fig. 3, the present invention also provides an electronic device. The electronic device includes a depth camera, a visible light camera and a processor.
That is, the three-dimensional scene modeling method according to the embodiment of the present invention may also be implemented by the electronic device.
The depth camera is used to acquire the depth image of the scene, that is, to perform step S1. The visible light camera is used to acquire the visible light image of the scene, that is, to perform step S3.
The processor is used to process the depth image and the visible light image to identify the occluded object in the scene and the category of the occluded object, that is, to perform step S5.
The processor is further used to calculate the estimated depth information and the estimated color information of the occluded object according to the measured depth information, the measured color information and the category, that is, to perform step S7.
The processor is also used to construct the three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information, that is, to perform step S9.
Existing three-dimensional scene modeling methods generally acquire a plurality of depth images and a plurality of visible light images of a scene, and then model the scene three-dimensionally based on the depth images and the visible light images. When a scene is modeled three-dimensionally, each object in the scene is generally modeled individually, so that a plurality of three-dimensional object models are obtained, and these three-dimensional object models together form the three-dimensional model of the whole scene. However, when the complexity of the scene is high, for example when there are many objects in the scene or the objects are placed in a disordered manner, some objects in the scene may still be partially occluded even if a plurality of depth images are captured. As shown in fig. 8, two throw pillows are stacked on the sofa, and a partial area of one throw pillow is covered by the other. In this case, neither the depth image nor the visible light image of the occluded part of the object can be acquired, so a complete three-dimensional object model of that object cannot be built during the subsequent three-dimensional modeling of the scene, which compromises the integrity of the three-dimensional modeling of the scene.
According to the three-dimensional scene modeling method of the embodiment of the invention, the depth information and the color information of the occluded part of the occluded object are estimated based on the measured depth information, the measured color information and the identified category of the occluded object, so that the missing depth information and color information are supplemented and the integrity of the three-dimensional modeling of the scene is improved.
The color information includes chromatic information and black-and-white information. The chromatic information refers to colors such as red, yellow, blue and green, and the black-and-white information includes black, white, gray and the like.
In summary, the three-dimensional scene modeling method, the three-dimensional scene modeling apparatus, the electronic device, the non-volatile computer-readable storage medium and the computer device of the embodiments of the invention estimate the depth information and the color information of the occluded part of the occluded object based on the measured depth information, the measured color information and the identified category of the occluded object, thereby supplementing the missing depth information and color information and improving the integrity of the three-dimensional modeling of the scene.
Referring to fig. 4 and 5 together, in some embodiments, the three-dimensional scene modeling method according to the embodiment of the present invention further includes, before step S5:
S41: stitching the plurality of depth images to obtain a wide-angle depth image of the scene.
Wherein, step S41 includes:
S411: determining a reference coordinate system;
S412: converting the measured depth information into unified depth information under the reference coordinate system; and
S413: stitching the depth images according to the unified depth information to obtain the wide-angle depth image.
Referring to fig. 6 and 7 together, in some embodiments, the three-dimensional scene modeling apparatus further includes a stitching module, which is used to stitch the plurality of depth images to obtain the wide-angle depth image of the scene.
That is, step S41 may be implemented by the stitching module, and step S411, step S412 and step S413 may be implemented by corresponding units of the stitching module.
Referring back to fig. 3, in some embodiments, step S41, step S411, step S412 and step S413 may also be implemented by the processor of the electronic device.
Specifically, due to the limited field of view of the depth camera, a single depth image usually cannot cover the entire scene. Therefore, a plurality of depth images of the scene are captured at different angles, for example a depth image A1, a depth image B1, a depth image C1 and a depth image D1.
After acquiring the plurality of depth images, the processor stitches them to obtain the wide-angle depth image of the scene.
For each depth image, there is a pixel coordinate system u-v (i.e., with the vertex at the top left of the sensor array of the depth camera as the origin). Based on the pixel coordinates (u, v) of each pixel and the measured depth value of that pixel, the three-dimensional coordinates of the corresponding scene point can be calculated.
For depth images shot at different viewing angles, the camera coordinate systems are different. Therefore, a reference coordinate system is first determined, and the measured depth information of each depth image is converted into unified depth information under the reference coordinate system, so that the coordinates of points in different depth images are expressed in the same coordinate system.
Subsequently, the depth images are stitched according to the unified depth information: if a point P1 in one depth image and a point P2 in another depth image have matching coordinates (x, y, z) under the reference coordinate system, the two points are regarded as the same scene point, and the overlapping regions of the depth images are merged accordingly to obtain the wide-angle depth image.
It should be noted that stitching the depth images based on the matching of coordinates requires the resolution of the depth images to be greater than a preset resolution. It can be understood that if the resolution of the depth images is low, the accuracy of the coordinates (x, y, z) is relatively low; in this case, when matching is performed directly according to the coordinates, the point P1 and the point P2 may not actually coincide but differ by an offset whose value exceeds the error threshold. If the resolution of the depth images is high, the accuracy of the coordinates (x, y, z) is relatively high; in this case, even if the point P1 and the point P2 do not exactly coincide and differ by an offset, the value of the offset is smaller than the error threshold, that is, within the allowable error range, and has little effect on the stitching of the depth images.
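As an illustrative aside (not part of the patent text), the conversion of measured depth values into unified coordinates and the coordinate-based matching described above could look roughly like the following sketch. It assumes a pinhole depth camera with known intrinsics (fx, fy, cx, cy) and a known camera pose for each depth image, and treats points closer than a small tolerance as the same scene point (the P1/P2 matching above); all names and parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def depth_to_unified_points(depth, fx, fy, cx, cy, cam_to_ref):
    """Back-project a depth image into 3-D points in the reference coordinate
    system (the 'unified depth information').

    depth      : (H, W) array of measured depth values
    fx, fy     : focal lengths of the depth camera in pixels (assumed known)
    cx, cy     : principal point of the depth camera (assumed known)
    cam_to_ref : (4, 4) pose of this camera in the reference coordinate system
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                       # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_ref = (cam_to_ref @ pts.T).T[:, :3]     # unified coordinates (x, y, z)
    return pts_ref[z.reshape(-1) > 0]           # keep valid measurements only

def stitch_point_sets(points_a, points_b, tol=0.01):
    """Merge two point sets: a point of B lying within `tol` of some point of A
    (like P1 and P2 above) is treated as the same scene point and dropped."""
    dist, _ = cKDTree(points_a).query(points_b)
    return np.vstack([points_a, points_b[dist > tol]])
```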
Referring to fig. 4, after the stitching of the depth images, in some embodiments, the three-dimensional scene modeling method according to the embodiment of the present invention further includes, before step S5:
S42: stitching the plurality of visible light images to obtain a wide-angle visible light image of the scene.
Referring back to fig. 6, step S42 can also be implemented by the stitching module of the three-dimensional scene modeling apparatus.
Referring back to fig. 3, step S42 can also be implemented by the processor of the electronic device.
It will be appreciated that each depth image corresponds to one visible light image; for example, in the above example, the depth image A1 corresponds to the visible light image A2, the depth image B1 corresponds to the visible light image B2, the depth image C1 corresponds to the visible light image C2, and the depth image D1 corresponds to the visible light image D2. Taking the stitching of the depth image A1 and the depth image B1 as an example, during the stitching some pixel points P1 in the depth image A1 overlap with some pixel points P2 in the depth image B1. The visible light images can therefore be stitched by matching pixel points in the visible light images according to the matching information of the pixel points obtained during the depth image stitching.
Specifically, the stitching of the visible light image A2 and the visible light image B2 is taken as an example. Since the relative position between the depth camera and the visible light camera is fixed, the pixel points in the visible light image A2 that correspond to the overlapping pixel points P1 in the depth image A1, and the pixel points in the visible light image B2 that correspond to the overlapping pixel points P2 in the depth image B1, can be determined; the visible light image A2 and the visible light image B2 are then stitched by aligning these corresponding pixel points.
Of course, in some embodiments, the visible light images may also be registered and stitched without relying on the depth images. Still taking the visible light image A2 and the visible light image B2 as an example, registration parameters may be selected according to the structural similarity (SSIM) between the two images, which compares the luminance l(A2, B2), the contrast c(A2, B2) and the structure s(A2, B2) of the two images,
wherein μA2 and μB2 respectively denote the mean values of the visible light image A2 and the visible light image B2, σA2 and σB2 respectively denote the variances of the visible light image A2 and the visible light image B2, σA2B2 denotes the covariance of the visible light image A2 and the visible light image B2, and C1, C2 and C3 are constants.
The expression of the average structural similarity SSIM is: SSIM(A2, B2) = l(A2, B2) × c(A2, B2) × s(A2, B2). The value of SSIM lies in the range [0, 1]; the larger the value, the smaller the distortion between the visible light image A2 and the visible light image B2, and the higher their similarity.
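By way of illustration only, a global (whole-image) SSIM between two grayscale image regions could be computed as below. The constants correspond to the values commonly used for 8-bit images; the patent itself only states that C1, C2 and C3 are constants, so these values, and the use of whole-image statistics rather than local windows, are assumptions.

```python
import numpy as np

def ssim(img_a, img_b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Structural similarity of two equally sized grayscale images, following
    the standard luminance x contrast x structure decomposition."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()               # mean values
    sd_a, sd_b = a.std(), b.std()                 # standard deviations
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()     # covariance
    c3 = c2 / 2.0
    l = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)   # luminance
    c = (2 * sd_a * sd_b + c2) / (sd_a ** 2 + sd_b ** 2 + c2)   # contrast
    s = (cov_ab + c3) / (sd_a * sd_b + c3)                      # structure
    return l * c * s   # SSIM(A2, B2) = l(A2, B2) x c(A2, B2) x s(A2, B2)
```

In a registration setting, such a score would typically be evaluated for several candidate overlap regions between the visible light image A2 and the visible light image B2, and the registration parameters giving the highest SSIM would be kept.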
After the registration parameters are selected, the visible light image A2 and the visible light image B2 are stitched according to the registration parameters. The remaining visible light images are stitched in the same manner, so as to finally obtain the wide-angle visible light image of the scene.
Referring to fig. 9 and 10 together, in some embodiments, after the depth image and the visible light image are stitched, step S5 of processing the depth image and the visible light image to identify the occluded object in the scene and the category of the occluded object includes:
S51: processing the wide-angle depth image and the wide-angle visible light image to identify the occluded object in the scene and the category of the occluded object.
Wherein, step S51 further includes:
S511: processing the wide-angle depth image and the wide-angle visible light image to extract the occluded object; and
S512: searching a two-dimensional object model library comprising two-dimensional object models of a plurality of categories for a two-dimensional object model corresponding to the occluded object, wherein the category of the two-dimensional object model is the category of the occluded object.
Referring to fig. 11 and 12 together, in some embodiments, step S51, step S511 and step S512 may be implemented by the processing module of the three-dimensional scene modeling apparatus and the units thereof.
Referring back to fig. 3, in some embodiments, step S51, step S511 and step S512 may be implemented by the processor of the electronic device.
Wherein, the two-dimensional object model library may be stored in the electronic device in advance.
Specifically, the processor first processes the wide-angle depth image and the wide-angle visible light image to extract the occluded object from the scene.
The processor then searches the two-dimensional object model library, which includes two-dimensional object models of a plurality of categories, for a two-dimensional object model corresponding to the extracted occluded object; the category of the matched two-dimensional object model is taken as the category of the occluded object. For example, in the scene shown in fig. 8, the partially covered throw pillow can be matched to the two-dimensional object model of a pillow, so that its category is identified as a pillow.
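The patent does not specify how the two-dimensional object model library is organized or how the matching is performed. Purely as a sketch, one could represent each two-dimensional object model by a feature descriptor and pick the category of the nearest descriptor; every function, feature and category name below is hypothetical.

```python
import numpy as np

def describe_region(mask, color_crop):
    """Toy descriptor of an extracted object region: aspect ratio plus mean
    color. A real system would use richer features (contours, texture, ...)."""
    h, w = mask.shape
    aspect = w / max(h, 1)
    mean_color = color_crop[mask > 0].mean(axis=0) / 255.0
    return np.concatenate([[aspect], mean_color])

def identify_category(descriptor, model_library):
    """Return the category whose two-dimensional object model descriptor is
    closest to the descriptor of the extracted occluded object."""
    best_category, best_dist = None, np.inf
    for category, model_descriptor in model_library.items():
        dist = np.linalg.norm(descriptor - model_descriptor)
        if dist < best_dist:
            best_category, best_dist = category, dist
    return best_category

# Hypothetical usage:
# model_library = {"pillow": np.array([1.4, 0.8, 0.2, 0.2]),
#                  "sofa":   np.array([2.5, 0.5, 0.4, 0.3])}
# category = identify_category(describe_region(mask, crop), model_library)
```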
Referring to fig. 13, in some embodiments, step S7 of calculating the estimated depth information and the estimated color information of the occluded object according to the measured depth information indicated by the depth image, the measured color information indicated by the visible light image, and the category includes:
S71: acquiring size information of the occluded object according to the unified depth information and the category;
S72: searching a three-dimensional object modeling method library including a plurality of three-dimensional object modeling methods for a three-dimensional object modeling method corresponding to the category of the occluded object, the plurality of three-dimensional object modeling methods corresponding to the plurality of two-dimensional object models one-to-one;
S73: calculating estimated depth information of the occluded object according to the size information, coordinate information corresponding to the unified depth information, and the three-dimensional object modeling method; and
S74: calculating estimated color information of the occluded object according to the measured color information of the occluded object and the two-dimensional object model corresponding to the occluded object.
Referring to fig. 14, in some embodiments, step S71, step S72, step S73 and step S74 may be implemented by the calculation module of the three-dimensional scene modeling apparatus.
That is, the calculation module is used to acquire the size information of the occluded object, search the three-dimensional object modeling method library, calculate the estimated depth information of the occluded object and calculate the estimated color information of the occluded object.
Referring back to fig. 3, in some embodiments, step S71, step S72, step S73 and step S74 may be implemented by the processor of the electronic device.
Wherein, the three-dimensional object modeling method library may be stored in the electronic device in advance.
Specifically, after identifying the occluded object and the category of the occluded object, the processor acquires the size information of the occluded object according to the unified depth information and the category, searches the three-dimensional object modeling method library for the three-dimensional object modeling method corresponding to the category, and calculates the estimated depth information of the occluded part of the occluded object according to the size information, the coordinate information corresponding to the unified depth information and the three-dimensional object modeling method. The estimated color information of the occluded part is then calculated according to the measured color information of the occluded object and the two-dimensional object model corresponding to the occluded object.
In this way, the depth information and the color information of the occluded part of the occluded object can be supplemented, and the complete depth information and the complete color information of the whole scene can be obtained.
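The concrete three-dimensional object modeling method associated with each category is not disclosed in the patent. The sketch below therefore substitutes a deliberately simple stand-in, approximating a "pillow"-category object by an ellipsoid of the measured size, only to show how size information, unified coordinates and a category-specific routine could combine to produce estimated depth and color information; every function and parameter is an assumption.

```python
import numpy as np

def estimate_occluded_surface(visible_points, size_xyz, grid=64):
    """Stand-in 'three-dimensional object modeling method' for one category:
    fit an axis-aligned ellipsoid of the given size around the visible points
    and sample its upper surface as estimated depth information.

    visible_points : (N, 3) unified coordinates of the measured (visible) part
    size_xyz       : (3,) estimated extent of the object along x, y, z
    """
    center = visible_points.mean(axis=0)
    rx, ry, rz = np.asarray(size_xyz, dtype=np.float64) / 2.0
    xs = np.linspace(center[0] - rx, center[0] + rx, grid)
    ys = np.linspace(center[1] - ry, center[1] + ry, grid)
    X, Y = np.meshgrid(xs, ys)
    nx = (X - center[0]) / rx
    ny = (Y - center[1]) / ry
    inside = nx ** 2 + ny ** 2 <= 1.0
    Z = center[2] + rz * np.sqrt(np.clip(1.0 - nx ** 2 - ny ** 2, 0.0, 1.0))
    return np.stack([X[inside], Y[inside], Z[inside]], axis=-1)

def estimate_occluded_color(measured_colors):
    """Stand-in color estimate: fill the occluded part with the mean measured
    color; a two-dimensional object model could instead supply a texture."""
    return measured_colors.reshape(-1, 3).mean(axis=0)
```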
Referring to fig. 15, in some embodiments, the step S9 of constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information, and the estimated color information includes:
S91: constructing a three-dimensional model of the scene according to the unified depth information and the estimated depth information; and
S92: mapping the three-dimensional model according to the measured color information and the estimated color information to obtain the three-dimensional color model.
Referring to fig. 16, in some embodiments, step S91 and step S92 may be implemented by the construction module of the three-dimensional scene modeling apparatus.
Referring back to fig. 3, in some embodiments, step S91 and step S92 may also be implemented by the processor of the electronic device.
Specifically, after acquiring the unified depth information and the estimated depth information, the processor constructs a three-dimensional model of the scene according to the unified depth information and the estimated depth information, and then maps the measured color information and the estimated color information onto the three-dimensional model to obtain the three-dimensional color model of the scene.
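As a minimal sketch only, assuming the scene model is represented as a colored point cloud (the patent does not prescribe any particular representation), steps S91 and S92 could be combined as follows; the array layout and function name are assumptions.

```python
import numpy as np

def build_color_model(measured_pts, measured_colors, estimated_pts, estimated_colors):
    """Assemble a colored point cloud from measured and estimated data.

    measured_pts / estimated_pts       : (N, 3) and (M, 3) unified coordinates
    measured_colors / estimated_colors : (N, 3) and (M, 3) RGB values

    Returns an (N + M, 6) array of [x, y, z, r, g, b] rows; a mesh could then
    be reconstructed from these points, but that step is outside this sketch.
    """
    points = np.vstack([measured_pts, estimated_pts])        # step S91: geometry
    colors = np.vstack([measured_colors, estimated_colors])  # step S92: color mapping
    return np.hstack([points, colors])
```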
In some embodiments, after the processor stitches the plurality of depth images to obtain the wide-angle depth image, the processor may skip stitching the visible light images and instead directly perform the operations of extracting and identifying the occluded object and calculating the estimated depth information and the estimated color information based on the wide-angle depth image and the plurality of visible light images. When the three-dimensional model is subsequently mapped, the processor maps the three-dimensional model directly based on the measured color information indicated by the plurality of visible light images and the calculated estimated color information, so as to finally obtain the three-dimensional color model of the scene.
In some embodiments, the processor may also directly perform the operations of extracting and identifying the occluded object and calculating the estimated depth information and the estimated color information based on the depth images and the visible light images, then stitch the depth images based on the measured depth information and the estimated depth information and stitch the visible light images based on the measured color information and the estimated color information, and finally construct the three-dimensional model based on the wide-angle depth image and map the three-dimensional model based on the wide-angle visible light image, so as to obtain the three-dimensional color model of the scene.
In some embodiments, the processor may also directly perform the operations of extracting and identifying the occluded object and calculating the estimated depth information and the estimated color information based on the depth images and the visible light images, then stitch the depth images based on the measured depth information and the estimated depth information to obtain the wide-angle depth image, and finally construct the three-dimensional model based on the wide-angle depth image and map the three-dimensional model based on the plurality of visible light images, so as to obtain the three-dimensional color model of the scene.
Referring to fig. 17, the present invention further provides a computer device. The computer device includes a memory and a processor.
The memory stores computer-readable instructions. When the instructions are executed by the processor, the processor is caused to perform the three-dimensional scene modeling method of any one of the above embodiments.
For example, when the instructions are executed by the processor, the processor performs the following steps:
controlling a depth camera to acquire a depth image of a scene;
controlling a visible light camera to acquire a visible light image of the scene;
processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
For another example, when the instructions are executed by the processor, the processor performs the following steps:
determining a reference coordinate system;
converting the measured depth information into unified depth information under the reference coordinate system; and
stitching the depth images according to the unified depth information to obtain the wide-angle depth image.
The present invention also provides one or more non-transitory computer-readable storage media containing computer-executable instructions. When the computer-executable instructions are executed by one or more processors, the processors are caused to perform the three-dimensional scene modeling method of any one of the above embodiments.
For example, when the computer-executable instructions are executed by the one or more processors, the processors perform the following steps:
controlling a depth camera to acquire a depth image of a scene;
controlling a visible light camera to acquire a visible light image of the scene;
processing the depth image and the visible light image to identify an occluded object in the scene and a category of the occluded object;
calculating estimated depth information and estimated color information of the occluded object according to measured depth information indicated by the depth image, measured color information indicated by the visible light image, and the category; and
constructing a three-dimensional color model of the scene according to the measured depth information, the measured color information, the estimated depth information and the estimated color information.
As another example, when the computer-executable instructions are executed by the one or more processors, the processors perform the following steps:
determining a reference coordinate system;
converting the measured depth information into unified depth information under the reference coordinate system; and
stitching the depth images according to the unified depth information to obtain the wide-angle depth image.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the different embodiments or examples and the features of the different embodiments or examples described in this specification can be combined by those skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.