Shadowing images in three-dimensional content systems

Document No.: 1026940 | Publication date: 2020-10-27

Abstract: This technology, "Shadowing images in three-dimensional content systems," was created by Andrew Ian Russell on 2019-06-10. A method includes: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including an image of a scene and depth data about the scene; using the depth data to identify first image content in the image associated with depth values that meet a criterion; and generating modified 3D information by applying a first shading process on the identified first image content. The modified 3D information may be provided to a second 3D system. The scene may contain an object in the image, and generating the modified 3D information may include determining a surface normal of second image content of the object and applying a second shading process on the second image content based on the determined surface normal. A depth value of one portion of the object may be larger than a depth value of another portion, and a second shading process may be applied on the portion of the image at which image content corresponding to that other portion is located.

1. A method, comprising:

receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including an image of a scene and depth data about the scene;

using the depth data to identify first image content in the image associated with depth values that meet a criterion; and

generating modified 3D information by applying a first shading process on the identified first image content.

2. The method of claim 1, wherein the criterion comprises the first image content exceeding a predetermined depth in the scene.

3. The method of claim 1 or 2, wherein applying the first shading process comprises causing the first image content to be rendered black.

4. The method of at least one of the preceding claims, wherein using the predetermined depth and applying the first shading process comprises causing a background of the image to be rendered black.

5. The method of at least one of the preceding claims, wherein the first shading process depends on depth values of the first image content.

6. The method of at least one of the preceding claims, wherein the criterion comprises that the first image content is closer than a predetermined depth in the scene.

7. The method of at least one of the preceding claims, wherein the scene contains an object in the image, and wherein generating the modified 3D information further comprises determining a surface normal of second image content of the object, and applying a second shading process on the second image content based on the determined surface normal.

8. The method of claim 7, wherein applying the second shading process comprises determining a dot product between the surface normal and a camera vector, and selecting the second shading process based on the determined dot product.

9. The method of claim 7 or 8, wherein applying the second shading process comprises fading the second image content to black based on the second image content facing away from the camera in the image.

10. The method of at least one of the preceding claims, wherein the scene contains an object in the image, and a depth value of a first portion of the object in the depth data is larger than a depth value of a second portion of the object, and wherein generating the modified 3D information further comprises applying a second shading process on a portion of the image at which second image content corresponding to the second portion is located.

11. The method of claim 10, wherein applying the second shading process comprises selecting a portion of the image based on a portion of a display used to render the image.

12. The method of claim 11, wherein the object comprises a person, the first portion of the object comprises a face of the person, the second portion of the object comprises a torso of the person, and the portion of the display comprises a bottom of the display.

13. The method of at least one of the preceding claims, further comprising identifying a hole in at least one of the images, wherein generating the modified 3D information comprises applying a second shading process on the hole.

14. The method according to at least one of the preceding claims, wherein generating the modified 3D information further comprises concealing a depth error in the 3D information.

15. The method of at least one of the preceding claims, wherein the depth data is based on Infrared (IR) signals returned from the scene, and wherein generating the modified 3D information comprises applying a second shading process proportional to the intensity of the IR signals.

16. The method according to at least one of the preceding claims, further comprising presenting the modified 3D information stereoscopically at a second 3D system, wherein the first image content has the first shading applied.

17. The method of claim 16, wherein presenting the modified 3D information stereoscopically comprises rendering the image in an overlapping manner.

18. The method according to at least one of the preceding claims, further comprising providing the modified 3D information to a second 3D system.

19. The method of at least one of the preceding claims, applied in a 3D content system, in particular a system in which at least two users participate in a telepresence session using at least two 3D containers for displaying, processing, and presenting image information and/or presenting audio information.

20. A system, comprising:

a camera;

a depth sensor; and

a three-dimensional (3D) content module having a processor executing instructions stored in a memory, the instructions causing the processor to identify first image content in an image of a scene included in 3D information using depth data included in the 3D information, the first image content identified as being associated with depth values that satisfy a criterion, and generate modified 3D information by applying first shading on the identified first image content.

21. The system of claim 20, wherein the scene contains an object in the image, and wherein generating the modified 3D information further comprises determining a surface normal of second image content of the object, and applying a second shading process with respect to the second image content based on the determined surface normal.

22. The system of claim 20, wherein the scene contains an object in the image, and a depth value of a first portion of the object is greater than a depth value of a second portion of the object, and wherein generating the modified 3D information further comprises applying a second shading process with respect to a portion of the image at which second image content corresponding to the second portion is located.

23. The system of at least one of claims 20 to 22, the system comprising a 3D content system, in particular a system in which at least two users participate in a telepresence session using at least two 3D containers for displaying, processing, and presenting image information and/or presenting audio information.

24. The system of claim 23, wherein the 3D content system comprises a shading processing module, the shading processing module in particular comprising a depth processing component, an angle processing component, a bottom processing component, a hole-filling component, a depth error component, and/or a rendering component.

Technical Field

This document relates generally to shading images in three-dimensional systems.

Background

Advances in computer technology and communication systems have been pursued as a way to satisfy the desire for efficient and natural long-distance communication. Video conferencing systems have been introduced in an attempt to provide natural interpersonal interaction between two or more people. However, they typically rely on two-dimensional (2D) images presented on a display, which can make the interaction feel insufficiently lifelike.

Furthermore, the advent of three-dimensional (3D) technology has not led to a sufficient improvement over existing 2D methods. For example, 3D systems may require very complex hardware, such as hardware for capturing the content to be broadcast and/or for processing that content.

Disclosure of Invention

In a first aspect, a method comprises: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including an image of a scene and depth data about the scene; using the depth data to identify first image content in the image associated with depth values that meet a criterion; and generating modified 3D information by applying a first shading process on the identified first image content.
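
For illustration only, and not as part of any claim, the following is a minimal sketch of how the first aspect could be realized, assuming the image is an RGB array, the depth data is a per-pixel depth map in meters, and the criterion is a fixed depth threshold; the function name shade_beyond_depth and the 2.5 m value are illustrative assumptions, not taken from this document.

```python
import numpy as np

def shade_beyond_depth(image, depth, max_depth=2.5):
    """Apply a first shading process to content beyond a depth criterion.

    image: (H, W, 3) float array in [0, 1]; depth: (H, W) float array in meters.
    Pixels whose depth exceeds max_depth are the "first image content" meeting
    the criterion; here they are rendered black, yielding the modified image.
    """
    shaded = image.copy()
    mask = depth > max_depth    # depth values that meet the criterion
    shaded[mask] = 0.0          # first shading process: render the content black
    return shaded
```

In such a sketch, blacking out everything beyond the threshold effectively removes the background, which is one of the effects described for the first shading process.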

Drawings

Fig. 1 shows an example of a 3D content system.

Fig. 2 shows an example of a 3D content system.

Fig. 3 illustrates an example of depth-based shading.

Fig. 4 illustrates an example of shading based on surface orientation.

Fig. 5 illustrates an example of display-location-based shading.

Figs. 6A and 6B show an example of shading the background of a 3D image.

Fig. 7 shows an example of hole filling in a 3D image.

Fig. 8 shows an example of correcting a depth error in a 3D image.

Figs. 9A and 9B show examples of rendering a 3D image in an overlapping manner.

Figs. 10 to 12 show examples of the method.

Fig. 13 shows an example of a computer device and a mobile computer device that can be used with the described technology.

Like reference symbols in the various drawings indicate like elements.

Implementations can include any or all of the following features. The criterion includes the first image content exceeding a predetermined depth in the scene. Applying the first shading process includes causing the first image content to be rendered black. Using the predetermined depth and applying the first shading process includes causing a background of the image to be rendered black. The first shading process depends on depth values of the first image content. The criterion includes the first image content being closer than a predetermined depth in the scene. The scene contains an object in the image, and generating the modified 3D information further comprises determining a surface normal of second image content of the object, and applying a second shading process on the second image content based on the determined surface normal. Applying the second shading process includes determining a dot product between the surface normal and a camera vector, and selecting the second shading process based on the determined dot product. Applying the second shading process includes fading the second image content to black based on the second image content facing away from the camera in the image. The scene contains an object in the image, and a depth value of a first portion of the object in the depth data is larger than a depth value of a second portion of the object, and generating the modified 3D information further comprises applying a second shading process on a portion of the image at which second image content corresponding to the second portion is located. Applying the second shading process includes selecting a portion of the image based on a portion of a display used to render the image. The object includes a person, the first portion of the object includes a face of the person, the second portion of the object includes a torso of the person, and the portion of the display includes a bottom of the display. The method further includes identifying a hole in at least one of the images, and generating the modified 3D information includes applying a second shading process with respect to the hole. Generating the modified 3D information further comprises concealing a depth error in the 3D information. The depth data is based on infrared (IR) signals returned from the scene, and generating the modified 3D information comprises applying a second shading process proportional to the intensity of the IR signals. The method also includes presenting the modified 3D information stereoscopically at the second 3D system, wherein the first image content has the first shading applied. Presenting the modified 3D information stereoscopically includes rendering the image in an overlapping manner. The method also includes providing the modified 3D information to a second 3D system.
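
As a rough illustration of the surface-normal-based second shading process described above, the sketch below estimates per-pixel normals from depth-map gradients, takes the dot product with a camera vector, and fades content toward black as the surface turns away from the camera; the orthographic-style normal estimate, the assumed camera direction, and the name shade_by_orientation are illustrative assumptions, not details specified in this document.

```python
import numpy as np

def shade_by_orientation(image, depth):
    """Fade image content toward black where its surface turns away from the camera.

    Normals are estimated from gradients of the depth map z = depth(y, x) and
    oriented toward the camera; the dot product with the camera vector is near 1
    for camera-facing surfaces and near 0 for surfaces seen edge-on, and the
    content is darkened in proportion (a second shading process).
    """
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack([dz_dx, dz_dy, -np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    camera_vector = np.array([0.0, 0.0, -1.0])    # assumed direction from surface to camera
    facing = np.clip(normals @ camera_vector, 0.0, 1.0)

    return image * facing[..., None]              # away-facing content fades to black
```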

In a second aspect, a system comprises: a camera; a depth sensor; and a three-dimensional (3D) content module having a processor that executes instructions stored in a memory, the instructions causing the processor to identify first image content in an image of a scene included in 3D information using depth data included in the 3D information, the first image content being identified as associated with depth values that satisfy a criterion, and to generate modified 3D information by applying a first shading process on the identified first image content.
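
Purely as a sketch of how such a content module might chain several of the shading processes mentioned in this document (depth-based background shading, bottom-of-display shading over a torso region, and shading proportional to IR signal intensity), the example below multiplies the individual shading weights together; the thresholds, the fade fraction, the multiplicative combination, and the name modify_3d_info are all illustrative assumptions.

```python
import numpy as np

def modify_3d_info(image, depth, ir_intensity, max_depth=2.5, fade_fraction=0.25):
    """Hypothetical pipeline combining several shading processes.

    image: (H, W, 3) floats in [0, 1]; depth: (H, W) in meters; ir_intensity:
    (H, W) normalized IR return strength in [0, 1].
    """
    h, w = depth.shape

    # First shading process: black out content beyond the depth criterion.
    keep = (depth <= max_depth).astype(float)

    # Bottom-of-display shading: fade the lower rows (e.g. over a torso region)
    # so that content near the bottom edge of the display is darkened.
    ramp = np.ones(h)
    n = int(h * fade_fraction)
    if n > 0:
        ramp[h - n:] = np.linspace(1.0, 0.0, n)
    bottom = np.tile(ramp[:, None], (1, w))

    # IR-proportional shading: weak IR returns usually mean unreliable depth,
    # so those pixels are darkened in proportion to the returned intensity.
    ir = np.clip(ir_intensity, 0.0, 1.0)

    weight = keep * bottom * ir
    return image * weight[..., None]
```

Multiplying the weights is just one plausible way to combine the shadings; the document itself does not specify how multiple shading processes interact.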
