Method, apparatus and program for displaying 3D representation of object based on orientation information of display

Document No.: 116757  Publication date: 2021-10-19

Reading note: This technology, "Method, apparatus and program for displaying 3D representation of object based on orientation information of display", was designed and created by R·莫塔, L·R·杨格斯, and M·金 on 2016-09-23. Its main content is as follows: The present disclosure relates to a method, apparatus, and program for displaying a 3D representation of an object based on orientation information of a display. Techniques are disclosed for displaying graphical elements in a manner that simulates three-dimensional (3D) visibility, including parallax and shading. More specifically, multiple images, each captured with a known spatial relationship to the target 3D object, may be used to construct a lighting model of the target object. For example, in one embodiment, this may be accomplished using Polynomial Texture Mapping (PTM) with spherical or hemispherical harmonics. Relatively few base images may be identified using PTM techniques. When the target object is to be displayed, orientation information may be used to generate a combination of the base images so as to simulate a 3D representation of the target object.

1. An electronic device, comprising:

a memory;

a display unit coupled to the memory;

an orientation sensor element; and

one or more processors coupled to the memory, the display unit, and the orientation sensor element, the one or more processors configured to execute program instructions stored in the memory that cause the electronic device to:

obtain orientation information of the electronic device from the orientation sensor element,

obtain an image of an object based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image is indicative of a three-dimensional representation of the object at a viewing angle corresponding to the orientation information of the electronic device, and

display the obtained image of the object on the display unit.

2. The electronic device of claim 1, wherein the orientation information comprises an orientation of the electronic device with respect to a gravitational field.

3. The electronic device of claim 1, wherein the program instructions to obtain the image of the object comprise program instructions to select the image from the plurality of images of the object at different viewing angles.

4. The electronic device of claim 1, wherein the program instructions to obtain the image of the object comprise program instructions to generate the image based on two or more images of the plurality of images of the object at different viewing angles.

5. The electronic device of claim 4, wherein the two or more images of the plurality of images include a first image and a second image, and wherein the first image and the second image include images of the light model of the object at viewing angles that most closely correspond to the orientation information of the electronic device.

6. The electronic device of claim 1, further comprising program instructions stored in the memory that cause the electronic device to:

add a synthetic shadow to the obtained image based on the orientation information to generate a modified image of the object; and

display, by the display unit, the modified image of the object.

7. The electronic device of claim 1, wherein the light model of the object comprises a polynomial texture mapping model.

8. The electronic device of claim 1, wherein the light model of the object comprises parallax information.

9. A computer-readable storage medium comprising instructions stored thereon that, when executed, cause one or more processors to:

obtain orientation information of an electronic device from an orientation sensor element, wherein the electronic device comprises a display unit;

obtain an image of an object for display based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image indicates a three-dimensional representation of the object at a viewing angle corresponding to the orientation information of the electronic device; and

display the obtained image of the object on the display unit.

10. The computer-readable storage medium of claim 9, wherein the instructions that cause the one or more processors to obtain orientation information further comprise instructions that cause the one or more processors to determine the orientation information based on a gravitational field.

11. The computer-readable storage medium of claim 9, wherein the instructions that cause the one or more processors to obtain the image of the object further comprise instructions to select an image from the plurality of images of the object at different viewing angles.

12. The computer-readable storage medium of claim 9, wherein the instructions that cause the one or more processors to obtain the image of the object further comprise instructions to generate the image based on two or more images of the plurality of images of the object at different viewing angles.

13. The computer-readable storage medium of claim 12, wherein the two or more images of the plurality of images comprise a first image and a second image, and wherein the first image and the second image comprise images of the light model of the object at viewing angles that most closely correspond to the orientation information of the electronic device.

14. The computer-readable storage medium of claim 12, wherein the light model of the object comprises a polynomial texture mapping model.

15. The computer-readable storage medium of claim 9, wherein the light model of the object includes parallax information.

16. A method for displaying a three-dimensional representation of an object, comprising:

obtaining orientation information of an electronic device from an orientation sensor element of the electronic device;

obtaining an image of the object based on a light model of the object and the orientation information, wherein the light model of the object comprises a plurality of images of the object at different viewing angles, and wherein the obtained image is indicative of a three-dimensional representation of the object at a viewing angle corresponding to the orientation information of the electronic device; and

displaying the obtained image of the object on a display unit associated with the electronic device.

17. The method of claim 16, wherein the orientation information comprises an orientation of the electronic device relative to a gravitational field or an orientation of the electronic device relative to a light source.

18. The method of claim 16, wherein the light model of the object comprises a polynomial texture mapping model or parallax information.

19. The method of claim 16, wherein obtaining the image of the object comprises selecting an image from the plurality of images of the object at different viewing angles.

20. The method of claim 16, wherein obtaining the image of the object comprises generating the image based on two or more images of the plurality of images of the object at different viewing angles.

21. The method of claim 20, wherein the two or more images of the plurality of images comprise a first image and a second image, and wherein the first image and the second image comprise images of the light model of the object at viewing angles that most closely correspond to the orientation information of the electronic device.

22. The method of claim 16, further comprising:

adding a synthetic shadow to the obtained image based on the orientation information to generate a modified image of the object; and

displaying, by the display unit, the modified image of the object.

Background

The realistic display of three-dimensional (3D) objects on two-dimensional (2D) surfaces has been a long-standing goal of the image-processing field. One way to simulate a 3D object is to capture a large number of images, each illuminated from a different location. A particular image may then be selected and displayed based on the detected position of the light source (e.g., as detected by an ambient or color light sensor). Another approach is to capture a large number of images, each with the 3D object at a different position relative to a fixed light source. Again, a particular image may be selected and displayed based on the determined orientation of the 3D object (e.g., determined using an accelerometer). Yet another approach combines the two approaches described above so that both illumination position and object orientation can be taken into account. It should be apparent that the number of images required by either of the first two methods may become very large, making them difficult to implement on low-memory devices.

Disclosure of Invention

In one embodiment, the disclosed concepts provide a method of displaying a three-dimensional (3D) representation of an object based on orientation information. The method includes displaying a first image of an object on a display unit of an electronic device, wherein the first image is indicative of a first 3D representation of the object; determining orientation information of the electronic device (based on output from one or more sensors integral to the electronic device); determining a second image to display based on a light model of the object and the orientation information; adding a synthetic shadow to the second image based on the orientation information to generate a third image; and displaying the third image of the object on the display unit, wherein the third image is indicative of a second 3D representation of the object, the second 3D representation being different from the first 3D representation.

In one implementation, orientation information may be determined with respect to a gravitational field using, for example, an accelerometer or a gyroscope. In another embodiment, the orientation information may be based on the direction of incident light. In still another embodiment, an image may be captured (in the direction of light emitted from the display unit) coincident with display of the first image. That image may then be analyzed to identify certain types of objects and, in turn, to determine the orientation of the electronic device. By way of example, if the captured image includes a face, the angle of the face within the captured frame may provide some orientation information. Various types of light models may be used. In one embodiment, the light model may be a Polynomial Texture Mapping (PTM) model. In general, the model may encode or predict, based on the orientation information, the angle of the light and thus the appearance of the object. Parallax information can be incorporated into the model or, like shadows, added synthetically. Computer-executable programs for implementing the disclosed methods may be stored in any medium that is readable and executable by a computer system.
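For concreteness, the following Python sketch shows one way the display-update flow summarized above could be organized. It is illustrative only: the callables passed in (read_orientation, light_model, add_shadow, show) are hypothetical placeholders for the sensor, light-model, shadow, and display components, not APIs defined by this disclosure.

```python
from typing import Callable, Tuple

import numpy as np

Orientation = Tuple[float, float]  # e.g., (x, y) tilt components

def update_display(
    read_orientation: Callable[[], Orientation],
    light_model: Callable[[Orientation], np.ndarray],
    add_shadow: Callable[[np.ndarray, Orientation], np.ndarray],
    show: Callable[[np.ndarray], None],
) -> None:
    """One pass of the display flow summarized above."""
    # Determine orientation information from one or more integral sensors
    # (e.g., relative to the gravitational field or a detected light source).
    orientation = read_orientation()
    # Determine the next image to display from the object's light model.
    image = light_model(orientation)
    # Add a synthetic shadow based on the same orientation information.
    shaded = add_shadow(image, orientation)
    # Display the resulting 3D representation of the object.
    show(shaded)
```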

Drawings

Fig. 1 illustrates a two-phase operation according to one embodiment.

Fig. 2A and 2B illustrate two baseline image capture operations according to one embodiment.

Fig. 3 illustrates a light model system according to one embodiment.

Fig. 4 illustrates a light model system according to another embodiment.

Fig. 5 shows a system according to another embodiment.

Fig. 6 illustrates a computer system according to one embodiment.

Detailed Description

The present disclosure relates to systems, methods, and computer-readable media for displaying graphical elements that exhibit three-dimensional (3D) behavior. In general, techniques are disclosed for displaying graphical elements in a manner that simulates full 3D visibility, including parallax and shading. More specifically, a lighting model of a target object may be constructed from a plurality of captured images, each having a known spatial relationship to the target 3D object. For example, in one embodiment, this may be accomplished using Polynomial Texture Mapping (PTM) with spherical or hemispherical harmonics. Relatively few base images may be identified using PTM techniques. When the target object is to be displayed, orientation information may be used to generate a combination of the base images in order to simulate a 3D representation of the target object; in some embodiments, the combination includes shading and parallax distortion. The orientation information may be obtained from, for example, an accelerometer or a light sensor.

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed concepts. As part of this description, some of the figures of the present disclosure represent structures and devices in block diagram form in order to avoid obscuring the novel aspects of the disclosed concepts. In the interest of clarity, not all features of an actual implementation are described. Moreover, the language used in the present disclosure has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter; resort to the claims is necessary to determine such inventive subject matter. Reference in the present disclosure to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter, and multiple references to "one embodiment" or "an embodiment" should not be understood as necessarily all referring to the same embodiment.

It will be appreciated that in the development of any such actual implementation, as in any software and/or hardware development project, numerous decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. It will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure for designing and implementing a particular implementation of a graphics processing system.

Referring to fig. 1, techniques in accordance with the present disclosure may be thought of as consisting of a model development phase 100 and a model deployment phase 105. The model development phase 100 may include capturing baseline images (block 110) and developing a model from those images (block 115). In one embodiment, the model 120 may include multiple images of the target object captured at different viewing positions and/or illumination angles. In another embodiment, model 120 may include a PTM model developed from the captured baseline images. In yet another embodiment, model 120 may comprise a combination of captured images and one or more PTM models. Once generated, the model 120 may be deployed to the electronic device 125. As shown, according to one embodiment the electronic device 125 may include a communication interface 130, one or more processors 135, graphics hardware 140, a display element or unit 145, device sensors 150, memory 155, an image capture system 160, and an audio system 165, all of which may be coupled via a system bus or backplane 170 comprising one or more continuous (as shown) or discontinuous communication links.

The communication interface 130 may be used to connect the electronic device 125 to one or more networks. Exemplary networks include, but are not limited to, local networks such as USB or Bluetooth networks, cellular networks, organizational local area networks, and wide area networks such as the Internet. Communication interface 130 may use any suitable technology (e.g., wired or wireless technologies) and protocol (e.g., Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol (HTTP), Post Office Protocol (POP), File Transfer Protocol (FTP), and Internet Message Access Protocol (IMAP)). The one or more processors 135 may be a system-on-chip, such as those found in mobile devices, and may include one or more dedicated Graphics Processing Units (GPUs). The processors 135 may be based on a Reduced Instruction Set Computer (RISC) architecture, a Complex Instruction Set Computer (CISC) architecture, or any other suitable architecture, and each may include one or more processing cores. The graphics hardware 140 may be special-purpose computing hardware for processing graphics and/or assisting the one or more processors 135 in performing computing tasks. In one embodiment, graphics hardware 140 may include one or more programmable GPUs, and each such unit may include one or more processing cores. Display 145 may use any type of display technology, such as Light-Emitting Diode (LED) technology, and may provide input and output means suitable for device 125. By way of example, the device sensors 150 may include a 3D depth sensor, a proximity sensor, an ambient light sensor, an accelerometer, and/or a gyroscope. Memory 155 represents both volatile and non-volatile memory. Volatile memory may include one or more different types of media (typically solid-state) used by the processors 135 and graphics hardware 140. For example, the memory 155 may include a memory cache, Read-Only Memory (ROM), and/or Random Access Memory (RAM). Memory 155 may also include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable disks) and tape, optical media such as CD-ROMs and Digital Video Disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 155 may be used to retain media (e.g., audio, image, and video files), preference information, device profile information, computer program instructions organized into one or more modules and written in any desired computer programming language, and any other suitable data. Such computer program code, when executed by the one or more processors 135 and/or graphics hardware 140, may implement one or more of the techniques or features described herein. Image capture system 160 may capture still and video images and may include one or more image sensors and one or more lens assemblies. The output from image capture system 160 may be processed, at least in part, by one or more video codecs, the one or more processors 135, graphics hardware 140, and/or a dedicated image processing unit incorporated within the image capture system 160. Captured images may be stored in memory 155. The electronic device 125 may have, for example, two major surfaces. The first or front surface may coincide with the display unit 145. The second or rear surface may be the opposite surface. In some embodiments, image capture system 160 may include one or more cameras oriented outward from the first surface and one or more cameras oriented outward from the second surface. The electronic device 125 may be, for example, a mobile phone, a personal media device, a portable camera, or a tablet, laptop, or desktop computer system.

The baseline image capture according to block 110 may include one or two stages. Referring to fig. 2A, stage-1 200 may include a 3D target object 205 illuminated by a light source 210. As camera 215 moves along path 225 from location 220A to location 220B to location 220C, a relatively large number of images may be obtained to produce stage-1 image corpus 230. For example, in moving from position 220A to position 220C, a total of 180 images may be captured (e.g., one image per 1° of motion). In another embodiment, the camera 215 may move completely around the target object 205; in this embodiment, a total of 360 images may be captured (e.g., one image per 1° of movement). Referring to fig. 2B, optional stage-2 235 includes the 3D target object 205 illuminated by light source 210, which is movable along path 245 from position 240A to position 240B to position 240C while camera 215 remains in a single position, capturing a relatively large number of images (e.g., 150) to generate stage-2 image corpus 250. The exact number of images required to generate image corpus 230 and image corpus 250 may depend on the desired fidelity of the resulting model: the more accurate the model, the more images will generally be required. In some implementations, the captured images in corpora 230 and 250 record shadow, highlight, and parallax information.
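Purely as an illustration of how such corpora might be indexed, the short Python sketch below enumerates a stage-1 schedule of 180 one-degree captures and an optional stage-2 schedule of 150 light positions; the file-naming scheme and the evenly spaced stage-2 positions are assumptions, not part of this disclosure.

```python
# Stage-1: one image per 1 degree of camera motion along path 225
# (positions 220A through 220C), yielding a 180-image corpus 230.
stage1_angles = list(range(0, 180))
stage1_corpus = {angle: f"stage1_{angle:03d}.png" for angle in stage1_angles}

# Optional stage-2: the camera stays fixed while light source 210 moves
# along path 245; here 150 evenly spaced positions are assumed for corpus 250.
stage2_positions = [i * (180.0 / 150.0) for i in range(150)]
stage2_corpus = {pos: f"stage2_{i:03d}.png" for i, pos in enumerate(stage2_positions)}

print(len(stage1_corpus), len(stage2_corpus))  # 180 150
```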

Referring to fig. 3, in one embodiment the image corpus 230 may be organized so that each (or at least some) image is associated with its respective viewing angle, as shown in table 300. In this type of implementation, the mapping between the viewing (capture) angle and the target object 205 may itself be considered a model (e.g., model 120). During operation (e.g., on the electronic device 125), the viewing angle may be determined using the device sensors 150. Once determined, the corresponding image may be retrieved from memory 155 and displayed using display element 145. In one embodiment, if the viewing angle determined from the sensor output falls between two viewing angles captured according to fig. 2A, the two images on either "side" of the sensor-indicated viewing angle may be combined, for example as a weighted sum. (As used here, "either side" refers to the captured image whose viewing angle is closest to, but lower than, the sensor-indicated viewing angle and the captured image whose viewing angle is closest to, but higher than, the sensor-indicated viewing angle.) One of ordinary skill in the art will recognize that the image corpus 230 may be retained in a structure other than the single table shown in fig. 3. For example, multiple tables, a relational database, a B-tree, or another data structure may be used to ease data comparison and retrieval operations.
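A minimal sketch of this angle-indexed lookup follows, assuming the corpus has been loaded as a Python dict mapping capture angle (in degrees) to an image array; the linear weighting of the two bracketing images is one possible choice of weighted sum.

```python
import numpy as np

def image_for_angle(corpus: dict, angle: float) -> np.ndarray:
    """Retrieve (or blend) an image from an angle-indexed corpus (table 300 style).

    corpus: dict mapping capture angle in degrees -> (H, W, 3) float image.
    angle:  viewing angle derived from the device sensors.
    """
    angles = sorted(corpus)
    # Clamp to the captured range.
    if angle <= angles[0]:
        return corpus[angles[0]]
    if angle >= angles[-1]:
        return corpus[angles[-1]]
    # Captured angles on "either side" of the sensor-indicated angle.
    lo = max(a for a in angles if a <= angle)
    hi = min(a for a in angles if a >= angle)
    if lo == hi:
        return corpus[lo]
    # Weighted sum of the two bracketing images.
    w = (angle - lo) / (hi - lo)
    return (1.0 - w) * corpus[lo] + w * corpus[hi]
```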

Referring to fig. 4, model generation according to block 115 may apply PTM operation 400 independently to each image in image corpus 250 to produce PTM model 405. During runtime (e.g., on the electronic device 125), the device sensors 150 may be used to determine an illumination angle relative to the target object 205 (i.e., as displayed on the electronic device 125). Once determined, the corresponding location (e.g., represented by x input 415 and y input 420 and, optionally, a z input, not shown) may be supplied to PTM model 405 and used to generate output image 410. In other embodiments, the light angle may be represented in another coordinate system, such as yaw-pitch-roll. In one embodiment, PTM operation 400 may employ spherical harmonics (SH). In another embodiment, PTM operation 400 may employ hemispherical harmonics (HSH). In still other embodiments, different basis functions may be used, such as Zernike polynomials, spherical wavelets, and Makhotkin hemispherical harmonics. The exact functional relationship or polynomial chosen may depend on the intended operating environment, the desired fidelity of the resulting light model, and the amount of memory required for the model.
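The disclosure does not specify how PTM operation 400 computes its coefficients; as one plausible sketch, the per-pixel biquadratic polynomial of Equation 1 below can be fitted to corpus 250 by ordinary least squares. The grayscale (luminance) input and the numpy-based solver are assumptions for illustration, not the actual fitting procedure of operation 400.

```python
import numpy as np

def fit_ptm(light_positions: np.ndarray, images: np.ndarray) -> np.ndarray:
    """Fit per-pixel PTM coefficients a0..a5 by least squares.

    light_positions: (N, 2) array of (x, y) light coordinates for corpus 250.
    images:          (N, H, W) array of luminance frames from the corpus.
    Returns a (6, H, W) array of coefficient planes, one set per pixel.
    """
    x, y = light_positions[:, 0], light_positions[:, 1]
    # Design matrix matching Equation 1: [x^2, y^2, x*y, x, y, 1].
    basis = np.stack([x * x, y * y, x * y, x, y, np.ones_like(x)], axis=1)
    n, h, w = images.shape
    samples = images.reshape(n, h * w)
    coeffs, *_ = np.linalg.lstsq(basis, samples, rcond=None)  # (6, H*W)
    return coeffs.reshape(6, h, w)
```

With, for example, a 150-image corpus 250, the stored model reduces to six coefficient planes (plus a chroma image, as discussed below), regardless of how many light positions were captured.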

One feature of PTM operation 400 is that the model 405 it produces may use significantly fewer images than are in image corpus 250. Image corpus 250 may include a large number of high-resolution color images (e.g., each corpus may include 50 to 400 images). In contrast, PTM model 405 may require only a small number of "images" (the PTM coefficient images) from which all images within the scope of the model can be generated. For example, in one embodiment, PTM model 405 may use spherical harmonics and derive, for each pixel, a polynomial of the form:

pᵢ = a₀x² + a₁y² + a₂xy + a₃x + a₄y + a₅        (Equation 1)

where pᵢ represents the model output for pixel i at a given illumination location (x, y), and a₀ through a₅ are the model coefficients whose values are found or returned by PTM operation 400. In general, the model coefficients a₀ through a₅ may be different for each pixel of the image 410 generated from the x input 415 and the y input 420.

In practice, the pᵢ defined by Equation 1 represents only the intensity or brightness of the ith pixel in output image 410. To incorporate color, a chrominance image [C] may be applied so that

[P] = [C][p],        (Equation 2)

where [p] represents the per-pixel luminance values given by Equation 1 and [C] represents the color value associated with each pixel of the output image [P] (e.g., output image 410). In one embodiment, each pixel value in [C] may be the average color value of all corresponding pixels in image corpus 250. In another embodiment, each pixel value in [C] may be the median of all corresponding pixels in image corpus 250. In yet another embodiment, each pixel value in chroma image [C] may be a weighted average of all corresponding color values in image corpus 250. In still another embodiment, the chrominance values from image corpus 250 may be combined in any manner deemed useful for a particular implementation (e.g., non-linearly).
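A minimal sketch of evaluating the model at display time, applying Equation 1 per pixel and Equation 2 for color; it assumes the coefficient planes come from a fitting step like the one sketched earlier and that [C] is taken as the per-pixel average color of corpus 250 (one of the options above).

```python
import numpy as np

def evaluate_ptm(coeffs: np.ndarray, x: float, y: float,
                 chroma: np.ndarray) -> np.ndarray:
    """Generate output image 410 for a light/orientation input (x, y).

    coeffs: (6, H, W) coefficient planes a0..a5.
    chroma: (H, W, 3) chrominance image [C].
    """
    a0, a1, a2, a3, a4, a5 = coeffs
    # Equation 1: per-pixel luminance p_i.
    luminance = a0 * x * x + a1 * y * y + a2 * x * y + a3 * x + a4 * y + a5
    # Equation 2: [P] = [C][p], applied as a per-pixel product.
    return np.clip(luminance[..., None] * chroma, 0.0, 1.0)

def average_chroma(color_images: np.ndarray) -> np.ndarray:
    """One choice for [C]: the mean color of all corresponding corpus pixels."""
    return color_images.mean(axis=0)  # (N, H, W, 3) -> (H, W, 3)
```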

Model deployment phase 105 according to fig. 1 may begin upon transmission of at least one generated model (e.g., model 300 and/or model 405) to memory 155 of electronic device 125. Once a model is installed on device 125, the target object 205 may be displayed on display unit 145. Referring to fig. 5, a system 500 according to another embodiment may employ device sensors 150 to supply the inputs (e.g., 415 and 420) to models 300 and 405. In one embodiment, the device sensors 150 may include an ambient light and/or color sensor used to identify the location and color temperature of a light source. In another embodiment, device sensors 150 include a gyroscope and/or an accelerometer so that the orientation of device 125 may be determined. If both models 300 and 405 are used, their respective output images may be combined 505 to generate output image 510. In one embodiment, the combining operation 505 may be a simple merge operation. In another embodiment, the combining operation 505 may be a weighted combination of the outputs of each model. In yet another embodiment, combining operation 505 may simply select one model's output based on sensor input and/or user input.
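One possible form of combiner 505 is sketched below as a weighted sum; setting the weight to 0 or 1 degenerates to simply selecting one model's output, per the last alternative above.

```python
import numpy as np

def combine_outputs(img_300: np.ndarray, img_405: np.ndarray,
                    weight: float = 0.5) -> np.ndarray:
    """Combine the output images of models 300 and 405 (combiner 505).

    weight is the contribution of the model-300 image; (1 - weight)
    goes to the model-405 image.
    """
    return np.clip(weight * img_300 + (1.0 - weight) * img_405, 0.0, 1.0)
```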

Consider, by way of further example, the case in which model 405 is operational and the device sensors 150 indicate that device 125 is tilted so that the observer looks down at the target object 205 at an angle of about 45°. If a person holds an object in their hand and looks directly down at its top, they expect to see the object's top surface. When they move their head to a 45° angle, they expect to see less of the object's top surface and more of its side surface or surfaces. In practice, a sensor input indicative of a 45° angle (in x and y coordinates; see fig. 5) may be supplied to PTM model 405, and output image 510 would be a combination of the PTM coefficient images, modified to provide color.
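As an illustration of how such a sensor reading could be turned into model inputs, the sketch below normalizes an accelerometer's gravity vector and uses its in-plane components as the x and y inputs; the coordinate convention and the example reading are assumptions made for this sketch.

```python
import numpy as np

def tilt_inputs(gravity) -> tuple:
    """Map an accelerometer gravity vector to (x, y) light-model inputs.

    gravity: (gx, gy, gz) in device coordinates; the normalized in-plane
    components stand in for x input 415 and y input 420.
    """
    g = np.asarray(gravity, dtype=float)
    g = g / np.linalg.norm(g)
    return float(g[0]), float(g[1])

# A device tilted so the observer looks down at roughly 45 degrees might
# report a gravity vector like (0.0, 0.707, -0.707), giving model inputs
# near (0.0, 0.707).
print(tilt_inputs((0.0, 0.707, -0.707)))
```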

Image output according to the present disclosure may include shading, highlights, and parallax to the extent this information is captured in the generated image corpora. In another embodiment, if shadow information is not included in the image data used to generate the model, a synthetic shadow may be generated (e.g., by image processing) using the direction of tilt and/or the direction (relative to device 125) of an identified light source. An embodiment employing this technique may use the sensor input to generate a first output image from the associated light model (e.g., output image 410 or 510). That image may then be used to generate a synthetic shadow, and the synthesized shadow can be applied to the first output image to generate a final output image that may be displayed, for example, on the display unit 145. In still another embodiment, the electronic device 125 may include a camera unit facing outward from the display 145. Images captured by that camera may be analyzed (separately from, or in combination with, the device sensors 150) to determine the device orientation and/or the inputs to models 300 and 405. The resulting output image (e.g., image 510) may include shadows captured during model generation or synthesized through image analysis. In one embodiment, a captured image may include a face, and various aspects of the detected face (e.g., the positions of the eyes and/or mouth and/or nose) may be used to determine the inputs to models 300 and/or 405.
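A minimal sketch of one way a synthetic shadow could be added by image processing, assuming an object-footprint mask is available (e.g., from the model or from segmentation); the offset-and-darken scheme is an assumption for illustration, not a technique specified by this disclosure.

```python
import numpy as np

def add_synthetic_shadow(image: np.ndarray, mask: np.ndarray,
                         tilt_xy, max_offset: int = 20,
                         darkness: float = 0.5) -> np.ndarray:
    """Darken a tilt-dependent drop shadow into an output image.

    image:   (H, W, 3) image from the light model (e.g., image 410 or 510).
    mask:    (H, W) object footprint in [0, 1].
    tilt_xy: (x, y) orientation inputs; the shadow is offset opposite the tilt.
    """
    dx = int(round(-tilt_xy[0] * max_offset))
    dy = int(round(-tilt_xy[1] * max_offset))
    shadow = np.roll(mask, shift=(dy, dx), axis=(0, 1))
    # Only darken shadowed pixels that the object itself does not cover.
    shade = 1.0 - darkness * np.clip(shadow - mask, 0.0, 1.0)
    return np.clip(image * shade[..., None], 0.0, 1.0)
```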

Referring to fig. 6, in addition to being deployed on electronic device 125, the disclosed techniques may be developed and deployed on a representative computer system 600 (e.g., a general-purpose computer system such as a desktop, laptop, notebook, or tablet system). Computer system 600 may include one or more processors 605, memory 610 (610A and 610B), one or more storage devices 615, graphics hardware 620, device sensors 625 (e.g., 3D depth sensors, proximity sensors, ambient light sensors, colored light sensors, accelerometers, and/or gyroscopes), a communications interface 630, a user interface adapter 635, and a display adapter 640, all of which may be coupled via a system bus or backplane 645. Processors 605, memory 610 (including storage 615), graphics hardware 620, device sensors 625, communications interface 630, and system bus or backplane 645 provide the same or similar functionality as the similarly identified elements in fig. 1 and therefore will not be described further. The user interface adapter 635 may be used to connect a keyboard 650, microphone 655, pointing device 660, speaker 665, and other user interface devices such as a touchpad and/or touchscreen (not shown). Display adapter 640 may be used to connect one or more display units 670 (similar in function to display unit 145), which may provide touch input capability. System 600 may be used to develop models (e.g., models 120, 300, and 405) consistent with the present disclosure. Thereafter, a developed model may be deployed to computer system 600 or electronic device 125. (In another embodiment, electronic device 125 may provide sufficient computational power for model development, so that use of a general-purpose computer system 600 is not required.)

It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the claimed disclosure, and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with one another). For example, models 120, 300, and 405 may be developed and deployed separately or together. In another embodiment, image corpora 230 and 250 may be combined and used to generate a single light model. In one or more embodiments, one or more of the disclosed steps may be omitted, repeated, and/or performed in a different order than described herein. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein".
